Section 107 of the Persian Gulf War Veterans’ Benefits Act required VA to conduct a study to evaluate the health status of spouses and children of Persian Gulf War veterans. Under the study, VA was directed to provide diagnostic testing and medical examinations to any individual who (1) is the spouse or child of a veteran listed in VA’s Persian Gulf War Veterans’ Registry who is suffering from an illness or disorder; (2) is apparently suffering from, or may have suffered from, an illness or disorder (including a birth defect, miscarriage, or stillbirth) that cannot be disassociated from the veteran’s service in the Southwest Asia theater of operations; and (3) in the case of a spouse, has granted VA permission to include relevant medical data in the Registry. These tests and examinations were to be used to determine the nature and extent of the association, if any, between the illness or disorder of the spouse or child and the illness of the veteran. The Congress authorized VA to use contractors to provide the medical examinations and specified that the amount spent for the program may not exceed $2 million. The entire $2 million was designated for examinations; the program administrative costs are to be covered by each coordinating medical center’s operating budget. The act does not authorize funding for the treatment of family members of Persian Gulf veterans or reimbursement of participants for travel, lodging, or lost wages. The act stipulated that the program was to be carried out from November 1, 1994, through September 30, 1996. Program implementation was delayed until April 1996 because of a disagreement between VA and members of the Senate over the appropriate approach for establishing the program. VA proposed doing research, via the National Health Survey of Persian Gulf Veterans, which was designed to gather information on the incidence and nature of health problems occurring in Persian Gulf veterans and their families. 
The survey includes the examination of randomly selected Persian Gulf spouses and children as well as a control group, for comparison, of nondeployed Persian Gulf-era veterans and their families. (See app. II for a description of the National Health Survey of Persian Gulf Veterans.) However, in a November 1995 letter, the Ranking Minority Member of the Senate Committee on Veterans’ Affairs and the Senate Minority Leader notified VA that its approach would not meet the mandate expressed in section 107 of the act. The letter stated that VA was expected to provide spouses and children the opportunity to seek medical examinations for conditions that family members believe are related to Persian Gulf service and to enter the examination information in the Persian Gulf Registry. The letter further stated that, while the survey was viewed as an important epidemiological study for which the Congress expressed approval by enactment of section 109 of the act, it would not meet the mandate of section 107 of the act. In response to these concerns, the Secretary of Veterans Affairs indicated in a February 1996 letter to the Ranking Minority Member of the Senate Committee on Veterans’ Affairs that the Veterans Health Administration (VHA) would proceed immediately to provide voluntary examinations to Persian Gulf family members. VA initiated the Persian Gulf Spouse and Children Examination Program in April 1996 when it began accepting requests for clinical examinations. On October 9, 1996, the Veterans’ Health Care Eligibility Reform Act of 1996 (P.L. 104-262) extended the program through December 31, 1998. The program is administered through VHA’s Office of Public Health and Environmental Hazards and is implemented through coordinating VA medical centers established in each of VA’s 22 Veterans Integrated Service Networks (VISN). The program is offered at 36 of VA’s 172 medical centers. (See fig. 1 for a map showing locations of the VISNs and the 36 coordinating medical centers). 
Coordinating medical centers are responsible for establishing contracts, usually with their university-affiliated medical schools, for the examination of Persian Gulf spouses and children using standard medical protocols and guidelines developed by VA. The Persian Gulf Spouse and Children Examination Program has faced implementation problems that, to this point, have limited the program’s effectiveness. To inform potential participants about the program, VA headquarters initiated national, broad-based outreach efforts with coordinating medical centers providing for local outreach. As of January 1998, VA coordinating medical centers have received 2,802 requests for examinations, but only 31 percent (872) of requested examinations have been completed. Factors contributing to the low completion rate include the lengthy and cumbersome process for scheduling examinations, which we found takes an average of 15 weeks from the time applicants first apply to the time examinations are completed. Also, examination sites are not easily accessible for some participants because only 36 of VA’s 172 medical centers participate in the program, and the law does not allow for VA to reimburse participants for costs such as travel and lodging. In addition, as of January 1998, no examinations had been conducted in 3 of the 16 coordinating medical centers we contacted because those centers had not negotiated contracts with affiliated medical schools or other providers. Problems also existed with obtaining additional diagnostic testing in some locations. VA headquarters initiated national outreach through notices about the program in the Persian Gulf Review (a quarterly newsletter sent to about 67,000 Persian Gulf veterans in the Registry), public service television announcements, nationally broadcast television interviews with VA officials about Persian Gulf issues, and announcements on the Internet. 
The Office of Public Affairs, through its regional structure, provided coordinating VA medical centers with press releases about the program. All 172 VA medical centers received basic information about the Persian Gulf Spouse and Children Examination Program. Also, one nationwide teleconference, available to all VA medical centers, was held at the start of the program to encourage centers to inform veterans about the availability of free examinations for the family members of Persian Gulf veterans. According to a VA program official, local outreach was the responsibility of the 36 coordinating VA medical centers. Outreach efforts at the medical centers we contacted ranged from direct mailings to veterans on the Persian Gulf Registry to relying only on national outreach efforts. For example, the Tampa and Seattle medical centers contacted all veterans who had received Persian Gulf Registry examinations at their centers by letter or telephone. Some medical centers sent brochures to Persian Gulf veterans, and Persian Gulf coordinators visited reserve units and service organizations and informed them about the program. However, the Denver and Salt Lake City medical centers relied on national outreach efforts to provide program information. Without information on how participants learned of the program or knowledge of the potential universe of Persian Gulf spouses and children who believe their illnesses or disorders may be related to a family member’s service in the Gulf, it is difficult to assess the effectiveness of national or local outreach efforts. However, VA estimated that, on the basis of the $2 million authorized for the program, it could provide about 4,500 examinations, based on an average cost of $400 per examination, and have a reserve of $200,000 to cover the cost of additional diagnostic tests. Examinations were offered on a first come, first served basis. 
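VA's capacity estimate follows from simple arithmetic on the figures reported here; the short sketch below merely restates that calculation and is illustrative only:

```python
# Figures from the report: $2 million authorized, a $200,000 reserve
# for additional diagnostic tests, and a $400 average examination cost.
authorized = 2_000_000
diagnostic_reserve = 200_000
avg_exam_cost = 400

# Funds available for examinations, and the resulting capacity estimate.
available = authorized - diagnostic_reserve
estimated_exams = available // avg_exam_cost
print(estimated_exams)  # -> 4500
```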
As of January 1998, coordinating medical centers reported they had received 2,802 requests for examinations, but only about 7 percent ($148,916) of the $2 million allocated for the program had been expended. Of the examinations requested, 872 (31 percent) had been completed. Forty-one percent of family members who requested examinations did not report for appointments, refused examinations, or had not yet responded to requests to schedule examinations, as shown in table 1. Several factors contribute to the low completion rate. For example, obtaining an examination requires several steps in a lengthy and cumbersome process. Individuals cannot contact a VA medical center directly to request an examination. Instead, requests are made by calling the toll-free Persian Gulf War Veterans’ Helpline. Next, Helpline staff forward requests to VA headquarters, which checks the VA and DOD Persian Gulf registries or the DOD Persian Gulf Deployment Listing to verify that the veteran served in the Persian Gulf. VA headquarters then refers requests to one of the 36 coordinating VA medical centers to further establish eligibility. The medical center contacts the individual requesting the examination and asks him or her to provide a marriage certificate (for a spouse) or a birth certificate (for a child). Finally, the medical center sends the validated request to the affiliated medical school or provider, whose representative schedules an examination appointment with the requester. Our analysis showed that the process, from requesting an examination to completing it, takes an average of over 15 weeks. According to a VA program official, the process for scheduling examinations was established as an efficient way to control, verify, and forward requests to the nearest coordinating medical center. 
Because the Persian Gulf Helpline already existed and operated 24 hours a day, it offered a means to monitor the number of requests received nationally. Also, Helpline staff were knowledgeable about a range of Persian Gulf issues and services available for veterans. Verification of veterans’ service in the Persian Gulf is centrally administered because VA headquarters staff have access to the VA and DOD Persian Gulf registries and the Persian Gulf Deployment Listing. Verification of the child or spousal relationship is assigned to medical center staff who also provide a local VA contact and forward verified requests to examination providers to schedule appointments. Another major deterrent to obtaining examinations is the distance to examination sites or the accessibility of sites. VA implemented the program through 36 of its 172 medical centers. VA’s Office of Public Health and Environmental Hazards issued a directive through the Chief Network VISN Office for each network to identify at least one medical center to participate in the program. All VISNs identified at least one coordinating medical center, 12 networks established two coordinating centers, and one VISN established three coordinating medical centers. A VA official stated that medical center decisions to participate in the program were based on the demographics of the Persian Gulf veteran population and the medical centers’ ability to obtain contracts with their affiliated medical schools. Our analysis of the median distance between requesters’ residences and the designated coordinating medical center showed that 48 requesters from Arizona were required to travel a median distance of 326 miles to the Albuquerque medical center to receive an examination. Our analysis also showed that 44 requesters from North Dakota, South Dakota, and Minnesota traveled a median distance of 219 miles for an examination in Minneapolis. 
According to a Washington, D.C., medical center official, family members considered the Georgetown University site to be inconvenient because it is not easily accessible by public transportation. Additional deterrents to obtaining examinations are lost income when taking time off from work and lack of reimbursement for travel and overnight lodging expenses. According to VA headquarters officials, enabling legislation would be necessary for VA to pay these expenses. VA decided to contract with affiliated medical schools where possible because established working relationships facilitated starting a program that had already been delayed. Program officials told us they were being flexible in also allowing medical centers to enter into agreements with managed care organizations or local physicians to examine family members in the absence of contracts with an affiliated medical school. However, no examinations were provided by 3 of the 16 coordinating medical centers we contacted—Augusta, Dayton, and Philadelphia—because they had not entered into agreements with their affiliated medical schools or other health care providers. Because Philadelphia was the only medical center participating in the program in VISN 4, no examinations had been given, as of January 1998, to the 88 family members who had requested examinations in the network. (See app. III for a table of coordinating medical centers and their affiliated medical schools.) VA headquarters officials were not aware that two of the three centers had not provided any examinations until we inquired about the program’s status in January 1998. Because of turnover in key medical center positions, including program coordinators, VA officials indicated that they were unaware of the status of some requests for examinations. Also, VA did not require monthly activity reports from coordinating medical centers until October 1997—1-1/2 years after the start of the program. 
In addition, the headquarters program office lacks the capacity to validate information reported by medical center staff and has no line authority over field units that implement the program. In the December 1997 activity report, six of the coordinating medical centers had not submitted their information to headquarters. As a result, VA headquarters did not know the status of the program in terms of the number of applicants contacted, number of examinations given, and the number of coordinating medical centers that had active programs. After our inquiries, coordinating medical centers without active contracts with their affiliated medical schools were attempting to establish contracts with managed care organizations or private physicians, or were providing examinations in-house. For example, the Minneapolis VA medical center plans to provide examinations to women by using VA medical center staff from the Women’s Clinic and to children by contracting with a local pediatrician. Additionally, the San Diego medical center contracted with a doctor with pediatric experience to conduct all of its examinations at a VA outpatient clinic. Women applicants receive additional tests from a VA nurse practitioner. Medical schools affiliated with the Denver and Minneapolis VA medical centers did not renew their contracts with VA because the volume of examinations was lower than expected and they were not paid in a timely manner. Other affiliated medical schools that still have contracts made similar complaints. For example, the Denver medical center told its affiliated medical school to anticipate conducting 200 examinations. However, only 54 requests for examinations were received, and the affiliated medical school ultimately performed only 16 examinations. In January 1998, the medical school received payment for 10 completed examinations that had been initially submitted for payment 9 months earlier in April 1997. 
We found that payment delays are caused, in part, because code sheets, which capture medical information for entry into the registry database, are rejected by VA’s Austin, Texas, processing center when they are not properly completed. VA headquarters’ guidance for establishing contracts stipulates that payment should be made only after satisfactory completion and submission to VA of all forms and code sheets. Staff from VA medical centers and affiliated medical schools complained that code sheets were difficult and time consuming to complete and lacked clear instructions. In addition, VA attempted to enter data into the registry using scannable code sheets. However, at one point, the program experienced a 100-percent rejection rate for code sheets because of problems with the scanning system. As a result, VA staff had to spend additional time correcting rejected code sheets. VA has since resorted to manually inputting the data. According to VA medical center officials, additional reasons for delayed payment include affiliated medical schools completing paperwork incorrectly, submitting untimely bills, and billing the wrong party. As of January 1998, of the 872 examinations completed, 541 examinations had been approved for payment. To conduct the examinations for spouses and children, VA developed a protocol that defines the standard tests and medical information collected during examinations. VA officials characterized the examination as a basic but complete physical. Adults receive diagnostic laboratory tests including blood count, blood chemistries, urinalysis, and, for women, a Pap smear. Children receive a physical examination and a medical history, including details on the development of symptoms. The children’s protocol does not require routine diagnostic testing. Examination results are conveyed to family members with a form letter from the examining physician. 
If physicians determine that a referral to a medical specialist for additional diagnostic testing would be helpful to understanding a patient’s symptoms, VA headquarters must give written approval if total examination and additional diagnostic testing costs exceed $400. At the locations we visited, examination costs ranged from $140 to $473. VA headquarters officials told us they approved all requests received for referrals to medical specialists—about 20. But officials at the Houston medical center said that although their examining physician requested only two referrals, she wanted to refer about 20 percent of those examined (47 patients) for additional diagnostic tests. However, these officials did not ask for additional referrals because they believed resources were constrained and the approval process would take additional time and require participants to make another trip to the medical school. On the other hand, the medical school affiliated with the Minneapolis medical center performed additional diagnostic tests without requesting approval. This strained the contractual relationship with the medical center because the medical school was not reimbursed for these additional tests. After more than 1-1/2 years of operation, VA has yet to fully implement the program to provide medical examinations to spouses and children of Persian Gulf veterans. Only 872 of the 2,802 requested examinations have been completed as of January 1998. Although a program of clinical examinations may not resolve issues related to whether illnesses among Persian Gulf family members are related to illnesses of veterans, the clinical examination approach provides Persian Gulf family members an opportunity to visit with a physician and to receive a free medical examination. Standardized examinations also give VA a health surveillance tool for cataloging prominent symptoms among Persian Gulf family members. 
The Persian Gulf Spouse and Children Examination Program is scheduled to expire in December 1998. At the current rate of examinations, it is not likely that significant numbers of additional examinations will be completed by that date. If the Congress gives Persian Gulf family members the opportunity to be examined beyond December 1998, VA will need to seek ways to reduce barriers to participation, ensure that the necessary health care providers are available to provide examinations, and improve its capacity to monitor program implementation. If the program is extended, we recommend that the Secretary-designate of Veterans Affairs direct the Under Secretary for Health to simplify the process for requesting and scheduling examinations; offer examinations in more locations and seek approval to reimburse participants who must travel long distances to receive examinations; and enhance the capacity of the Office of Public Health and Environmental Hazards to monitor program implementation by field personnel. We provided a draft of this report to VA for comment, but VA did not provide comments in time to be included in this report. However, VA provided technical comments on March 19, 1998, which we incorporated where appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time we will send copies of this report to the Secretary-designate of Veterans Affairs and interested congressional committees. We will also make copies available to others upon request. Please contact me on (202) 512-7101 if you or your staff have any questions concerning this report. Major contributors included George Poindexter, Brian Eddington, Jean Harker, Mike Gorin, and Alan Wernz. 
We obtained information for our review by visiting 8 of the 36 coordinating VA medical centers for the Persian Gulf Spouse and Children Examination Program in Chicago, Denver, East Orange, Houston, Minneapolis, Salt Lake City, Seattle, and Washington, D.C. We also contacted by telephone eight additional coordinating medical centers—Augusta, Birmingham, Dayton, Honolulu, Palo Alto, Philadelphia, San Diego, and Tampa. We selected these sites on the basis of their geographic mix, volume of examinations, and recommendations from VA and the Senate Committee on Veterans’ Affairs. In some instances, medical school or clinic representatives were present during our visits. We telephoned some medical school representatives who were not present during site visits to obtain their views on the program and its implementation. Because of time constraints, we did not contact individual veterans or their family members. We interviewed officials at VA headquarters and the VA payment center located in Denver and also contacted the Office of Public Affairs concerning its outreach efforts. We reviewed reports on the status of contractual agreements, the number of examinations scheduled and completed, and the amount of funds disbursed for examinations. We analyzed data from VA’s Austin data center (all 321 completed code sheets) and corresponding data from the Persian Gulf War Veterans’ Helpline (requests for examinations that included the date the veteran or family member called) to determine the average length of time required to schedule and obtain an examination. We also analyzed data from the Persian Gulf Helpline to determine the distance to the nearest coordinating medical center for selected areas. We did not verify the accuracy of data received from either the Austin data center or the Persian Gulf Helpline. As agreed with your staffs, we did not evaluate the appropriateness of the survey instruments or medical evaluations used in the program. 
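The interval analysis described above can be sketched in a few lines. The request and completion dates below are hypothetical stand-ins, not data from the review; the actual analysis matched the 321 completed code sheets to Helpline call dates:

```python
from datetime import date

# Hypothetical (request_date, exam_completed_date) pairs standing in
# for matched Helpline and Austin data center records.
records = [
    (date(1997, 1, 6), date(1997, 4, 21)),
    (date(1997, 2, 3), date(1997, 6, 2)),
    (date(1997, 3, 10), date(1997, 6, 30)),
]

# Elapsed weeks from request to completed examination, then the average.
weeks = [(done - requested).days / 7 for requested, done in records]
avg_weeks = sum(weeks) / len(weeks)
print(avg_weeks)  # -> 16.0
```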
Initiated in July 1994, the National Health Survey of Persian Gulf Veterans is an epidemiological research study designed to estimate the prevalence of various symptoms and other health outcomes for Persian Gulf veterans and their families. The study is being conducted in three phases. In phase I, a questionnaire was mailed to each of 30,000 veterans (15,000 Persian Gulf veterans and 15,000 non-Persian Gulf veterans). In phase II, a sample of 8,000 nonrespondents was randomly selected for follow-up telephone calls to assess potential nonrespondent bias and to supplement the mailed survey data. In addition, during phase II, selected self-reported data collected during phase I was validated through records reviews for 2,000 veterans from each group. VA has completed the first two phases of this survey. In phase III, the same 2,000 veteran respondents and family members from each group will be invited to participate in a physical examination under a uniform comprehensive clinical examination protocol. VA is currently identifying 15 of its medical centers to examine veterans and family members over an 18-month period. The medical centers will be selected in a way that ensures a medical center will be within 3 to 4 hours driving time of the majority of the families sampled. Veterans will be examined at VA medical centers. The requested budget also permits up to half of the spouses and all of the children to be examined at affiliated medical schools. Veterans and spouses will be paid $200 per adult examination and $100 per child examination to compensate them for their time and inconvenience. Mileage or airfare, per diem, and lodging costs will be paid for families who live far enough away to require overnight stays. According to a VA official, these payments are allowable costs as part of this research project. The estimated report date for the survey is December 2000. 
Spouse: Evans Medical Foundation, Boston Medical Center; Children: Child Health Foundation of Boston
Spouse: Syracuse VA Medical Center; Children: SUNY Health Science Center
New Jersey University of Medicine and Dentistry
No affiliated medical school contract
No affiliated medical school contract
University of Puerto Rico School of Medicine
University of Tennessee Medical Group
The Wilson Group, Vanderbilt University
No affiliated medical school contract
Wayne State University Medical School
No affiliated medical school contract
University of Missouri Health Science Center
(continued)
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) implementation of the Persian Gulf Spouse and Children Examination Program, focusing on: (1) outreach efforts; (2) obstacles to family members' participation; and (3) contracting issues. GAO noted that: (1) the Persian Gulf Spouse and Children Examination Program has faced numerous implementation problems that have limited its effectiveness in providing medical examinations; (2) to inform Persian Gulf veterans and their family members about the program, VA approached outreach in two ways--with a national campaign supplemented by local efforts at coordinating VA medical centers; (3) GAO found that some medical centers made efforts to contact Persian Gulf veterans and their families, while others relied on headquarters' outreach efforts; (4) however, GAO could not assess the effectiveness of these efforts because of a lack of information on the potential number of Persian Gulf family members who believe their illnesses are related to a family member's service in the Gulf War; (5) although as of January 1998 coordinating medical centers had received 2,802 requests for examinations, VA has completed only 872; (6) forty-one percent of applicants either failed to report for appointments, refused examinations, or had not yet answered requests to schedule examinations; (7) program participants face a lengthy and cumbersome scheduling process carried out through VA offices other than the local VA medical centers; (8) GAO's analysis showed that it takes an average of over 15 weeks for a participant to get an examination; (9) in addition, because VA chose to administer the program through only 36 of its 172 medical centers, examination sites are not always easily accessible to participants; (10) three of the 16 coordinating medical centers GAO contacted have not conducted any examinations because they have not contracted with their affiliated medical schools or other providers; (11) VA 
headquarters officials were unaware that examinations had not been conducted in two of the centers because of turnover in key center positions, and because VA did not start requiring monthly activity reports, which give the cumulative status of examination requests, from coordinating medical centers until October 1997; (12) GAO found that payment delays are caused in part by contractors incorrectly completing required paperwork, which staff from VA medical centers and affiliated medical schools told GAO is time consuming to complete and lacks clear instructions; and (13) although VA reserved $200,000 of authorized funds to cover the costs of tests, medical center officials told GAO they would have requested more referrals but believed resources were limited and the approval process would require additional time and travel for participants.
Time—specifically the period that begins with the submission to FDA of a new drug application (NDA) and that ends when a final decision is made on that application (the period known as the NDA review phase of drug development)—is the focus of this report. At your request, we have assembled data on all new drug applications submitted to FDA in 1987-94 to answer three questions: Has the timeliness of the review and approval process for new drugs changed in recent years? What factors distinguish NDAs that are approved relatively quickly from those that take longer to be approved? What distinguishes NDAs that are approved from those that are not? Additionally, as you asked, we obtained the most recently available data on how long it takes for drugs to be approved in the United Kingdom and compared them with approval times in the United States. Because GAO has access to all applications, both those that have been approved and those that have not, our report is the first to present comprehensive data on review time for all NDAs submitted to FDA. The process of bringing a drug to market is lengthy and complex and begins with laboratory investigations of the drug’s potential. For drugs that seem to hold promise, preclinical animal studies are typically conducted to see how a drug affects living systems. If the animal studies are successful, the sponsoring pharmaceutical firm designs and initiates clinical studies in which the drug is given to humans. At this point, FDA becomes directly involved for the first time. Before any new drug can be tested on humans, the drug’s sponsor must submit an investigational new drug application to FDA that summarizes the preclinical work, lays out a plan for how the drug will be tested on humans, and provides assurances that appropriate measures will be taken to protect them. Unless FDA decides that the proposed study is unsafe, clinical testing may begin 31 days after this application is submitted to FDA. 
While clinical trials progress through several phases aimed at establishing safety and efficacy, the manufacturer develops the processes necessary to produce large quantities of the drug that meet the quality standards for commercial marketing. When all this has been done, the pharmaceutical firm submits an NDA that includes the information FDA needs to determine whether the drug is safe and effective for its intended use and whether the manufacturing process can ensure its quality. The first decision FDA must make is whether to accept the NDA or to refuse to file it because it does not meet minimum requirements. Once FDA has accepted an NDA, it decides whether to approve the drug on the basis of the information in the application and any supplemental information FDA has requested. FDA can approve the drug for marketing (in an “approval letter”) or it may indicate (in an “approvable letter”) that it can approve the drug if the sponsor resolves certain issues. Alternatively, FDA may withhold approval (through a “nonapprovable letter” that specifies the reasons). Throughout the process, the sponsor remains an active participant by responding to FDA’s inquiries and concerns. The sponsor has the option, moreover, of withdrawing the application at any time. For each NDA submitted between 1987 and 1994, we obtained from FDA information on the dates of its significant events between initial submission and final decision as well as the last reported status of the application as of May 1995. To ensure that the data were valid, we independently checked them against values in published reports and other sources. (The variables that we used in our analysis and the procedures that we used to validate the data can be found in appendix I.) We computed time by measuring the interval between all significant events. Results using other ways to calculate review time are compared to ours in appendix II. 
We used regression analysis to determine which factors were significantly related to review time and which were significantly related to approval. (The results of the regression analyses on time are in appendix IV, on approval in appendix V.) Some of our analyses include all the NDAs, while others focus on specific subgroups. Most notably, we restricted analyses of overall time to NDAs that had been submitted by the end of 1992 to avoid the bias introduced by including applications that have had an insufficient time to “mature.” (Appendix VI describes the implications of this decision for our results.) Because our analyses of final decisions concentrate on NDAs submitted through the end of 1992, the data we present do not address the consequences of the full implementation of the Prescription Drug User Fee Act of 1992. Our findings pertain only to FDA’s Center for Drug Evaluation and Research and do not reflect the activities of the agency’s five other centers. We focused only on the NDA review phase—the final critical step of bringing a drug to market. We did not address the lengthier process of initial exploration and clinical testing, which together with the NDA phase average more than a decade, nor did we study the phase that follows a drug’s approval, during which additional studies can be conducted and attention paid to potential adverse events associated with its widespread use in the general population. FDA received 905 NDAs in 1987-94. The total number of NDAs fell after 1987 but remained relatively stable in the ensuing years through 1994 (with the exception of the uncharacteristically small number of submissions in 1993). The number of NDAs for new molecular entities (NMEs) and priority NDAs remained relatively stable over the years. Overall, 17 percent of the NDAs were for priority drugs. (See table 1.) A large percentage of the applications were not approved. 
Only 390 of the 700 NDAs submitted through 1992 had been approved by May 16, 1995. In other words, 44 percent of the applications submitted were for drugs that FDA did not find to be safe and effective or that sponsors chose not to pursue further. NMEs were approved at a higher rate than non-NMEs (64 percent to 52 percent), and priority drugs were approved more often than standard drugs (76 percent to 52 percent). This means that whether an NDA is or is not ultimately approved is as relevant a question as how long approval takes. (See table 2.) The data in table 2 show that NDAs that are submitted by experienced sponsors and priority NDAs are more likely to be approved than standard NDAs or NDAs submitted by sponsors with little experience with the process. These results are supported by a regression analysis that shows that both the NDA’s priority and the sponsor’s experience are statistically significant predictors of outcome (see appendix I for our definition of sponsor experience and appendix V for the regression analysis). The regression analysis found that, statistically controlling for the effects of the other explanatory variables in the model, priority NDAs are four times more likely to be approved than standard NDAs and that applications submitted by the most experienced companies are three times more likely to be approved than those submitted by less experienced sponsors. Table 3 shows for 1987-92 the average time (in months) from when NDAs were first submitted to when final decisions were made for both NDAs that were approved and those that were not. The table also distinguishes between all NDAs and those that were approved in three categories: new molecular entities, priority applications, and standard applications. As can be seen from the table, the processing time for all eight categories of NDAs fell considerably (from 33 to 18 months, or 45 percent, for all NDAs, or from 33 to 19 months, or 42 percent for approved NDAs). 
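As a quick arithmetic check of the approval and non-approval percentages cited above:

```python
# Of the 700 NDAs submitted through 1992, 390 had been approved by May 16, 1995
# (figures from the report's table 2 discussion).
submitted = 700
approved = 390

approval_rate = approved / submitted      # about 0.56
not_approved_rate = 1 - approval_rate     # about 0.44, the "44 percent" above
```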
In addition, the reductions in time came for NDAs submitted throughout the period of our study. This finding is consistent with FDA’s statements that review time has decreased in recent years. Alternative presentations of the data demonstrate the same result. For example, table 4 shows that the number of months that passed before half of all submissions were approved declined from 58 months for NDAs submitted in 1987 to 33 months for 1992 submissions. Since just 56 percent of the NDAs submitted between 1987 and 1992 were approved, this measure captures the approval period for almost all the approvals that will ultimately be granted. Similarly, table 4 shows that the proportion of submitted NDAs that were approved within 2 years increased from 23 percent for NDAs submitted in 1987 to 39 percent for NDAs submitted in 1992. Closer examination of the individual NDAs shows that they differed considerably in how long it took before a final decision was made. Some NDAs were approved within a few months (the shortest was 2 months); others took years (the slowest was 96 months). The variation was similar among applications that were not approved. Some were withdrawn on the day they were submitted. The longest outstanding application was 92 months old. This considerable variation raises the question of what differentiates one NDA from the next: Do some factors predict the time it will take to reach a final decision? When we tested potential explanatory variables, we found that the priority FDA assigned to an application and the sponsor’s experience in submitting NDAs were statistically significant predictors of how long review and approval took. (See appendix IV.) 
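The table 4 measure, the number of months that pass before half of all submissions in a cohort are approved, can be sketched as follows. The cohort data and the function are illustrative assumptions, not the report's actual data.

```python
import math

def months_to_half_approved(approval_months):
    """Months until half of ALL submissions in a cohort have been approved.

    `approval_months[i]` is NDA i's approval time in months, or None if the
    application was never approved.  Returns None when fewer than half of the
    cohort's NDAs were ever approved (the measure is then undefined)."""
    needed = math.ceil(len(approval_months) / 2)   # approvals making up half
    approved = sorted(m for m in approval_months if m is not None)
    if len(approved) < needed:
        return None
    return approved[needed - 1]

# Hypothetical cohort of 10 submissions, 6 of them eventually approved.
cohort = [2, 10, 20, 33, 40, 58, None, None, None, None]
```

Note that unapproved NDAs still count toward the cohort size, which is why this measure can be computed only for cohorts in which at least half the submissions were ultimately approved.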
More specifically, controlling for the effects of the other explanatory variables in the model, our regression analysis found that priority NDA applications are approved 10 months faster than standard applications and that applications from the most experienced sponsors are approved 4 months faster than applications from less experienced sponsors. The interval between first submission and final decision indicates how long the public must wait for drugs after sponsors believe they have assembled all the evidence to support an approval decision. Alternative measures provide insight into what happens to an NDA before FDA approves it. One such measure is the extent to which FDA is “on time” in making decisions. We examined both the degree to which FDA was on time and the factors that influenced whether it made its decisions on time. The criteria for “on time” performance that we used in this analysis were established under the Prescription Drug User Fee Act of 1992. Although on-time performance may be seen as one indicator of FDA’s efficiency, it is important to note that FDA is not required to meet these criteria until 1997. Of all the decisions FDA made on the NDAs submitted between 1987 and 1993, 67 percent were on time. Simpler decisions (for example, refusals to file) were made on time more often than relatively complex decisions (for example, priority applications in which the first decision was an approval). Overall, the on-time percentage remained relatively stable, varying between a low of 62 percent for NDAs submitted in 1992 and a high of 72 percent for NDAs submitted in 1987. In sharp contrast to the decline in overall time between submission and final decision shown in table 3, this stability shows that there is little relationship between the time FDA takes to reach a final decision and whether or not it meets its deadlines for specific actions. 
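The on-time classification of a single review-cycle action can be sketched against the user fee goals described in appendix I (refusals to file within 60 days; other decisions within 6 months for priority NDAs and 12 months for standard NDAs). The function, its day-based signature, and the 30.44-day month are our own assumptions.

```python
def action_on_time(action, priority, days_elapsed):
    """Classify one review-cycle action against the user fee performance
    goals: refusals to file within 60 days; all other decisions within
    6 months (priority NDAs) or 12 months (standard NDAs), with a month
    taken as 30.44 days for this sketch."""
    if action == "refusal to file":
        return days_elapsed <= 60
    limit_months = 6 if priority else 12
    return days_elapsed <= limit_months * 30.44
```

A decision can be on time for every cycle yet still belong to an application with a long overall review, which is why the report treats the two measures as distinct.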
Another process measure of review time is based on where responsibility lies for different parts of the process—with FDA for the intervals during which it acts on an application, or with the sponsor, for the intervals during which FDA waits for the sponsor to provide additional information or to resubmit the application. Figure 2 shows how their relative times were distributed for approved NDAs submitted between 1987 and 1992. As can be seen from the figure, sponsors accounted for approximately 20 percent of the time in the NDA phase for applications that FDA approved. Importantly, the time for both sponsors and FDA diminished for NDAs submitted between 1987 and 1992. Regulatory processes similar to FDA’s have been mentioned as models for reforming FDA. The one most often mentioned is the United Kingdom’s. Proponents of FDA reform have argued that the British counterpart to the FDA, the Medicines Control Agency, performs reviews of equivalent quality and does so significantly more quickly. Comparisons between the Medicines Control Agency and FDA are difficult because the workload, approval criteria, and review procedures followed by the agency may not be exactly the same as FDA’s and because its reports cover a slightly different period than FDA’s. However, the most recent data show that overall approval times are actually somewhat longer in the United Kingdom than they are in this country. For the 12-month period ending September 30, 1994, the Medicines Control Agency reported that the median approval time for applications that were apparently equivalent to NMEs was 30 months. The average time was 24 months. The fastest approval was granted in about 4 months, the slowest in 62 months. According to FDA, the median approval time for NMEs approved in the United States in calendar year 1994 was 18 months, the average about 20 months. The fastest FDA approval took about 6 months and the slowest about 40 months. (See appendix VII for a fuller comparison.) 
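Returning to the split shown in figure 2, the bookkeeping that divides total review time into FDA time and sponsor time can be sketched as follows. The interval list and owner labels are our own illustrative representation, not FDA's records.

```python
def split_review_time(intervals):
    """Partition review time into (FDA time, sponsor time), in months.

    `intervals` is a hypothetical list of (owner, months) pairs, where owner
    is "fda" for periods when FDA is acting on the application and "sponsor"
    for periods when FDA is waiting on the sponsor.  The two parts are
    complementary and sum to total review time."""
    fda = sum(months for owner, months in intervals if owner == "fda")
    sponsor = sum(months for owner, months in intervals if owner == "sponsor")
    return fda, sponsor

def review_cycles(intervals):
    """Each period of FDA time is one review cycle."""
    return sum(1 for owner, _ in intervals if owner == "fda")

# A hypothetical approved NDA: two review cycles separated by a 6-month
# sponsor response period.
history = [("fda", 12.0), ("sponsor", 6.0), ("fda", 9.0)]
```

In this hypothetical case the sponsor accounts for 6 of 27 months, roughly the one-fifth share reported for approved NDAs.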
Aside from shedding light on the central issue of time, the data we assembled provide some interesting but rarely mentioned facts about FDA’s drug review and approval process. First, nearly half the NDAs submitted to FDA are not approved for marketing. The 44 percent of NDAs that were not approved in our sample either were not judged by FDA to be safe and effective or were not pursued by their sponsors. Second, the percentage of NDAs for drugs that are viewed by FDA as offering an important therapeutic advance is relatively small. As we pointed out in table 1, only 17 percent of all NDAs were given priority status. Third, our data on drug review and approval show that approximately one fifth of the time in that process comprises activities for which sponsors are responsible. With respect to time, NDAs are moving more quickly through the drug review and approval process. Whether this improvement is because of actions by FDA or the pharmaceutical industry or some other factors is an issue that is beyond the scope of this report. However, the consistency of all our results supports the conclusion that the reduction in time is real and not an artifact of how time is measured. Further, the magnitude of the reduction—more than 40 percent—should be considered in the ongoing discussions of the need to change the NDA review process or the agency in order to speed the availability of drugs to patients. FDA officials reviewed a draft of this report and discussed their comments with us. They generally agreed with our analytic methods and findings. However, they expressed concerns about some aspects of our analysis of FDA’s on-time performance. These comments, and our responses to them, appear in appendix II. FDA also provided a number of specific technical comments that have been incorporated into the report where appropriate. 
As we agreed with your offices, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents earlier. We will then send copies to the Secretary of Health and Human Services, the Commissioner of Food and Drugs, and to others who are interested. We will also make copies available to others upon request. If you have any questions regarding our report, please call me at (202) 512-2900 or George Silberman, Assistant Director, at (202) 512-5885. At our request, FDA provided detailed information about all new drug applications, totaling 905, initially submitted between January 1, 1987, and December 31, 1994. This included the contents and date of all FDA decisions and all major communications between FDA and the NDA sponsors through May 16, 1995. The variables we used in our analysis are described in the next section. Our choice of this time period has important implications for the analysis of drug review time. First, we started with 1987 because that was the first full year following a major change in FDA’s drug review procedures. We do not believe that examining new drug applications from before 1987 would shed any light on FDA’s current activities. Second, most reports of drug approval times, including those published by FDA, measure time for drugs approved during a particular period, regardless of when they were submitted. Some approved drugs may have been submitted much earlier. By limiting our analysis to new drug applications submitted (but not necessarily approved) in 1987 and later, we have limited the maximum value of review time. However, we do not believe that this has significantly biased our findings, since relatively few drugs win approval after exceptionally long review periods. (Appendix VI describes the outcomes of the review process as a function of year of approval in our sample.) 
While we were unable to independently verify the accuracy of all the data FDA provided, we did undertake a number of validation procedures to ensure the quality of the data. First, we performed extensive checks of the internal consistency of the databases FDA provided. In several cases, we uncovered discrepancies in the level of detail for different categories of drugs and between the information contained in one data file and that contained in another file. We resolved all these inconsistencies with FDA. Second, we compared the information in the data files with published sources where possible. For approved drugs, many reports (by FDA and by others) list the names, submission dates, and approval dates. We were able to resolve with FDA the few inconsistencies we discovered through this method. However, it is important to note that we were unable to do this for nonapproved drugs because there are no published reports on them. Third, for an earlier report, we had already obtained documentation for all NDAs for NMEs submitted in 1989. We compared those documents with the data FDA provided us for this report, and we were able to resolve all apparent inconsistencies.

This section describes the variables we used in our analyses. Our definitions of the variables do not necessarily agree with FDA’s practice. FDA provided some of the variables directly to us; we computed others from the data FDA provided and from other sources.

Priority drugs. Those that FDA determines to represent a significant therapeutic advance, either offering important therapeutic gains (such as the first treatment for a condition) or reducing adverse reactions. Nonpriority, or standard, drugs offer no therapeutic advantage over other drugs already on the market.

New molecular entities. Drugs with molecular structures that have not previously been approved for marketing in this country, either as a separate drug or as part of a combination product. 
Drugs that are not NMEs are from one of six categories defined by FDA: a new ester or salt, a new dosage form or formulation of a previously approved compound, a new combination of previously approved compounds, a new manufacturer of a previously approved drug, a new indication for an already approved drug, or drugs already marketed but without an approved NDA (that is, drugs first marketed before FDA began reviewing NDAs).

Initial submission. The first submission of the application to FDA.

Resubmission. After a sponsor has withdrawn an application or FDA has refused it for filing, sponsors can resubmit it.

Major amendments. Substantial submissions of new information by the sponsor to FDA, either of the sponsor’s own volition or in response to an FDA query.

Refusal to file. After FDA receives a new drug application, the agency first determines if the application is sufficiently complete to allow a substantive review. If not, FDA can refuse to file it. Since the implementation of user fees in 1993, applications must be rejected if the sponsor has failed to pay the appropriate fee to FDA. These applications are categorized as “unacceptable for filing,” not refusal to file.

Approval. If FDA is satisfied that a drug is safe and effective, it approves the drug for marketing for its intended use as described in the label.

Approvable. FDA determines that a drug is approvable if there is substantial evidence that it is safe and effective, but the sponsor must either supply additional information or agree to some limiting conditions before FDA grants final approval.

Not approvable. If FDA determines that the evidence submitted by the sponsor to show that the drug is safe and effective is insufficient, the agency notifies the sponsor that the drug is not approvable.

Withdrawal. The sponsor of an NDA may withdraw it at any time for any reason.

Final status. We examined the data file for each NDA to see if the drug had ever been approved. 
If not, we searched the file for the last event that was a withdrawal, not approvable, approvable, or a refusal to file, and we identified that event as the application’s final status. However, since FDA never definitively rejects applications, some whose final status is other than approval may ultimately be approved. (See appendix III.)

Year of submission. The calendar year in which an application is first submitted to FDA.

Review time. The period between the date of the initial submission of an NDA, even if FDA refuses to file it, and the date of the application’s final status in the data file. For approved drugs, review time is the period between the initial submission and the date of approval.

FDA time and sponsor time. For some of the analyses, we divided the total review time into time that is FDA’s responsibility and time that is the sponsor’s responsibility. FDA time consists of periods that begin when the agency has the information it has requested from the sponsor for that stage of the review and that end when FDA issues a judgment of refusal to file, approval, approvable, or not approvable or the application is withdrawn. Sponsor time consists of periods when FDA is waiting for the sponsor to provide additional information or to resubmit the application. FDA time and sponsor time are complementary and together sum to total review time.

Review cycles. Each period of FDA time is one review cycle.

FDA’s on-time performance. The Prescription Drug User Fee Act of 1992 established specific performance goals for each review cycle. The agency must issue refusals to file within 60 days of submission and must reach all other decisions for priority drugs within 6 months and for standard drugs within 12 months. We applied these guidelines retroactively to identify actions as either on time or not on time for each review cycle for NDAs submitted between 1987 and 1994.

Experience. 
We divided the sponsoring pharmaceutical companies into four groups, based on their activities between 1987 and 1994. We defined the most experienced companies as those that submitted 9 or more NDAs to FDA during this period (that is, at least one per year). Those that submitted between 5 and 8 NDAs in that period made up the middle-experience group. The two least experienced groups submitted 4 or fewer NDAs. We further divided the least experienced companies into one group with affiliations with other companies that sponsored NDAs during this period and another group without such affiliations. Affiliation meant that another sponsoring company had a significant ownership stake in the sponsor of the NDA. We identified affiliations by reviewing business and financial directories. Most of our statistical analyses consist simply of listing average review times, or the number of NDAs with a particular characteristic, separately by year of submission or by the outcome of review. However, we also conducted two regression analyses, one to identify variables related to the length of the review process and another to identify factors related to drug approval. (See appendixes IV and V.) This allowed us to isolate the effects of one variable (for example, drug priority) while statistically holding constant the other predictor variables (for example, year of submission and the experience of the sponsoring company). All our statements about statistical significance are based on the results of the regressions, which answer the question: If there were no differences among these NDAs except, for example, drug priority, does drug priority influence the chances of approval? We performed our work in accordance with generally accepted government auditing standards. The key statistics presented in this report are the average times to final decisions for NDAs submitted in consecutive calendar years from 1987 onward. 
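The grouping rule just described can be sketched directly; the function name and group labels are ours, and the affiliation flag stands in for the directory review described above.

```python
def experience_group(ndas_1987_94, affiliated=False):
    """Assign a sponsor to one of the four experience groups used in the
    analysis, based on its count of NDA submissions in 1987-94."""
    if ndas_1987_94 >= 9:           # at least one NDA per year on average
        return "most experienced"
    if ndas_1987_94 >= 5:           # 5 to 8 NDAs in the period
        return "middle experience"
    # 4 or fewer NDAs: split by affiliation with another sponsoring company
    return ("least experienced, affiliated" if affiliated
            else "least experienced, unaffiliated")
```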
Previous reports on time have presented other results, sometimes relying on slightly different measures of time, sometimes reporting other statistics (medians rather than averages), and usually constructing cohorts based on the years in which the NDAs were approved rather than the years in which they were submitted. In the sections that follow, we place our work in the context of other studies of drug review and approval time by examining the differences in approach. In our study, review time begins with the first submission of the NDA to FDA. FDA’s statistical reports, in contrast, start the clock with the submission of an “accepted” NDA. The two measures would provide similar results if the NDA were accepted on the first submission or if, after FDA refused to file it, the sponsor never resubmitted the application. However, in any situation in which FDA refused to file the NDA and the sponsor eventually resubmitted it, our measure of review time would be longer by the interval between the first submission and the date of an accepted submission. Approximately 1 in 10 NDAs (9.4 percent) fall into this category. The average time to resubmission for these applications was a little less than 2 months (1.7 months). Therefore, our review times are slightly longer on average than those reported by FDA. Another approach to time measurement is to be less concerned with how long the process took than with whether it was completed within a specified period. FDA takes this approach when it reports the extent to which the agency meets its user fee performance goals as referenced in the Prescription Drug User Fee Act. Data on our measure of on-time performance appear in the body of this report. Table II.1 shows an annual breakdown of the percentage of actions taken “on time,” as of May 16, 1995. As can be seen from table II.1, the percentages have changed little over the years. 
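The expected gap between the two measures follows directly from the figures just given:

```python
# About 1 in 10 NDAs (9.4 percent) were refused for filing and later
# resubmitted, adding on average 1.7 months before an accepted submission.
share_refused_then_resubmitted = 0.094
average_gap_months = 1.7

# Expected extra review time per NDA under the first-submission measure,
# compared with FDA's accepted-submission measure.
expected_extra = share_refused_then_resubmitted * average_gap_months
```

Spread across all NDAs, the first-submission measure thus runs only about a sixth of a month longer on average than FDA's.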
Interestingly, this is in contrast to the reduction in total review time (the entire interval between submission and approval) during this period. Seemingly, FDA has managed to reduce the overall time even though it has not increased the proportion of specific actions taken on time. In commenting on a draft of this report, FDA officials agreed with our general conclusions but made two points regarding our analysis of on-time performance. First, FDA emphasized that the 6- and 12-month guidelines used in our analysis were not in effect during the years we studied and that FDA is not required to meet them until 1997. Second, while FDA believes that its review cycle on-time performance may not have improved, the agency cautioned that the nature of its actions has changed with the initiation of the user fee program, particularly for not-approvable letters. Prior to the initiation of user fees, not-approvable letters were not necessarily a complete listing of all the deficiencies in the NDA. For example, FDA may have sent one not-approvable letter when the review of one section of the NDA was complete and additional not-approvable letters as other sections of the review were completed. After user fees, FDA is required to take complete actions, so a not-approvable letter must contain all the deficiencies FDA identifies. In other words, FDA must now complete more work to satisfy a post-user fee deadline than it had to before user fees were introduced. We agree with FDA’s first point. FDA’s second point argues for caution in making comparisons of on-time performance between different years. We agree that changes in procedure would invalidate such comparisons. For that reason, we did not use this measure as an indicator of whether the overall timeliness of the drug approval process had improved. Rather, we included the trends in on-time performance in the report in order to be comprehensive in presenting all measures of time that others had reported. 
Throughout this report, we have reported the average times for NDA review. An alternative is to report the median review time, the time for the 50th percentile application. Medians reduce the influence of drugs with unusually long review periods and are therefore usually somewhat lower than average review times. Table II.2 lists the average and median approval times for the drugs we examined by year of submission. While the median values are generally slightly lower, they show the same pattern of consistent decrease as the average values. FDA and others frequently report time statistics for NDAs that group the applications by the year in which they were approved rather than the year in which they were submitted. To some extent, this reflects FDA’s general orientation away from publishing data on submissions (given that much of that information is proprietary until the applications are approved). Table II.3 compares the average approval times we computed using year of submission with the average approval times FDA computed using year of decision. (The table omits values for 1993 and 1994 because they may be biased as a result of the censoring problem discussed in appendix VI.) The discussion that follows the table indicates why grouping NDAs by year of submission is preferable for our purpose. Table II.3 shows an obvious difference between the decrease in approval times when NDAs are grouped by year of submission and the stability when they are grouped by year of approval. This difference arises because grouping by year of approval incorporates into the calculation whatever backlog of NDAs existed at FDA. For example, several NDAs submitted in 1987 that had very lengthy 5-year reviews would increase the average review time in 1987 for year-of-submission statistics but would add to the average review time in 1992 for year-of-approval figures. 
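The sensitivity of averages to long-review outliers, which medians damp, can be illustrated with hypothetical times:

```python
from statistics import mean, median

# Hypothetical approval times (months) for one cohort: five typical
# reviews plus a single very long one.
times = [12, 14, 16, 18, 20, 96]
```

Here the single 96-month review pulls the mean above 29 months while the median stays at 17, mirroring the report's observation that median values run somewhat lower than averages.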
Thus, whenever the possibility of a backlog exists, basing time on year of approval is a less appropriate way to measure current practice because it incorporates the older applications. In contrast, time based on year of submission eliminates the confounding effects of the backlog and, therefore, is the preferable measure for assessing the current performance of the agency. In 1987, the first year in our study, FDA had a considerable backlog of NDAs submitted in 1986 and earlier and that backlog affected times throughout nearly the entire period of our study. This can be seen from table II.4. As the table shows, a considerable proportion of the approvals in every year except for 1994 were for older NDAs that had been under review for a long time. The first years in which FDA seemed to make progress in reducing the backlog were 1992 and 1993, when larger percentages of older applications were approved. This progress was reflected in the smaller percentage of older NDAs that were approved in 1994 and in the sharp drop in times measured by year of approval between 1993 and 1994 (see table II.3). The decrease from 33 to 26 months indicates that the backlog may have finally passed through the system. In this appendix, we present data on what happens to the NDAs as they move through the review process, focusing on three kinds of activities: first actions, review cycles, and major amendments. Table III.1 shows the first action taken on NDAs submitted in each successive year. It can be seen that approval is the initial decision for relatively few NDAs. Given that approximately 55 percent of all NDAs are ultimately approved, the data in table III.1 also show that such “negative” decisions as refusal to file, not approvable, and withdrawal are not necessarily fatal to an application. Of the 110 NDAs submitted from 1987 to 1992 that FDA initially refused to file, 35 (32 percent) were ultimately approved. 
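The effect of the two grouping schemes can be sketched with three hypothetical NDAs; the data and variable names are illustrative only.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical NDAs: (year submitted, year approved, review months).
# The first is a backlogged application with a five-year review.
ndas = [(1987, 1992, 60), (1991, 1992, 14), (1992, 1993, 15)]

by_submission = defaultdict(list)
by_approval = defaultdict(list)
for submitted, approved, months in ndas:
    by_submission[submitted].append(months)
    by_approval[approved].append(months)

# The 60-month review is charged to 1987 under year-of-submission grouping
# but inflates 1992's average under year-of-approval grouping.
```

In this sketch, 1992's year-of-approval average is 37 months even though the only NDA submitted in 1992 took just 15, which is the confounding effect of the backlog described above.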
Similarly, 43 percent of the NDAs that had a not-approvable first action were ultimately approved, and 27 percent of the withdrawals were resubmitted and approved. Overall, 43 percent of the 390 approved NDAs submitted from 1987 to 1992 had been refused, withdrawn, or found not approvable at some point on the way to approval. FDA reports the review cycles that an NDA goes through in its yearly Statistical Reports. A cycle starts with the submission or resubmission of an NDA and ends with the withdrawal of the NDA, a refusal to file decision, or an approval, approvable, or not-approvable letter. Each new cycle starts the review clock anew. Table III.2 shows the number of cycles for various types of NDAs. As can be seen from table III.2, some types of NDAs are more likely to go through multiple review cycles than others. Approved NDAs go through more cycles on average than applications that get dropped along the way; priority NDAs go through fewer cycles on average than standard NDAs; and, similarly, NMEs go through fewer cycles on average than non-NMEs. The number of cycles for both approved NDAs and all NDAs has decreased for submissions since 1987. This decrease is consistent with the decrease in time to final decisions. FDA has questions about almost all NDAs and requires sponsors to submit additional data in response to those questions. The sponsors submit these data in the form of amendments. Relatively small amounts of data (for example, clarification of a point or correction of a value) are classified as minor amendments, and relatively large amounts of data (for example, a reanalysis or results of an additional study) are classified as major amendments. Table III.3 shows the number of amendments for different types of NDAs. As expected, NDAs that are pursued through to approval have more major amendments on the average than NDAs that drop out of the process. 
NDAs for priority drugs and for NMEs required more amending on average than applications for standard drugs and non-NMEs. As with the data on cycles, table III.3 shows a decrease in the number of amendments for submissions since 1987. These data, along with those in table III.1 showing a steady decrease in the numbers of not approvables and in table III.2 showing fewer cycles, suggest that the drug review and approval process is getting “cleaner.” This change may result from different applications submitted by the sponsors of new drugs, different FDA review procedures, or both. Without additional study, it is not possible to identify the reasons for this. However, all three sets of data (on first action, cycles, and major amendments) are consistent with a quicker review process. We conducted two regression analyses predicting review time, one for approved new drug applications and the other for applications that were not approved. As table IV.1 shows, we found that the length of time until approval was significantly affected by three factors—year of submission, drug priority, and sponsor experience. Applications submitted in later years were approved much faster than earlier applications (for example, 11 months quicker in 1992 than in 1987). Drug applications given therapeutic priority by FDA were approved nearly 10 months faster than standard drugs. Applications from sponsors that submitted many NDAs were approved more quickly than applications from relatively inexperienced sponsors (for example, applications from the most experienced sponsors were approved 4 months faster than those from inexperienced sponsors that were not affiliated with other sponsoring companies). Table IV.1 reports coefficients for year of submission (vs. 1987), priority drugs (vs. standard), new molecular entity (vs. not), and sponsor experience (vs. inexperienced, unaffiliated); for applications first submitted from 1987 to 1992, N = 390 and R-squared = 0.24, and the mean review time is 26.36 months. 
In contrast, for drugs that were not approved, the only significant factor was year of submission. Applications submitted in later years were acted on more quickly than those submitted earlier (see table IV.2). Neither therapeutic priority nor the experience of the sponsor affected review time. It is important to reiterate that FDA does not definitively reject applications it does not approve. Therefore, FDA may take further action on some of the applications in this analysis. (Table IV.2 predictors: year of submission (vs. 1987), priority drugs (vs. standard), new molecular entity (vs. not), and sponsor experience (vs. inexperienced, unaffiliated). For applications first submitted from 1987 to 1992, N = 308 and R-squared = 0.16; mean review time is 24.93 months.) Table V.1 presents the results of a logistic regression analysis predicting NDA approval. The outcome variable is dichotomous: “1” indicates that the drug has been approved, “0” that it has not been approved. Fifty-six percent of the NDAs were approved. The data set for the regression consists of the 698 drugs first submitted between 1987 and 1992 that had final status values as of May 16, 1995 (two applications were pending). (Table V.1 predictors: year of submission (vs. 1987), priority drug (vs. standard), new molecular entity (vs. not), and sponsor experience (vs. inexperienced, unaffiliated).) The regression uncovered two statistically significant factors—drug priority and sponsor experience. Priority drugs were approved at nearly four times the rate of nonpriority drugs. Applications from sponsors that submitted many NDAs during this period were approved more often than applications from relatively inexperienced sponsors (applications from the most experienced sponsors were approved three times more often than applications from inexperienced sponsors that were not affiliated with other sponsoring companies; applications from companies with mid-levels of experience were approved nearly twice as often). 
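In a logistic regression, multipliers such as "nearly four times the rate" are odds ratios, i.e., exponentiated logit coefficients. A minimal sketch of that relationship, using hypothetical coefficients chosen only to reproduce the approximate ratios reported above:

```python
import math

# Hypothetical logit coefficients chosen so that exp(beta) matches the
# approximate odds ratios described in the text: ~4 for priority drugs
# (vs. standard) and ~3 for the most experienced sponsors
# (vs. inexperienced, unaffiliated ones).
COEFS = {
    "priority": math.log(4.0),
    "most_experienced_sponsor": math.log(3.0),
}

def odds_ratio(beta):
    """A logit coefficient beta corresponds to an odds ratio of exp(beta)."""
    return math.exp(beta)

def approval_probability(baseline_prob, *factors):
    """Scale the baseline odds by each factor's odds ratio, then convert back."""
    odds = baseline_prob / (1.0 - baseline_prob)
    for factor in factors:
        odds *= odds_ratio(COEFS[factor])
    return odds / (1.0 + odds)
```

For example, starting from even odds of approval (a 0.5 probability), a priority designation alone would raise the predicted probability to 0.8 (odds of 4 to 1); an odds ratio multiplies odds, not probabilities, which is why the effect on the probability scale shrinks as the baseline approaches 1.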
As mentioned in appendix II, basing our selection of NDAs for analysis on the year of submission has one significant advantage over the more traditional approach of examining NDAs by year of approval. That is, our approach avoids the contamination of the averages by whatever backlog exists. However, relying on year of submission can introduce another form of bias in that averages for approval time computed from the 1993 and 1994 cohorts incorporate only a highly selective group of NDAs from those 2 years. As table VI.1 shows, the final status distribution for NDAs submitted in 1993 and 1994 is radically different from that for NDAs submitted earlier. Clearly, this is because many of the applications had not had time to “mature” by the time we collected our data. While more than 50 percent of NDAs submitted in every year from 1987 to 1992 were approved by May 1995, comparatively few of the NDAs submitted in 1993 and 1994 had been approved. Most importantly, the only NDAs from 1993 and 1994 that were approved were those that had been approved relatively quickly. As a result, the average approval time for NDAs submitted in 1987-92 is 26.4 months, while the average time for approved NDAs submitted in 1993 and 1994 is 12.6 months. Because of this bias, we excluded NDAs submitted after 1992 whenever we examined final status. (Table VI.1 notes: final status as of May 16, 1995; percentages may not total 100 because of rounding; percentages for 1993 and 1994 also exclude NDAs found “unacceptable for filing” because user fees were not paid.) However, we included NDAs from 1991 and 1992 because we found no evidence that including these years risks exposure to the censoring bias found in 1993 and 1994. As table VI.1 shows, the approval rates for 1991 and 1992 are equivalent to those from earlier years. That is, almost all the NDAs from 1991 and 1992 for which approval ultimately would be expected have already been approved by FDA. 
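The censoring bias described above is easy to demonstrate: if a cohort is observed before it has matured, only the fastest approvals are visible, so an average computed from them understates the cohort's eventual average. A toy illustration with hypothetical review times:

```python
# Hypothetical review times (in months) for one submission cohort.
review_times = [6, 9, 12, 18, 24, 30, 36, 48]

# Suppose the data are collected only 15 months after submission: slower
# approvals have not yet occurred, so they are invisible (right-censored).
months_observed = 15
observed = [t for t in review_times if t <= months_observed]

true_mean = sum(review_times) / len(review_times)   # eventual cohort average
observed_mean = sum(observed) / len(observed)       # average of early approvals only
```

This is the same pattern as in table VI.1: the 12.6-month average for 1993-94 approvals reflects only the quickly approved subset of those cohorts, not the average the cohorts will show once mature.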
Approval times for those years are not likely to increase much. The question that remains is whether the trend in decreasing time that we observed for submissions between 1987 and 1992 continued for 1993 and 1994 submissions. That question cannot be answered definitively until the 1993 and 1994 cohorts have had time to mature. However, preliminary evidence suggests that the trend continues. Table VI.2 compares the percentage of all applications submitted before 1993 that were approved quickly to the same percentage for NDAs submitted in 1993 and 1994. As table VI.2 shows, approximately the same percentages of NDAs were approved quickly both before and after 1992. From this evidence, we have no reason to suspect that the trend of speedier drug approval for 1987-92 submissions was reversed for 1993-94 submissions. The United Kingdom’s equivalent of FDA is the Medicines Control Agency (MCA). MCA publishes information similar to that contained in FDA’s statistical reports, including data on workload (number and type of submissions) and time (how long it takes to review applications). MCA’s 1994-95 annual report indicates that the assessment of an application for a new active substance (the apparent equivalent of what FDA terms a new molecular entity) took an average of 56 working days. This figure stands in sharp contrast to FDA’s reports that show an average approval time of 20 months for applications for NMEs approved in 1994. No doubt, the sharp contrast in these two averages is one factor creating the impression that approval times are much shorter in the United Kingdom than they are in this country. However, closer examination of the data in MCA’s annual report shows that they should be compared to our data on FDA with caution. Most importantly, the drug review process in the United Kingdom is very different from that in the United States. In the United Kingdom, MCA’s assessment is only the first step in a multistage process of drug review and approval. 
All applications for new active substances are also automatically referred to a government body called the Committee on the Safety of Medicines (CSM). CSM’s expert subcommittees also assess the application, and these assessments, along with those from MCA, are provided to CSM. CSM then provides advice to the Licensing Authority, which actually grants or denies the product license. However, the rate at which applications are rejected, or modifications or additional information are requested, is very high (99 percent for applications submitted in 1987-89), although many of these issues are minor and quickly resolved. Applications with remaining unresolved issues then go through a formal appeals process that may involve additional work on the part of the applicant, reassessment by MCA or CSM, and, in rare cases, the involvement of another body called the Medicines Commission. Thus, the total time until the license is actually granted is considerably longer than the period of initial assessment by MCA. In contrast, the time FDA reports includes all the steps between an accepted NDA and the final decision on it. When one examines total time for both processes, the United Kingdom does not appear to be dramatically faster than the United States. One recent study compared approval times for 11 drugs that were approved in both countries during the period 1986-92. The median time in the United States (about 23 months) was 15 percent longer than the median time in the United Kingdom (20 months). The most recent data from MCA show that overall approval times are actually somewhat longer than that. These data indicate that MCA granted licenses for applications representing 32 new active substances during the 12-month period ending September 30, 1994. The median time for granting a license was 30 months and the average was 24 months. The fastest license was granted in about 4 months, the slowest in 62 months. 
FDA’s data for the calendar year ending December 31, 1994, indicate that the agency approved a total of 22 new molecular entities. The median approval time was 18 months, and the average approval time was about 20 months. The fastest approval reported by FDA took about 6 months and the slowest about 40 months. Thus, the most recent data show that approval times for NMEs are actually shorter in the United States. In addition, a broader perspective shows that approval processes in many industrialized nations may be converging. Approval times over the past 10 years for France, Germany, Japan, the United Kingdom, and the United States all seem to be moving toward the 2-year point. The trend in the United States (which had lengthy times throughout the mid-1980s) has been toward more rapid times, whereas the process has been getting slower in some of the other (originally faster) countries. This report was prepared by Martin T. Gahart, Michele Orza, George Silberman, and Richard Weston of the Program Evaluation and Methodology Division. FDA User Fees: Current Measures Not Sufficient for Evaluating Effect on Public Health (GAO/PEMD-94-26, July 22, 1994). FDA Premarket Approval: Process of Approving Lodine as a Drug (GAO/HRD-93-81, April 12, 1993). FDA Regulations: Sustained Management Attention Needed to Improve Timely Issuance (GAO/HRD-92-35, February 21, 1992). FDA Drug Review: Postapproval Risks 1976-1985 (GAO/PEMD-90-15, April 26, 1990). FDA Resources: Comprehensive Assessment of Staffing, Facilities and Equipment Needed (GAO/HRD-89-142, September 15, 1989). 
Pursuant to a congressional request, GAO provided data on the Food and Drug Administration's (FDA) new drug application (NDA) process, focusing on: (1) whether the timeliness of the review and approval process for new drugs has changed in recent years; (2) the factors that distinguish NDAs that are approved quickly from those that take longer to approve; (3) what distinguishes NDAs that are approved from those that are not; and (4) how FDA's drug approval process compares with the approval process in the United Kingdom. GAO found that: (1) the average time for an NDA to be approved by FDA decreased from 33 months for 1987 submissions to 19 months for 1992 submissions; (2) this overall decrease was achieved through gradual reductions in approval times for NDAs submitted from 1987 to 1992; (3) the priority FDA assigns to an NDA and the experience of its sponsor affect both the timeliness and the likelihood of approval; and (4) although comparable data are limited, review times for FDA and its counterpart agency in the United Kingdom are similar.
The DOD Joint Ethics Regulation defines ethics as standards that guide someone’s behavior based on their values—which the regulation defines as core beliefs that motivate someone’s attitudes and actions. The Joint Ethics Regulation identifies 10 primary ethical values that DOD personnel should consider when making decisions as part of their official duties. These values are: honesty, integrity, loyalty, accountability, fairness, caring, respect, promise-keeping, responsible citizenship, and pursuit of excellence. In addition to DOD’s ethical values, each of the military services has established its own core values. For example, the core values of the Navy and the Marine Corps are honor, courage, and commitment. The Air Force’s core values include integrity and service before self, and the Army’s include loyalty, honor, duty, integrity, respect, and selfless service. For the purposes of this report, we distinguish between compliance-based ethics programs and values-based ethics programs. We refer to compliance-based ethics programs as those that focus primarily on ensuring adherence to rules and regulations related to financial disclosure, gift receipt, outside employment activities, and conflicts of interest, among other things. In contrast, we use values-based ethics programs to refer to ethics programs that focus on upholding a set of ethical principles in order to achieve high standards of conduct. Values-based ethics programs can build on compliance to incorporate guiding principles such as values to help foster an ethical culture and inform decision-making where rules are not clear. Professionalism relates to the military profession, which DOD defines as the values, ethics, standards, code of conduct, skills, and attributes of its workforce. One of the military profession’s distinguishing characteristics is its expertise in the ethical application of lethal military force and the willingness of those who serve to die for our nation. 
While DOD’s leaders serve as the foundation and driving force for the military profession, DOD considers it the duty of each military professional to set the example of virtuous character and exceptional competence at every unit, base, and agency. There are numerous laws and regulations governing the conduct of federal personnel. The Compilation of Federal Ethics Laws prepared by the United States Office of Government Ethics includes nearly 100 pages of ethics-related statutes to assist ethics officials in advising agency employees. For the purposes of this report, we note some key laws and regulations relevant to military ethics and professionalism. The laws and regulations are complex and the brief summaries here are intended only to provide context for the issues discussed in this report. The Ethics in Government Act of 1978 as amended established the Office of Government Ethics, an executive agency responsible for providing overall leadership and oversight of executive branch agencies’ ethics programs to prevent and resolve conflicts of interest. To carry out these responsibilities, the Office of Government Ethics ensures that executive branch ethics programs are in compliance with applicable ethics laws and regulations through inspection and reporting requirements; disseminates and maintains enforceable standards of ethical conduct; oversees a financial disclosure system for public and confidential financial disclosure report filers; and provides education and training to ethics officials. The Ethics in Government Act of 1978 also requires certain senior officials in the executive, legislative, and judicial branches to file public reports of their finances and interests outside the government, and places certain limitations on outside employment. The main criminal conflict of interest statute, Section 208 of Title 18 of the U.S. 
Code, prohibits certain federal employees from personally and substantially participating in a particular government matter that will affect their financial interests or the financial interests of their spouse, minor child, or general partner, among others. The Office of Government Ethics implemented this statute in Title 5 of the Code of Federal Regulations (C.F.R.) Part 2640, which further defines financial interests and contains provisions for granting exemptions and individual waivers, among other things. The Uniform Code of Military Justice establishes the military justice system and provides court-martial jurisdiction over servicemembers and other categories of personnel. Among other things, it defines criminal offenses under military law; and it authorizes commanding officers to enforce good order and discipline through the exercise of non-judicial punishment. The Office of Government Ethics issued 5 C.F.R. Part 2635, which contains standards that govern the conduct of all executive branch employees. To supplement Title 5, some agencies have issued additional employee conduct regulations, as authorized by 5 C.F.R. § 2635.105. The Office of Government Ethics also issued Part 2638, which contains the Office of Government Ethics and executive branch agency ethics program responsibilities. For example, 5 C.F.R. § 2638.602 requires an agency to file a report annually with the Office of Government Ethics covering information on each official who performs the duties of a designated agency ethics official; statistics on financial disclosure report filings; and an evaluation of its ethics education, training and counseling programs. Additionally, 5 C.F.R. § 2638.701 requires that an agency establish an ethics training program that includes an initial orientation for all employees, and annual ethics training for employees who are required to file public financial disclosure reports and other covered employees. 
The Joint Ethics Regulation is DOD’s comprehensive ethics policy and guidance related to the standards of ethical conduct. The regulation incorporates standards and restrictions from federal statutes, Office of Government Ethics regulations, DOD’s supplemental regulation in 5 C.F.R. Part 3601, and Executive Order 12674 to provide a single source of guidance for the department’s employees on a wide range of rules and restrictions, including issues such as post-government employment, gifts, financial disclosure, and political activities. The Joint Ethics Regulation establishes DOD’s ethics program and defines the general roles and responsibilities of the officials who manage the ethics program at the departmental and subordinate organizational levels. For example, the Joint Ethics Regulation requires that the head of each DOD agency assign a designated agency ethics official to implement and administer all aspects of the agency’s ethics program. This regulation also defines the roles and responsibilities of ethics counselors related to ethics program implementation and administration. The Panel on Contracting Integrity was established by DOD in 2007 pursuant to Section 813 of the John Warner National Defense Authorization Act for Fiscal Year 2007. Chaired by the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Panel consists of a cross-section of senior-level DOD officials who review the department’s progress in eliminating areas of vulnerability in the defense contracting system that allow fraud, waste, and abuse to occur, and it recommends changes in law, regulations, and policy. The Panel was due to terminate on December 31, 2009, but Congress extended the Panel’s existence until otherwise directed by the Secretary of Defense, and at a minimum through December 31, 2011. As directed, in 2007, the Panel began submitting annual reports to Congress containing a summary of the Panel’s findings and recommendations. 
Several of the Panel’s findings and recommendations pertain to DOD ethics. DOD has a management framework to help oversee its required ethics program, and it has initiated steps to establish a management framework to oversee its professionalism-related programs and initiatives. However, DOD has not fully addressed an internal recommendation to develop a department-wide values-based ethics program, and it does not have performance information to assess the Senior Advisor for Military Professionalism’s (SAMP) progress and to inform its decision on whether the office should be retained beyond March 2016. DOD has a decentralized structure to administer and oversee its required ethics program and to ensure compliance with departmental standards of conduct. This structure consists of 17 Designated Agency Ethics Officials positioned across the department. Each Designated Agency Ethics Official, typically the General Counsel, is appointed by the head of his or her organization, and is responsible for administering all aspects of the ethics program within his or her defense organization. This includes managing the financial disclosure reporting process, conducting annual ethics training, and providing ethics advice to employees. To assist in implementing and administering the organization’s ethics program, each Designated Agency Ethics Official appoints ethics counselors. Attorneys designated as ethics counselors support ethics programs by providing ethics advice to the organization’s employees, among other things. Within the military departments, the Judge Advocate Generals provide ethics counselors under their supervision with legal guidance and assistance and support all aspects of the departments’ ethics programs. The DOD Standards of Conduct Office (SOCO), on behalf of the DOD General Counsel, administers the ethics program for the Office of the Secretary of Defense and coordinates component organization ethics programs. 
SOCO is responsible for developing and establishing DOD-wide ethics rules and procedures and for promoting consistency among the component organizations’ ethics programs by providing information, uniform guidance, ethics counselor training, and sample employee training materials. According to the Joint Ethics Regulation, the DOD General Counsel is responsible for providing SOCO with sufficient resources to oversee and coordinate DOD component organization ethics programs. The DOD General Counsel also represents DOD on matters relating to ethics policy. DOD has taken steps toward developing a values-based ethics program but has not fully addressed the recommendation of the Panel on Contracting Integrity to develop a department-wide values-based ethics program. For instance, DOD has taken steps such as conducting a department-wide survey of its ethical culture and a study of the design and implementation of such a program. DOD also began delivering values-based ethics training annually in 2013 to select personnel. In 2008, the Panel on Contracting Integrity recommended in its report to Congress that DOD develop a department-wide values-based ethics program to complement its existing rules-based compliance program managed by SOCO. The report noted that while SOCO had been effective in demanding compliance with set rules, the ethics program may have provided the false impression that promoting an ethical culture was principally the concern of the Office of General Counsel, when integrity is a leadership issue, and therefore everyone’s concern. In 2010, the Panel also noted that an effective values-based ethics program, as evidenced by the many robust programs employed by DOD contractors, cannot be limited to educating DOD leadership; rather, it must be aimed at promoting an ethical culture among all DOD employees. 
The Panel’s recommendation was based in part on the Defense Science Board’s 2005 finding that while DOD had in place a number of pieces for an ethically grounded organization, it lagged behind best-in-class programs in creating a systematic, integrated approach and in demonstrating the leadership necessary to drive ethics to the forefront of organizational behavior. The Panel reiterated its recommendation for a department-wide values-based program in its 2009 and 2010 reports to Congress. In response to the Panel’s recommendation, DOD contracted for a 2010 survey and a 2012 study to assess DOD’s ethical culture and to design and implement a values-based ethics program, respectively. The 2010 survey assessed various dimensions of ethical behavior, including the level of leadership involvement in the ethics program and the extent to which employees perceive a culture of values-based ethics and are recognized and rewarded for ethics excellence. The survey report findings showed that DOD’s overall ethics score was comparable to that of other large federal government organizations, but advocated for a values-based approach to address ethical culture weaknesses. For example, the survey report stated that: employees believe that DOD rewards unethical behavior to an extent that is well above average; employees fear retribution for reporting managerial/commander misconduct to an extent that is well above average; and the number of employees who acknowledge regularly receiving ethics information and training is comparatively low. The 2012 study reinforced the need for a department-wide values-based ethics program—noting that DOD lagged behind common practices, among other things—and made 14 recommendations related to establishing such a program. 
Notably, these recommendations included developing an independent Office of Integrity and Standards of Conduct; adopting a set of core values representing all of DOD; conducting annual core values training for all DOD employees; and periodically measuring program effectiveness. In 2013, the Panel on Contracting Integrity issued a memorandum to SOCO stating that, after reviewing the 2012 study’s recommendations, SOCO was better positioned than the Panel to implement the study’s recommendations. In 2013, SOCO partially implemented 1 of the study’s 14 recommendations by annually delivering values-based ethics training to DOD financial disclosure filers—who are required to receive annual ethics training—as well as other select military and civilian personnel. This training emphasizes DOD and military service core values such as honor, courage, and integrity; highlights cases of misconduct; discusses ethical decision-making; and features senior-leader involvement in presentations to emphasize its importance. In 2014, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed that all acquisition workforce personnel also complete this training annually to reinforce the importance of ethical decision-making. SOCO officials stated that they encourage all DOD organizations to administer this values-based annual ethics training and to extend this training to other personnel not required to receive mandatory annual ethics training. In 2014, DOD reported that about 146,000 department personnel received annual ethics training. We estimate that this represents about 5 percent of DOD’s total workforce. The Federal Sentencing Guidelines, a key source of guidance often used in developing effective ethics programs, encourage organizations to train all employees periodically on ethics. 
Similarly, DOD’s 2012 study recommended mandatory annual training on integrity and ethics for all DOD employees, and the 2008 Panel report stated that an effective values-based ethics program must be aimed at promoting an ethical culture among all DOD employees. Several of the DOD, foreign military, and industry organizations we spoke with cited the importance of training to convey information about ethics. For example, SOCO officials stated that positive feedback from the initial values-based training rollout in 2013 influenced their decision to continue with this format in 2014, while officials from the SAMP office stated that employees need to be reminded of ethics periodically, and that senior leadership should be retrained continuously on ethics rules. Additionally, officials from each of the four industry and foreign military organizations we contacted stated that ethics training within their organizations was either mandatory for all employees on a periodic basis or available to all employees in one or more formats. As noted above, SOCO encourages DOD organizations to administer annual values-based training to personnel for whom such training is not mandatory, but neither SOCO nor the military departments have assessed the feasibility of expanding this training to additional personnel. A SOCO official stated that annual training could be expanded to a larger group of employees, potentially on a periodic instead of an annual basis, but that any decision to appreciably expand ethics training would have to consider factors such as associated costs related to the time and effort for leaders and ethics counselors to conduct training, employee hours to take training, and administrative support time to track compliance with the training requirement. 
This SOCO official also noted that the Army required face-to-face annual ethics training for all employees from approximately 2002 through 2006 but subsequently eliminated the requirement because of the resource burden and the concern that training was not needed for most enlisted personnel and junior officers. Our work on human capital states that agencies should strategically target training to optimize employee and organizational performance by considering whether expected costs associated with proposed training are worth the anticipated benefits over the short and long terms. Without considering such factors in an assessment of the feasibility of expanding mandatory annual values-based ethics training to a greater number of DOD employees, the department may be limited in its ability to properly target this training, and therefore may be missing opportunities to promote and enhance DOD employees’ familiarity with values-based ethical decision-making. With respect to the other 13 recommendations from the 2012 study, SOCO officials stated that they do not plan to take further action. These officials also stated that they have not formally responded to the Panel’s original recommendation to develop a values-based ethics program or its subsequent memorandum. SOCO officials expressed support for developing a values-based ethics program provided that such a program were properly resourced and focused on substantive issues instead of process. Similarly, officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics stated that the department would benefit from the creation of such a program, and stressed the need for senior leaders to be involved in promoting awareness of ethical issues. However, SOCO officials stated that the Panel and 2012 study recommendations were not binding, and that SOCO—which is staffed by five attorneys—would not be optimally positioned to develop a department-wide program. 
These officials also stated that implementing all of the study’s other 13 recommendations was neither feasible nor advisable, and they cited existing practices as being consistent with some of the study’s recommendations. For example: The study’s recommendation to move SOCO from under the Office of General Counsel and rebrand it as an independent Office of Integrity and Standards of Conduct was not possible because ethics counselors are required to be attorneys, according to the Joint Ethics Regulation, and must therefore remain under the supervision of the DOD General Counsel in order to provide the legal advice that the department and its personnel require. The study’s recommendation to create a direct link between senior leadership and the Secretary of Defense on ethics and professionalism matters is addressed, in part, by the SAMP position that was created in March 2014. However, as discussed later in this report, if DOD decides not to renew this position or retain its functions beyond March 2016, DOD will lose its direct link between senior leadership and the Secretary of Defense on ethics and professionalism matters. Both SAMP and SOCO officials stated that there is an enduring need for such a link or the functions performed by the SAMP office, and officials from three of the four industry and foreign military organizations we contacted stated that their organization had in place a direct link to senior leadership on ethics-related matters. The study’s recommendation to assess and mitigate ethical culture and compliance risk is consistent with SOCO’s current practice of informally reviewing misconduct reports and survey results, conducting ethics program reviews, consulting ethics officials, and factoring perceived trends into training plans and appropriate ethics guidance and policy. 
Federal internal control standards emphasize the need for managers to respond to findings and recommendations from audits and reviews and to complete all actions that correct or otherwise resolve the matters brought to management’s attention within established timeframes, or alternatively to demonstrate that actions are unwarranted. However, DOD has not identified actions or established timeframes for fully responding to the Panel’s recommendation or the 2012 study’s other 13 recommendations; nor has it informed the Panel that it plans to take no further action. While not binding, the Panel’s recommendation to establish a department-wide values-based ethics program represents a need identified by senior leaders from across the department. Without identifying actions DOD intends to take, with timeframes, to address the Panel’s recommendation, including the study’s other 13 recommendations, or demonstrating that further action is unwarranted, the department does not have assurance that the identified need for a values-based ethics program has been addressed. In March 2014, the Secretary of Defense reaffirmed the previous Secretary’s prioritization of professionalism as a top concern for DOD’s senior leadership by establishing the office of the SAMP, headed by a Navy Rear Admiral (Upper Half), which reports directly to the Secretary of Defense. The SAMP position was established for a 2-year term, with an option to renew, and it is supported by an independent office consisting of six permanent staff members comprised variously of Air Force, Army, Navy, Marine Corps, and Army National Guard Lieutenant Colonels, Colonels, Commanders and Captains, and one contract employee who provides administrative support. SAMP officials stated that they were unclear about the rationale behind the initial 2-year term. The office is embedded in the Office of the Under Secretary of Defense for Personnel and Readiness, and it has been fully staffed since July 2014. 
The purpose of the SAMP office is to coordinate and ensure the integration of the department’s ongoing efforts to improve professionalism, and to make recommendations to senior DOD leadership that complement and enhance such efforts. The office primarily interacts with senior DOD leadership through the Senior Leadership Forum on Military Professionalism, which meets every 5 weeks, and is comprised of the Secretary of Defense, military service secretaries and chiefs, and the DOD General Counsel, among others. The office supports this forum by promulgating an agenda, raising issues for discussion and decision, and briefing leadership on relevant department-wide activities. Recent department-wide activities have been wide-ranging, and include (1) 13 character development initiatives for general and flag officers; (2) a review of ethics content in professional military education; and (3) the development of tools, such as command climate and 360-degree assessments, that can be used to identify and assess ethics-related issues. These and various other initiatives and senior-level communications directed by the President, the Secretary of Defense, and Congress are intended to enhance DOD’s ethical culture and to emphasize the importance of ethics and professionalism to departmental personnel. A timeline of key ethics and professionalism events and communications since 2007 is shown in appendix II. In September 2014, the SAMP office developed a plan outlining its major tasks across three phases: (1) assess the state of the profession, (2) strengthen and sustain professional development, and (3) foster trust through transparent accounting of efforts. Tasks across each respective phase include conducting a survey to assess DOD’s ethical culture; identifying tools for individual professional development and evaluation; and developing an annual report card that highlights trends, best practices, and underperforming professionalism-related programs. 
DOD does not have timelines or performance measures to assess SAMP’s progress and to inform its decision on whether the SAMP position should be retained. Our work on strategic planning has found that leading practices, such as developing detailed plans outlining major implementation tasks and defining measures and timelines to assess progress, contribute to effective and efficient operations. Additionally, leading organizations that have progressed toward results-oriented management use performance information as a basis for making resource allocation decisions, planning, budgeting, and identifying priorities. The SAMP office has taken steps toward implementing its major tasks, but DOD does not have key performance information to help inform the decision as to whether the SAMP position should be retained beyond its initial 2-year term—which is set to expire in March 2016. The SAMP office has drafted a white paper exploring the relationship between the military profession and the military professional, developed a catalogue documenting tools that can be used to assess ethics-related issues, and initiated steps to update the 2010 department-wide survey of DOD’s ethical culture. In addition, the SAMP office has canvassed the military services to identify service-level initiatives for civilian personnel that are similar to the 13 general and flag officer initiatives, conducted sessions with senior officers to identify areas of interest to senior leadership, and begun to partner with academic institutions to pursue research related to utilizing behavioral science and neuroscience to address issues of ethics, character, and competence in the military. While the SAMP office has taken steps toward completing its major tasks, it has not defined timelines or measures to (1) assess its progress or impact; (2) determine whether it has completed its major tasks; or (3) help inform the decision on whether its initial 2-year term should be renewed. 
SAMP officials stated that while the office has not defined timelines or measures, they believe that the office’s activities should help to establish self-perpetuating professionalism efforts within the military services. SAMP officials stated that such efforts within the services may somewhat diminish the need for SAMP, but these same officials also noted that the work of the office will remain necessary and that its function should exist beyond the initial 2-year term because building and sustaining an ethical culture and professionalism capacity constitute a continuous effort at every grade level. They added that the Secretary of Defense will also continue to need a mechanism for looking across the services, working with other countries, and influencing departmental policies. The need for senior-level oversight of professionalism or ethics issues also was cited by other DOD, industry, and foreign military organizations we contacted. For example, SOCO officials expressed support for maintaining the SAMP position or function beyond the initial 2-year period, stating that there is enduring value in having an office like SAMP because it provides a sense of permanence to ethics and professionalism and will help institutionalize related improvement efforts. Similarly, as previously stated, officials from three of the four industry and foreign military organizations we contacted stated that their organization had in place a direct link to senior leadership on ethics-related matters. Without timelines or measures to assess the office’s progress, DOD does not have performance information for determining whether SAMP’s efforts are on track to achieve desired outcomes, and the department may find it difficult to determine the future of the office and its function. Further, DOD will not be positioned to assess whether SAMP is the appropriate vehicle to achieve these outcomes or how best to allocate resources within the department to achieve them. 
DOD has identified a number of mandatory and optional tools that defense organizations can use to identify and assess individual and organizational ethics and professionalism issues. However, two key tools—command climate and 360-degree assessments—have not been fully implemented in accordance with statutory requirements and departmental guidance, and DOD has not yet developed performance metrics to measure its progress in addressing ethics-related issues. DOD has identified several climate, professional development, and psychometric tools that can be used to identify and assess individual and organizational ethics-related issues. Climate tools are designed to assess opinions and perceptions of individuals within an organization, and they include instruments such as surveys. Professional development tools include a range of self- and peer-assessment instruments that are designed to provide individuals with feedback on their development. Psychometric tools include instruments such as the Navy’s Hogan Insights, which are designed to provide a holistic behavioral review of an individual, and are generally used to assess and identify individual behavior and personality traits. The SAMP office is completing an inventory of climate, professional development, and psychometric tools that are used across the department to enhance interdepartmental visibility of these tools and to promote best practices. SAMP officials stated that while these tools could be used to assess ethics-related issues, none of the tools were designed exclusively for that purpose. Figure 1 shows some of the tools identified by the SAMP office that could be used to identify and assess individual and organizational ethics-related issues. Officials from the SAMP office and from each of the military services have cited command climate assessments and 360-degree assessments as the department’s primary tools that could be used for identifying ethics-related issues.
Command climate assessments are designed to assess elements that can impact an organization’s effectiveness such as trust in leadership, equal opportunity, and organizational commitment. These assessments can include surveys, focus groups, interviews, records of analyses, and physical observations. The command climate assessment’s main component is a survey administered online by the Defense Equal Opportunity Management Institute. Survey results, which are provided to the unit commander, include a detailed analysis of unit results in comparison to other units within the organization. In addition, 360-degree assessments are a professional developmental tool that allows individuals to gain insights on their character traits by soliciting feedback about their work performance from superiors, subordinates, and peers. A variety of 360-degree assessments are used across the department to enable different levels of personnel to obtain such feedback. For example, the Army conducts three different 360-degree assessments under the Multi-Source Assessment Feedback Program, which are targeted toward officers (Brigadier General and below), non-commissioned officers, and civilian leaders. SAMP officials stated that while none of these tools is specifically designed to assess ethics issues, the office is investigating whether a combination of them can be used to provide a more holistic picture of ethical behavior, and exploring what might be gained by sharing data captured by these tools across the department. The military services have issued guidance to implement command climate assessments, but the Army, the Air Force, and the Marine Corps do not have assurance that they are in compliance with all statutory requirements because their guidance does not fully address implementing and tracking requirements. In addition, the Army’s and the Navy’s guidance do not fully address DOD guidance related to the size of the units required to complete command climate assessments.
The National Defense Authorization Act for Fiscal Year 2014 contains requirements related to (1) tracking and verifying that commanders are conducting command climate assessments, (2) disseminating results to the next higher level command, and (3) recording the completion of command climate assessments in commanders’ performance evaluations. As shown in table 1, the Navy has developed guidance that addresses all four of the Fiscal Year 2014 National Defense Authorization Act’s requirements, but the Army’s, the Air Force’s, and the Marine Corps’ guidance do not fully address two of the four requirements, which relate to recording in a commander’s performance evaluation whether the commander has conducted a command climate assessment. As table 1 shows, all of the military services’ guidance addresses section 587(a) of the authorization act, which requires that the results of command climate assessments be provided to the commander and to the next higher level command, as well as section 1721(d), which requires that the military departments track and verify whether commanding officers have conducted a command climate assessment. In addition to complying with these requirements, the Army, the Air Force, and the Navy also have command climate assessments reviewed above the next highest level. For example, Navy officials stated that their command climate assessment results are aggregated, analyzed, and reported to Navy leadership annually to inform service policy and training. With respect to sections 587(b) and 587(c) of the authorization act, the Navy’s guidance addresses these sections, but the Army’s, the Air Force’s, and the Marine Corps’ respective guidance do not.
For example, the Army’s performance evaluation process requires that raters assess a commander’s performance in fostering a climate of dignity and respect, and in adhering to the requirements of the Army’s Sexual Harassment/Assault Response and Prevention Program, which requires that command climate assessments be conducted. However, this program does not specifically require that commanders include a statement in their performance evaluations as to whether they conducted an assessment, or that failure to do so be recorded in their performance evaluation. In addition, not all of the military services’ guidance fully meets DOD guidance. Specifically, in July 2013, the Acting Under Secretary of Defense for Personnel and Readiness issued a memorandum requiring the secretaries of the military departments to establish procedures in their respective operating instruction and regulations related to the implementation of command climate assessments. Among other things, the guidance addresses the size of units for conducting command climate assessments and the dissemination of assessment results. In response to this guidance, each of the military services has developed written guidance. As shown in table 2, the Air Force’s and the Marine Corps’ guidance address all command climate guidance in the Under Secretary’s memorandum, while the Army’s and the Navy’s guidance do not require that units of fewer than 50 servicemembers shall be surveyed with a larger unit in the command to ensure anonymity and to provide the opportunity for all military personnel to participate in the process, as laid out in the memorandum. 
Without requiring that commanders include a statement in their performance evaluations about whether they have conducted a command climate assessment, and requiring that the failure of a commander to conduct a command climate assessment be noted in the commander’s performance evaluation, the Army, the Air Force, and the Marine Corps will not be complying with the mandated level of accountability Congress intended during the performance evaluation process. Additionally, without requiring organizations of fewer than 50 servicemembers to be surveyed with a larger unit, the Army and the Navy may be unable to ensure that all unit members are able to participate anonymously in command climate surveys as intended by DOD guidance. The development and use of 360-degree assessments for general and flag officers vary across the military services and the Joint Staff, and they do not cover all intended military personnel. Specifically, the 2013 General and Flag Officer Character Implementation Plan memorandum states that 360-degree assessments would be developed and used for all military service and Joint Staff general and flag officers, and a November 2013 memorandum issued by the Chairman of the Joint Chiefs of Staff to the President reiterates the department’s commitment to developing and implementing 360-degree assessments for all general and flag officers. The Air Force and the Army have developed and implemented 360-degree assessments for all of their general officers, but the Navy, the Marine Corps, and the Joint Staff have developed and implemented 360-degree assessments only for certain general and flag officers. Table 3 shows the extent to which the military services and the Joint Staff have developed and implemented 360-degree assessments for their general and flag officers. The Navy, the Marine Corps, and the Joint Staff cited different reasons for developing and implementing 360-degree assessments only for certain general and flag officers.
For example, in 2013, the Navy required new flag officers promoted to the Rear Admiral (Lower Half) rank, as well as Rear Admiral (Lower Half) selects, to complete 360-degree assessments. A Navy official stated that expanding 360-degree assessments to include all Navy flag officers would incur significant costs, particularly with regard to the cost of specially trained personnel to coach individuals on how to respond to the results of their 360-degree assessments. Similarly, officials from the SAMP office and Joint Staff cited coaching as a driver of costs for 360-degree assessments. A RAND study released on behalf of DOD in April 2015 also noted that 360-degree assessments are resource-intensive to design, implement, and maintain. Due to the costs associated with expanding 360-degree assessments and other concerns, such as the value of the feedback elicited by the tool, the Navy is investigating other tools and techniques that can provide critical self-assessment for its personnel. For example, Navy officials stated they are using a similar tool—the Hogan Assessment—as part of a Command Leadership Course for some prospective commanding officers. According to Marine Corps officials, in 2014, two general officers from the Marine Corps participated in a Joint Staff 360-degree assessment pilot program. These officials stated that there are no plans to expand the program to include Marine Corps general officers not assigned to the Joint Staff because Marine Corps senior officials are satisfied with the flexibility and feedback that the Joint Staff pilot provides, and because the Marine Corps also uses the Commandant’s Command Survey, which similarly focuses on the climate and conduct of leaders and commanders. In October 2014, following its pilot, the Joint Staff initiated 360-degree assessments for one and two star general and flag officers to occur at 6 months and 2 years after assignment to the Joint Staff.
In July 2015, the Joint Staff issued guidance requiring that Joint Staff three star general and flag officers, civilian senior executives, and one, two, and three star general and flag officers at the combatant commands complete 360-degree assessments. Joint Staff officials stated that 360-degree assessments are not used at the four star rank because at that level the peer and superior populations are significantly smaller, creating a greater possibility of assessor survey fatigue and concerns about anonymity. Further, Joint Staff officials stated that four star level officers already conduct command climate surveys that allow everyone within their unit or organization to assess the leader and organization. While the Navy, the Marine Corps, and the Joint Staff cited varying reasons for implementing 360-degree assessments only for certain general and flag officers, the inconsistent implementation of this tool across the department denies a number of senior military leaders valuable feedback on their leadership skills and an opportunity for developing an understanding of personal strengths and areas for improvement. Taking into account the military services’ and the Joint Staff’s differing reasons, including costs, for implementing 360-degree assessments only for certain general and flag officers, DOD may benefit from reassessing the need and feasibility of developing and implementing 360-degree assessments for all general and flag officers. Federal internal control standards emphasize the importance of assessing performance over time, but DOD is unable to determine whether its ethics and professionalism initiatives are achieving their intended effect because it has not yet developed metrics to measure the department’s progress in addressing ethics and professionalism issues. In 2012, we reported that federal agencies engaging in large projects can use performance metrics to determine how well they are achieving their goals and to identify any areas for improvement.
By using performance metrics, decision makers can obtain feedback for improving both policy and operational effectiveness. Additionally, by tracking and developing a baseline for all measures, agencies can better evaluate progress made and whether or not goals are being achieved—thus providing valuable information for oversight by identifying areas of program risk and their causes to decision makers. Through our body of work on leading performance management practices, we have identified several attributes of effective performance metrics (see table 4). SAMP officials stated that they recognize the need to continually measure the department’s progress in addressing ethics and professionalism, and are considering ways to do so; however, challenges exist. For example, the SAMP office plans to update the 2010 ethics survey by administering a department-wide ethics survey in 2015 to reassess DOD’s ethical culture. SAMP officials stated that they expect the new survey to yield valuable information on DOD’s ethical culture, but they have not identified metrics to assess DOD’s ethical culture. Additionally, SAMP officials stated that they plan to modify questions from the 2010 survey to lessen its focus on acquisition-related matters, and to collect new information. While modifying the questions from the 2010 survey may improve DOD’s understanding of its ethical climate, doing so could limit DOD’s ability to assess trends against baseline (2010) data. Moreover, DOD’s ability to assess trends in the future may also be affected by uncertainty as to whether the survey will be administered beyond 2015. SAMP officials attributed this uncertainty, in part, to survey fatigue within the department—a factor cited by SAMP officials that could also affect the response rate for the 2015 survey, and therefore limit the utility of the survey data. 
To combat this challenge, the SAMP office is considering merging the ethics survey with another related survey, such as the sexual assault prevention and response survey. According to SAMP officials, the Under Secretary of Defense for Personnel and Readiness has established a working group to address survey fatigue within the department. SAMP officials stated that they have also considered using misconduct report data to assess the department’s ethical culture, but that interpreting such data can be challenging. For example, a reduction in reports of misconduct could indicate either fewer occurrences or a decrease in reporting—the latter of which could be induced by concerns over retribution for reporting, officials stated. Additionally, our review found that the department’s ability to assess department-wide trends in ethical behavior is limited because misconduct report data are not collected in a consistent manner across DOD. Specifically, DOD organizations define categories of misconduct differently, thereby precluding comparisons of misconduct data across different organizations, as well as aggregate-level analysis of department-wide data. To address this challenge, the DOD Office of Inspector General is developing common definitions to standardize the collection of misconduct report data across the department. DOD Office of Inspector General officials estimated that the definitions will be finalized in 2016. Because of such challenges, SAMP officials are considering certain activities, such as increased focus on ethics-related matters by DOD senior leadership, to be indicators of progress. Our work on performance management has found that intermediate goals and measures such as outputs or intermediate outcomes can be used to show progress or contribution to intended results.
For instance, when it may take years before an agency sees the results of its programs, intermediate goals and measures can provide information on interim results to allow for course corrections. Also, when program results could be influenced by external factors beyond agencies’ control, they can use intermediate goals and measures to identify the program’s discrete contribution to a specific result. Our review found that various mechanisms were used by the industry and foreign military organizations we contacted to assess ethical culture, with officials from all four industry and foreign military organizations stating that their organization had used one or more tools to assess the ethical culture of their organizations. For example, one of the foreign military organizations we contacted administers a survey periodically to both civilian and military personnel to measure the organization’s ethical culture against a baseline that was established in 2003. SAMP officials similarly stated that a variety of data sources— including organizational, survey, attitudinal, behavioral, and perception of trust data—should be used to assess DOD’s ethical culture. However, without identifying specific sources, DOD will not have the information necessary to assess its progress. Moreover, without establishing clear, quantifiable, and objective metrics that include a baseline assessment of current performance to measure progress, or intermediate or short-term goals and measures, decision-makers in DOD and Congress will find it difficult to determine whether the department’s ethics and professionalism initiatives are on track to achieve desired outcomes. Maintaining a workforce characterized by professionalism and commitment to ethical values is key to executing DOD’s mission to protect the security of the nation; limiting conduct that can result in misuse of government resources; and maintaining servicemember, congressional, and public confidence in senior military leadership. 
As recent cases of misconduct demonstrate, ethical and professional lapses can carry significant operational consequences, waste taxpayer resources, and erode public confidence. Since 2007, DOD has taken significant steps to improve its ethical culture, for instance by conducting a department-wide ethics survey and follow-on study. The department has also acted to enhance oversight of its professionalism-related initiatives and issues, for example through creating the SAMP office. However, its overall effort could be strengthened by taking a number of additional steps. In particular, without fully considering the Panel on Contracting Integrity’s recommendation to create a values-based ethics program and the subsequent 2012 study recommendations, as well as assessing the feasibility of expanding annual values-based ethics training beyond the current mandated personnel, DOD will not have assurance that it is doing enough to promote an ethical culture, and it may face challenges in identifying areas for future action. Similarly, without performance information, including timelines and measures, DOD will not be optimally positioned to determine whether the SAMP—a key oversight position—should be renewed after its initial 2-year term, or to assess the SAMP office’s progress. At the military service level, further actions also could improve oversight of ethics and professionalism-related issues for senior leaders. For instance, without revising current guidance to comply with statutory requirements and departmental guidance and assure that commanders are conducting command climate assessments, the Army, the Air Force, the Navy, and the Marine Corps will be unable to discern whether commanders are obtaining feedback on their performance and promoting an effective culture. 
Furthermore, without examining the need for and feasibility of implementing 360-degree assessments for all general and flag officers, the Navy, the Marine Corps, and the Joint Staff will not have information that could enhance individual ethics and professional values. Finally, given the initiatives that DOD is planning and has under way, it is important that there be reliable means by which to gauge progress. Without identifying information sources and developing intermediate goals and performance metrics that are clear, quantifiable, and objective—and that are linked to an identified baseline assessment of current performance—decision makers in DOD and Congress will not have full visibility into the department’s progress on professionalism-related issues. As the department realigns itself to address new challenges, a sustained focus on ethics and professionalism issues will contribute to fostering the ethical culture necessary for DOD to carry out its mission. We recommend that the Secretary of Defense take the following six actions: 1. To promote and enhance familiarity with values-based ethical decision-making across the department, direct appropriate departmental organization(s), in consultation with the Office of General Counsel and the SAMP or its successor organization(s), to assess the feasibility of expanding annual values-based ethics training to include currently non-mandatory recipients. 2. To ensure that the need for a department-wide values-based ethics program has been addressed, direct appropriate departmental organization(s), in consultation with the Office of General Counsel, to identify actions and timeframes for responding to the Panel on Contracting Integrity recommendation, including the 14 related 2012 study recommendations, or alternatively demonstrate why additional actions are unwarranted. 3. 
To help inform decision makers on the SAMP’s progress as well as the decision regarding the extension of the SAMP’s term, direct the SAMP to define timelines and measures to assess its progress in completing its major tasks. 4. To increase assurance that commanders are conducting command climate assessments in accordance with statutory requirements and departmental guidance, direct the Secretaries of the Air Force, the Army, and the Navy, and the Commandant of the Marine Corps to modify existing guidance or develop new guidance to comply with requirements set forth in the Fiscal Year 2014 National Defense Authorization Act and internal DOD guidance. 5. To better inform the department’s approach to senior officers’ professional development, direct the Secretary of the Navy, the Commandant of the Marine Corps, and the Chairman of the Joint Chiefs of Staff to assess the need for and feasibility of implementing 360-degree assessments for all general and flag officers. 6. To improve DOD’s ability to assess its progress in addressing ethics and professionalism issues, direct the SAMP, through the Under Secretary of Defense for Personnel and Readiness, or SAMP’s successor organization(s), to identify information sources and develop intermediate goals and performance metrics. At minimum, these performance metrics should be clear, quantifiable, and objective, and they should include a baseline assessment of current performance. We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with comments on three of our six recommendations, partially concurred with two recommendations, and did not concur with one recommendation. DOD’s comments are summarized below and reprinted in appendix III. DOD also provided technical comments on the draft report, which we incorporated as appropriate. 
DOD concurred with comments on our first, second, and sixth recommendations, which relate to annual values-based ethics training, a department-wide values-based ethics program, and performance metrics, respectively. With regard to the first and sixth recommendations, DOD stated that the SAMP is a temporary office established by Secretary Hagel with a term ending no later than March 2016. As noted in our report, the SAMP office was established in March 2014 for an initial 2-year term, with an option to renew. Because the future of the SAMP office had not been determined at the time of this review, we directed these recommendations toward the SAMP or its successor organization(s). In its comments on our second recommendation, for DOD to respond to the Panel on Contracting Integrity recommendation, including the 14 related 2012 study recommendations, or alternatively to demonstrate why additional actions are unwarranted, DOD raised concerns regarding whether we are endorsing the 2012 study’s recommendations. We are not endorsing them. Our recommendation is for DOD to fully consider the Panel on Contracting Integrity’s recommendation and the subsequent 2012 study recommendations. If DOD does not believe such a program or the actions recommended by the 2012 study are warranted, then it should demonstrate why additional actions are unwarranted. Without fully considering the Panel’s recommendation, including the 2012 study recommendations, DOD will not have assurance that it is doing enough to promote an ethical culture. In addition, DOD voiced concern that the statement in the draft report that SOCO officials “do not plan to take any further action” with respect to the remaining 13 recommendations from the Phase II study could be misunderstood to imply that SOCO is unwilling to consider additional values-based ethics program initiatives. DOD elaborated that SOCO embraces values-based ethics training and other initiatives.
DOD added that, as noted elsewhere in the report, DOD has practices in place that are consistent with a number of the recommendations in the Phase II study, and that SOCO is most receptive to assessing and recommending implementation of additional measures where appropriate and feasible. As noted in our report, in 2013, SOCO partially implemented 1 of the study’s 14 recommendations by annually delivering values-based ethics training to select military and civilian personnel. In addition, SOCO cited existing practices as being consistent with some of the study’s remaining 13 recommendations. However, SOCO officials told us that they do not plan to take further action, and that the Panel and 2012 study recommendations were not binding. These officials also stated that implementing all of the study’s remaining 13 recommendations was neither feasible nor advisable. We continue to believe that without identifying actions DOD intends to take, with timeframes, to address the Panel’s recommendation, including the study’s other 13 recommendations, or demonstrating that further action is unwarranted, the department does not have assurance that the identified need for a values-based ethics program has been addressed. DOD partially concurred with our fourth recommendation, that the Air Force, the Army, the Navy, and the Marine Corps modify existing guidance or develop new guidance to comply with requirements set forth in the National Defense Authorization Act for Fiscal Year 2014 and internal DOD guidance, to increase assurance that commanders are conducting command climate assessments in accordance with these statutory requirements and departmental guidance. 
In its comments, DOD stated that the Army’s performance evaluation process requires that raters assess a commander’s performance in fostering a climate of dignity and respect, thereby, in DOD’s view, satisfying the National Defense Authorization Act’s requirement that commanders include a statement in their performance evaluations as to whether or not they conducted an assessment. In addition, DOD commented that although DOD guidance calls for organizations of fewer than 50 servicemembers to be surveyed with a larger unit, Army guidance calls for command climate surveys to be conducted at the company level and states that units of between 30 and 50 personnel may conduct their surveys separately or together with another unit, at the commander’s discretion. DOD further commented that, because the survey response rate is sufficiently high (58 percent), the Army can survey organizations with fewer than 50 servicemembers. Therefore, DOD believes that the Army meets the intent of departmental guidance for command climate survey utilization. As noted in our report, the Army’s Sexual Harassment/Assault Response and Prevention Program requires that command climate assessments be conducted. However, this program does not specifically require that commanders include a statement in their performance evaluations as to whether they conducted an assessment, or that failure to do so be recorded in their performance evaluation, as required by the National Defense Authorization Act for Fiscal Year 2014. Therefore, we continue to believe that without requiring that commanders include a statement in their performance evaluations about whether they have conducted a command climate assessment, and requiring that the failure of a commander to conduct a command climate assessment be noted in the commander’s performance evaluation, the Air Force, the Army, and the Marine Corps will not be complying with the mandated level of accountability that Congress intended during the performance evaluation process.
In addition, as noted in the report, DOD guidance requires that organizations of fewer than 50 servicemembers shall be surveyed with a larger unit in the command to ensure anonymity and provide the opportunity for all military personnel to participate in the process. We continue to maintain that, regardless of the survey response rate, without requiring organizations of fewer than 50 servicemembers to be surveyed with a larger unit, the Army may be unable to ensure that all unit members are able to participate in command climate surveys, and to do so anonymously, as intended by DOD guidance. DOD partially concurred with our fifth recommendation, that the Navy, the Marine Corps, and the Joint Chiefs of Staff assess the need for and feasibility of implementing 360-degree assessments for all general and flag officers, to better inform the department’s approach to senior officers’ professional development. In its comments, DOD stated that it concurs with the recommendation to assess the need for and feasibility of implementing 360-degree assessments, or 360-degree-like feedback assessments, where they are not already being performed. However, DOD stated that it does not believe it should assess the need for and feasibility of implementing this tool for all general and flag officers, but rather only for three-star ranks and below. As noted in our report, the 2013 General and Flag Officer Character Implementation Plan memorandum states that 360-degree assessments would be developed and used for all military service and Joint Staff general and flag officers, and a November 2013 memorandum issued by the Chairman of the Joint Chiefs of Staff to the President reiterates the department’s commitment to developing and implementing 360-degree assessments for all general and flag officers. The Air Force and the Army have developed and implemented 360-degree assessments for all of their general officers.
However, as noted in the report, the Navy, the Marine Corps, and the Joint Staff have developed and implemented 360-degree assessments only for certain general and flag officers, citing varying reasons, including costs, for doing so. We continue to believe that, given the inconsistency of the implementation of this tool across the department, DOD may benefit from reassessing the need for and feasibility of developing and implementing 360-degree assessments for all general and flag officers. Further, we continue to maintain that such a reassessment would support the department’s approach to senior officers’ professional development by increasing and improving the consistency of the information provided to leadership. DOD did not concur with our third recommendation, that the SAMP define timelines and measures to assess its progress in completing its major tasks, in order to help inform decision makers on the SAMP’s progress as well as the decision regarding the extension of the SAMP’s term. In its written comments, DOD stated that the department will submit its Fiscal Year 2015 National Defense Authorization Act report on military programs and controls regarding professionalism to Congress on September 1, 2015, thereby satisfying the requirements of this recommendation. Although DOD states that the intent of our recommendation will be satisfied by the September 1, 2015, report to Congress, we have not been provided a copy of the draft report and cannot determine whether the report will include timelines and measures. Further, while DOD stated that SAMP's dissolution will occur in March 2016, a formal decision has not yet been made. 
As we discussed in our report, DOD officials stated that there is an enduring need for the work and functions of the SAMP office because, among other things, building and sustaining an ethical culture and professionalism capacity constitute a continuous effort at every grade level, and because of the importance of having a direct link between senior leadership and the Secretary of Defense on ethics and professionalism matters. The intent of our recommendation is to help equip decision makers with the information necessary to assess SAMP's progress and thereby determine next steps regarding its future. We continue to believe that without timelines or measures to assess the office’s progress, DOD will not be positioned to assess whether SAMP is the appropriate vehicle to achieve these outcomes, or how best to allocate resources within the department to achieve them. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Secretaries of the Military Departments; and the Commandant of the Marine Corps. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
To evaluate the extent to which the Department of Defense (DOD) has developed and implemented a management framework to oversee its programs and initiatives on professionalism and ethics for active duty officers and enlisted servicemembers, we assessed—against leading practices for strategic planning and performance management, and federal internal control standards—guidance, plans, and work products to determine the extent to which DOD has defined roles, responsibilities, measures, and timelines for managing its existing ethics program and professionalism oversight framework. For example, we reviewed the Code of Federal Regulations and DOD guidance such as the Joint Ethics Regulation, which governs DOD’s ethics program and the management of related activities, including training, financial disclosure reporting, and gift receipt. We also reviewed work plans and timelines that define the Senior Advisor for Military Professionalism (SAMP) position and the scope of its activities. We compared, against federal internal control standards and practices for effective ethics programs and strategic training, actions and work products related to the department’s ongoing and planned initiatives to establish a values-based ethics program and to develop an ethical and professional culture. These documents included studies commissioned by DOD to assess its ethical culture and to design and implement a values-based program; memorandums and work products related to the 13 general and flag officer character initiatives; and Secretary of Defense memorandums requiring actions including ethics training and professional military education reviews. We also interviewed officials responsible for ethics and professionalism from the Office of the Secretary of Defense, the military services, and the Joint Staff to identify additional actions and determine progress in these areas.
We assessed these documents by comparing them against leading practices for strategic planning and performance measurement that relate to the need for detailed plans outlining major implementation tasks and defined measures and timelines to measure progress; and federal internal control standards related to the need for performance measures and indicators, and the importance of managers determining proper actions in response to findings and recommendations from audits and reviews and completing such actions within established timeframes. We obtained and analyzed misconduct data for fiscal years 2012 through 2014 from the DOD Office of Inspector General to identify discernible trends in reported misconduct, as well as data regarding the number of DOD personnel receiving annual ethics training. Specifically, we obtained calendar year 2014 DOD annual ethics training data that included active duty, reserve, and civilian personnel reported to the Office of Government Ethics by the 17 DOD Designated Agency Ethics Officials, excluding the National Security Agency. These are the most current data available on annual ethics training, and they are the data used by the Office of Government Ethics to determine DOD’s compliance with the annual training requirement for financial disclosure filers. We did not assess the reliability of these data, but we have included them in the report to provide context. We did not use these data to support our findings, conclusions, or recommendations. To determine the percentage of DOD personnel who have completed annual ethics training, we obtained fiscal year 2014 data from the Office of the Under Secretary of Defense (Comptroller) on the number of DOD personnel, including active duty and reserve component military personnel and civilian full-time equivalents.
We also reviewed relevant literature to identify ethics-related issues and best practices within DOD, and we met with foreign military officials, defense industry organizations, and commercial firms that we identified during our preliminary research and in discussion with DOD officials as having experience in implementing and evaluating compliance-based or values-based ethics programs in the public and private sectors, both domestically and internationally, to define the concept of values-based ethics and to gather lessons learned from values-based ethics program implementation. A full listing of these organizations can be found in table 5. To evaluate DOD’s tools and performance metrics for identifying, assessing, and measuring its progress in addressing ethics and professionalism issues, we examined assessment tools identified by DOD as containing ethics-related content, including command climate surveys and 360-degree assessments. We used content analysis to review and assess actions the department has taken to implement and use the results of command climate and 360-degree assessments in accordance with statutory requirements and departmental guidance. These requirements pertain to the implementation, tracking, and targeting of these tools, among other things. To do this, we met with officials from the Office of the Secretary of Defense, the military services, and the Joint Staff to obtain information on the status of their efforts to implement and track command climate assessments, and to develop and implement 360-degree assessments for general and flag officers in accordance with statutory requirements and departmental initiatives. We then assessed guidance and instructions developed by the military services and the Joint Staff to determine whether they addressed each of the statutory requirements and departmental guidance related to command climate assessments and 360-degree assessments.
To ensure accuracy, one GAO analyst conducted the initial content analysis by coding the military services’ and the Joint Staff’s actions with respect to each requirement, and a GAO attorney then checked the analysis. Any disagreements in the coding were discussed and reconciled by the analyst and the attorney. We determined that command climate guidance and instructions addressed a statutory or departmental requirement if they addressed each aspect of the requirement. Similarly, we determined the extent to which the military services and the Joint Staff had developed and implemented 360-degree assessments for all general and flag officers by evaluating the steps they had taken to develop and implement these tools for each general and flag officer rank within each organization. We also spoke with officials within the Office of the Secretary of Defense, the Joint Staff, and the military services to identify performance metrics that could be used by the department to measure its progress in addressing ethics and professionalism issues, and we assessed the department’s efforts to identify such metrics against federal internal control standards and our prior work on performance measurement leading practices. In addressing both of our audit objectives, we interviewed officials from the organizations identified in table 5. We conducted this performance audit from September 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov.
In addition to the contact named above, Marc Schwartz, Assistant Director; Tracy Barnes; Ryan D’Amore; Leia Dickerson; Tyler Kent; Jim Lager; Amie Lesser; Leigh Ann Sheffield; Michael Silver; Christal Ann Simanski; Cheryl Weissman; and Erik Wilkins-McKee made key contributions to this report.
Professionalism and sound ethical judgment are essential to executing the fundamental mission of DOD and to maintaining confidence in military leadership, but recent DOD and military service investigations have revealed misconduct related to, among other things, sexual behavior, bribery, and cheating. House Report 113-446 included a provision for GAO to review DOD's ethics and professionalism programs for military servicemembers. This report examines the extent to which DOD has developed and implemented (1) a management framework to oversee its programs and initiatives on ethics and professionalism; and (2) tools and performance metrics to identify, assess, and measure progress in addressing ethics and professionalism issues. GAO analyzed DOD guidance and documents related to military ethics and professionalism, reviewed literature to identify ethics issues and practices, and interviewed DOD, industry, and foreign military officials experienced in implementing ethics and professionalism programs. The Department of Defense (DOD) has a management framework to help oversee its existing ethics program and has initiated steps to establish such a framework to oversee its professionalism-related programs and initiatives, but its efforts could be strengthened in both areas. DOD has a decentralized structure to administer and oversee its existing, required compliance-based ethics program, which focuses on ensuring adherence to rules. However, DOD has not fully addressed a 2008 internal recommendation to develop a department-wide values-based ethics program, which would emphasize ethical principles and decision-making to foster an ethical culture and achieve high standards of conduct. In 2012, DOD studied the design and implementation of a values-based ethics program and in 2013 delivered related training to certain DOD personnel. 
DOD has decided to take no further actions to establish a values-based ethics program, but it has not demonstrated that additional actions are unwarranted or assessed the feasibility of expanding training to additional personnel. As a result, the department neither has assurance that it has adequately addressed the identified need for a values-based ethics program nor has information needed to target its training efforts appropriately. DOD established a 2-year, potentially renewable, position for a Senior Advisor for Military Professionalism, ending in March 2016, to oversee its professionalism-related efforts. Since 2014 the Advisor's office has identified and taken steps toward implementing some of its major tasks, which relate to coordinating and integrating DOD's efforts on professionalism. Professionalism relates to the values, ethics, standards, code of conduct, skills, and attributes of the military workforce. However, the office has not developed timelines or information to assess its progress in completing its major tasks. Thus, DOD does not have information to track the office's progress or assess whether the SAMP position should be retained after March 2016. DOD has not fully implemented two key tools for identifying and assessing ethics and professionalism issues, and it has not developed performance metrics to measure its progress in addressing ethics-related issues. DOD has identified several tools, such as command climate and 360-degree assessments, that can be used to identify and assess ethics and professionalism issues. However, guidance issued by the military services for command climate assessments does not meet all statutory requirements and DOD guidance. As a result, the services do not have the required level of accountability during the performance evaluation process over the occurrence of these assessments, or assurances that all military personnel are able to anonymously participate in them. 
Further, the Navy, Marine Corps, and Joint Staff have developed and implemented 360-degree assessments for some but not all general and flag officers, and therefore some of these officers are not receiving valuable feedback on their performance as intended by DOD guidance. Finally, federal internal control standards emphasize the assessment of performance over time, but DOD is unable to determine whether its ethics and professionalism initiatives are achieving their intended effect because it has not developed metrics to measure their progress. GAO recommends DOD determine whether there is a need for a values-based program, assess the expansion of training, modify guidance, assess the use of a key tool for identifying ethics and professionalism issues, and develop performance metrics. DOD generally or partially concurred with these recommendations but did not agree to develop information to assess the Advisor's office. GAO continues to believe the recommendations are valid, as further discussed in the report.
There is no single definition for financial literacy, which is sometimes also referred to as financial capability, but it has previously been described as the ability to make informed judgments and to take effective actions regarding current and future use and management of money. Financial literacy encompasses financial education—the processes whereby individuals improve their knowledge and understanding of financial products, services, and concepts. However, being financially literate refers to more than simply being knowledgeable about financial matters; it also entails utilizing that knowledge to make informed decisions, avoid pitfalls, and take other actions to improve one’s present and long-term financial well-being. Federal, state, and local government agencies, nonprofits, the private sector, and academia all play important roles in providing financial education resources, which can include print and online materials, broadcast media, individual counseling, and classroom instruction. Evidence indicates that many U.S. consumers could benefit from improved financial literacy efforts. In a 2010 survey of U.S. consumers prepared for the National Foundation for Credit Counseling, a majority of consumers reported they did not have a budget and about one-third were not saving for retirement. In a 2009 survey of U.S. consumers by the FINRA Investor Education Foundation, a majority believed themselves to be good at dealing with day-to-day financial matters, but the survey also revealed that many had difficulty with basic financial concepts. Further, about 25 percent of U.S. households either have no checking or savings account or rely on alternative financial products or services that are likely to have less favorable terms or conditions, such as nonbank money orders, nonbank check-cashing services, or payday loans. As a result, many Americans may not be managing their finances in the most effective manner for maintaining or improving their financial well-being. 
In addition, individuals today have more responsibility for their own retirement savings because traditional defined-benefit pension plans have increasingly been replaced by defined-contribution pension plans over the past two decades. As a result, financial skills are increasingly important for those individuals in or planning for retirement to help ensure that retirees can enjoy a comfortable standard of living. Efforts to improve financial literacy in the United States involve a range of public, nonprofit, and private participants. Among those participants, the federal government is distinctive for its size and reach, and for the diversity of its components, which address a wide array of issues and populations. At our forum last year on financial literacy, many participants said that the federal government had a unique role to play in promoting greater financial capability. They noted that the federal government has a built-in “bully pulpit” that can be used to draw attention to this issue. Participants also highlighted the federal government’s ability to convene the numerous agencies and entities involved in financial literacy, noting that the government has a powerful ability to bring people together. In addition, some participants noted the federal government’s ability to take advantage of existing distribution channels and points of contact between the government and citizens to distribute messages about financial literacy. In our ongoing work, we have found examples of federal agencies acting on such opportunities—for example, the Securities and Exchange Commission has worked with the Internal Revenue Service to include an insert about its investor education resources, including its “Investor.gov” education website, in the mailing of tax refund checks. 
At our first forum on financial literacy in 2004, participants noted that the federal government can serve as an objective and unbiased source of information, particularly in terms of helping consumers make wise decisions about the selection of financial products and services. Some participants expressed the belief that although the private sector offers a number of good financial literacy initiatives, it is ultimately motivated by its own financial interests, while the federal government may be in a better position to offer broad-based, noncommercial financial education. In preliminary results from an ongoing review, we have identified that, in fiscal year 2010, there were 16 significant financial literacy programs or activities among 14 federal agencies, as well as 4 housing counseling programs among 2 federal agencies and a federally chartered nonprofit corporation. We defined “significant” financial literacy programs or activities as those that were relatively comprehensive in scope or scale and included financial literacy as a key objective rather than a tangential goal. In prior work, we cited a 2009 report that had identified 56 federal financial literacy programs among 20 agencies. That report, conducted by the RAND Corporation, was based on a survey that had asked federal agencies to self-identify their financial literacy efforts. However, our subsequent analysis of these 56 programs found that there was a high degree of inconsistency in how different agencies defined financial literacy programs or efforts and whether they counted related efforts as one or multiple programs. We believe that our count of 16 significant federal financial literacy programs or activities and 4 housing counseling programs is based on a more consistent set of criteria. During his confirmation hearing, Comptroller General Dodaro noted that financial literacy was an area of priority for him, and he has initiated a multi-pronged strategy for GAO to address financial literacy issues.
First, we will continue to evaluate federal efforts that directly promote financial literacy. In addition to our recent financial literacy forum, we have ongoing work that focuses on, among other things, the cost of federal financial literacy activities, the federal government’s coordination of these activities, and what is known about their effectiveness. Second, we will encourage research on the various financial literacy initiatives to evaluate the relative effectiveness of different financial literacy approaches. Third, we will look for opportunities to enhance financial literacy as an integral component of certain regular federal interactions with the public. Finally, we have recently instituted a program to promote the financial literacy of GAO’s own employees. This program includes a distinguished speaker series, as well as an internal website with information on personal financial matters and links to information on pay and benefits and referral services through GAO’s counseling services office. Having multiple federal agencies involved in financial literacy efforts can have certain advantages. In particular, providing information from multiple sources can increase consumer access and the likelihood of educating more people. Moreover, certain agencies may have deep and long-standing expertise and experience addressing specific issue areas or serving specific populations. For example, the Securities and Exchange Commission has efforts in place to protect securities investors from fraudulent schemes, while the Department of Housing and Urban Development (HUD) oversees most, but not all, federally supported housing counseling. Similarly, the Department of Defense (DOD) may be the agency most able to efficiently and effectively deliver financial literacy programs and products to servicemembers and their families.
However, as we stated in a June 2011 report, relatively few evidence-based evaluations of financial literacy programs have been conducted, limiting what is known about which specific methods and strategies—and which federal financial literacy activities—are most effective. Further, the participation of multiple agencies highlights the need for strong coordination of their activities. In general, we have found that the coordination and collaboration among federal agencies with regard to financial literacy have improved in recent years, in large part due to the multiagency Financial Literacy and Education Commission. The commission was created by Congress in 2003 and charged, among other things, with developing a national strategy to promote financial literacy and education, coordinating federal efforts, and identifying areas of overlap and duplication. Among other things, the commission, in concert with the Department of the Treasury, which provides its primary staff support, has served as a central clearinghouse for federal financial literacy resources—for example, it created a centralized federal website and has an ongoing effort to develop a catalog of federal research on financial literacy. The commission’s 2011 national strategy identified five action areas, one of which was to further emphasize the role of the commission in coordination. The strategy’s accompanying Implementation Plan lays out plans to coordinate communication among federal agencies, improve strategic partnerships, and develop channels of communication with other entities, including the President’s Advisory Council on Financial Capability and the National Financial Education Network of State and Local Governments. 
The Financial Literacy and Education Commission’s success in implementing these elements of the national strategy is key, given the inherently challenging task of coordinating the work of the commission’s many member agencies—each of which has its own set of interests, resources, and constituencies. Further, the addition of the Bureau of Consumer Financial Protection, whose director serves as the Vice Chair of the commission, adds a new player to the mix. In our recent and ongoing work, we have found instances in which multiple agencies or programs share similar goals and activities, which raises questions about the efficiency of some federal financial literacy efforts. For example, four federal agencies and one government-chartered nonprofit corporation provide or support various forms of housing counseling to consumers—DOD, HUD, the Department of Veterans Affairs, the Department of the Treasury, and NeighborWorks America. Other examples of overlap lie in the financial literacy responsibilities of the Bureau of Consumer Financial Protection, which was created by the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). The act established within the bureau an Office of Financial Education and charged this office with developing and implementing a strategy to improve financial literacy through activities including opportunities for consumers to access, among other things, financial counseling; information to assist consumers with understanding credit products, histories, and scores; information about saving and borrowing tools; and assistance in developing long-term savings strategies. This office presents an opportunity to further promote awareness, coordinate efforts, and fill gaps related to financial literacy. 
At the same time, the duties this office is charged with fulfilling are in some ways similar to those of a separate Office of Financial Education and Financial Access within the Department of the Treasury, a small office that also seeks to broadly improve Americans’ financial literacy. In addition, the Dodd-Frank Act charges the Bureau of Consumer Financial Protection with developing and implementing a strategy on improving the financial literacy of consumers, even though the multiagency Financial Literacy and Education Commission already has its own statutory mandate to develop, and update as necessary, a national strategy for financial literacy. As the bureau has been staffing up and planning its financial education activities, it has been in regular communication with the Department of the Treasury and with other members of the Financial Literacy and Education Commission, and agency staff say they are seeking to coordinate their respective roles and activities. The Dodd-Frank Act also creates within the bureau an Office of Financial Protection for Older Americans, which is charged with helping seniors recognize warning signs of unfair, deceptive, or abusive practices and protect themselves from such practices; providing one-on-one financial counseling on issues including long-term savings and later-life economic security; and monitoring the legitimacy of certifications of financial advisers who advise seniors. These activities may overlap with those of the Federal Trade Commission, which also plays a role in helping seniors avoid unfair and deceptive practices. Further, the Department of Labor and the Social Security Administration both have initiatives in place to help consumers plan for retirement, and the Securities and Exchange Commission has addressed concerns about the designations and certifications used by financial advisers, who often play a role in retirement planning. 
Officials at the Bureau of Consumer Financial Protection told us that they have been coordinating their financial literacy roles and activities with those of other federal agencies to avoid duplication of effort. In prior work we have noted the importance of program evaluation and the need to focus federal financial literacy efforts on initiatives that work. Relatively few evidence-based evaluations of financial literacy programs have been conducted, limiting what is known about which specific methods and strategies are most effective. Financial literacy program evaluations are most reliable and definitive when they track participants over time, include a control group, and measure the program’s impact on consumers’ behavior. However, such evaluations are typically expensive, time-consuming, and methodologically challenging. Based on our previous work, it appears that no single approach, delivery mechanism, or technology constitutes best practice, but there is some consensus on key common elements for successful financial education programs, such as timely and relevant content, accessibility, cultural sensitivity, and an evaluation component. There are several efforts under way that seek to enhance evaluation of federal financial literacy programs. For example, the Financial Literacy and Education Commission has begun to establish a clearinghouse of evidence-based research and evaluation studies, current financial topics and trends of interest to consumers, innovative approaches, and best practices. In addition, the Bureau of Consumer Financial Protection recently contracted with the Urban Institute for a financial education program evaluation project, which will assess the effectiveness of two existing financial education programs and seeks to identify program elements that improve consumers’ confidence about financial matters. 
We believe these measures are positive steps because federal agencies could potentially make the most of scarce resources by consolidating financial literacy efforts into the activities and agencies that are most effective. The Bureau of Consumer Financial Protection was charged by statute with a key role in improving Americans’ financial literacy and is being provided with resources to do so. As such, the bureau offers potential in enhancing the federal government’s role in financial literacy. At the same time, as we have seen, some of its responsibilities overlap with those of other agencies, which highlights the need for coordination and may offer opportunities for consolidation. As the bureau’s financial literacy activities evolve and are implemented, it will be important to evaluate how those efforts are working and make appropriate adjustments that might promote greater efficiency and effectiveness. In addition, the overlap we have identified among programs and activities increases the risk of inefficiency and emphasizes the importance of coordination among the federal agencies involved. This underscores the importance of steps the Bureau of Consumer Financial Protection has been taking to delineate its roles and responsibilities related to financial literacy vis-à-vis those of other federal agencies, which we believe is critical in order to minimize overlap and the potential for duplication. Chairman Akaka, Ranking Member Johnson, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For future contacts about this testimony, please contact Alicia Puente Cackley at (202) 512-8678 or at cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Jason Bromberg, Mary Coyle, Roberto Piñero, Rhonda Rose, Jennifer Schwartz, and Andrew Stavisky also made key contributions to this statement. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Financial literacy plays an important role in helping to promote the financial health and stability of individuals and families. Economic changes in recent years have further highlighted the need to empower all Americans to make informed financial decisions. In addition to the important roles played by states, nonprofits, the private sector, and academia, federal agencies promote financial literacy through activities including print and online materials, broadcast media, individual counseling, and classroom instruction. This testimony discusses (1) the federal government’s role in promoting financial literacy, including GAO’s role; (2) the advantages and risks of financial literacy efforts being spread across multiple federal agencies; and (3) opportunities to enhance the effectiveness of federal financial literacy education efforts going forward. This testimony is based on prior and ongoing work, for which GAO reviewed agency budget documents, strategic plans, performance reports, websites, and other materials; convened forums of financial literacy experts; and interviewed representatives of federal agencies and selected private and nonprofit organizations. While this statement includes no new recommendations, in the past GAO has made a number of recommendations aimed at improving financial literacy efforts. The federal government plays a wide-ranging role in promoting financial literacy. Efforts to improve financial literacy in the United States involve an array of public, nonprofit, and private participants, but among those participants, the federal government is distinctive for its size and reach and for the diversity of its components, which address a wide range of issues and populations. 
At forums of financial literacy experts that GAO held in 2004 and 2011, participants noted that the federal government can use its “bully pulpit,” convening power, and other tools to draw attention to the issue, and serve as an objective and unbiased source of information about the selection of financial products and services. In prior work, GAO cited a 2009 report by the RAND Corporation in which 20 federal agencies self-identified as having 56 federal financial literacy programs, but GAO’s subsequent analysis found substantial inconsistency in how different agencies defined and counted financial literacy programs. Based on a more consistent set of criteria, GAO identified 16 significant financial literacy programs or activities among 14 federal agencies, as well as 4 housing counseling programs among 3 federally supported entities, in fiscal year 2010. The Comptroller General has initiated a multi-pronged strategy to address financial literacy issues. First, GAO will continue to evaluate federal efforts that directly promote financial literacy. Second, it will encourage research on the various financial literacy initiatives to evaluate the relative effectiveness of different approaches. Third, GAO will look for opportunities to enhance financial literacy as an integral component of certain regular federal interactions with the public. Finally, GAO has recently instituted a program to empower its own employees, which includes an internal website with information on personal financial matters and links to information on pay and benefits and referral services through its counseling services office, as well as a distinguished speaker series. Having multiple federal agencies involved in financial literacy offers advantages as well as risks. Some agencies have long-standing expertise and experience addressing specific issue areas or populations, and providing information from multiple sources can increase consumer access and the likelihood of educating more people. 
However, the participation of multiple agencies also highlights the risk of inefficiency and the need for strong coordination of their activities. GAO has found that the coordination and collaboration among federal agencies with regard to financial literacy have improved in recent years, in large part as a result of the Financial Literacy and Education Commission. At the same time, GAO has found instances of overlap, in which multiple agencies or programs, including the new Bureau of Consumer Financial Protection, share similar goals and activities, underscoring the need for careful monitoring of the bureau’s efforts. In prior work GAO has noted the importance of program evaluation and the need to focus federal financial literacy efforts on initiatives that work. Federal agencies could potentially make the most of scarce resources by consolidating financial literacy efforts into the activities and agencies that are most effective. In addition, the Bureau of Consumer Financial Protection offers potential for enhancing the federal government’s role in financial literacy, but avoiding duplication will require that it continue its efforts to delineate its financial literacy roles and responsibilities vis-à-vis those of other federal agencies with overlapping responsibilities.
During the last decade, a new kind of entity has emerged in public education: the for-profit provider of education and management services. Historically, school districts have contracted with private companies for noninstructional services, such as transportation and food service, and have also relied on contractors in some cases to provide limited instructional services to specified populations. Until recently, public schools have generally not contracted for the comprehensive programs of educational and management services that these companies typically offer. In recent years, the options available to public schools considering contracting with private companies have steadily grown. Today, approximately 20 major companies manage public schools. Nationally, it is estimated that these companies as well as other smaller companies serve over 300 schools out of the nation’s approximately 92,000 public schools. Although these companies manage public schools at all grade levels, most such privately managed public schools are elementary and middle schools. In these public schools, companies generally provide the same kinds of educational and management services that school districts do for traditional public schools. Educational services typically include a curriculum as well as a range of services designed to enhance or support student achievement, such as professional development opportunities for teachers, opportunities for parental involvement, and school environments designed to support students. Management services typically include personnel, payroll, and facilities management. Although these are the services that are typically offered to schools, companies also may adapt their services to respond to the preferences or needs of individual schools. For example, while some companies offer a particular curriculum or educational approach, others appear more willing to work with the curriculum the school or school district has already adopted. 
Typically, companies provide their services to public schools in one of two ways. First, they can contract directly with school districts to manage traditional public schools; such schools are known as “contract schools.” Second, they can manage charter schools, which are public schools that receive a degree of autonomy and freedom from certain school district requirements in exchange for enhanced accountability. Generally, charter schools are run by individual boards of trustees, which in most states and the District of Columbia have the authority to decide whether to contract with a private company. Both contract schools and charter schools remain public schools, however, and are generally subject to federal and state requirements for public schools in areas such as the application of standardized tests and special education. While the reasons public schools turn to private companies vary, the potential to increase student achievement appears to be one factor. In particular, according to certain experts and company officials we spoke to, school districts that seek a company’s help often do so with the expectation of raising achievement in struggling or failing schools. While management services appear to be especially important for charter schools that contract with such companies, charter schools also consider the potential to raise student achievement or a particular educational approach consistent with the school’s mission, according to school officials and experts we spoke with. Both types of schools that seek these companies’ assistance—struggling schools and charter schools—appear concentrated in urban areas. Further, several of the major companies reportedly serve a predominantly disadvantaged urban and minority student population. Recent changes in federal law have implications for the role played by these companies in public schools. 
The No Child Left Behind Act of 2001 requires that schools that fail to meet state student achievement standards for 5 consecutive years be restructured by implementing one or more alternative governance actions. One of the alternatives available to states and districts is to contract with an education management company. Three companies currently operate in the District of Columbia: Edison Schools, Mosaica Education, and Chancellor Beacon Academies. Edison began operating its first District school in 1998, and Mosaica and Chancellor Beacon first contracted with the District schools they manage in 2001. Throughout this report, these companies will generally be discussed in this order. Mergers and acquisitions are common among such companies. In 2001, Edison acquired nine schools nationwide through a merger with LearnNow. In the same year, Mosaica acquired nine schools nationwide through its acquisition of Advantage Schools. In addition, Chancellor and Beacon merged into a single company. Such changes can have several outcomes: in some cases, the company may operate schools that continue to use the educational program of another company; in other cases, the school may consider adopting the educational program of the new company or terminating the contract. The companies that operate public schools in the District of Columbia offer management and educational services as part of their programs; the extent to which District schools managed by these companies implemented all of the components of the companies’ programs varied. All of these companies offer programs that include management and educational services, such as curricula that integrate technology and professional development opportunities for teachers. Of the 10 District schools managed by these companies, 4 had completely implemented their company’s program. 
In school year 2001-02, all 10 District schools managed by these companies were charter schools with predominantly poor and minority student populations; most enrolled elementary and middle school students. Similar to traditional public schools, the District schools managed by these companies were required to be open to all students, up to their enrollment limits, and to meet District standards in areas such as health, safety, standardized testing, and compliance with federal special education requirements. The three for-profit companies that operate in the District of Columbia— Edison, Mosaica, and Chancellor Beacon—share common elements in terms of the management and educational services they offer to schools nationwide as well as those company officials described as distinctive. Each of the three companies generally offers similar management services. For example, all three offer management services such as personnel, payroll and facilities management, services that can be important for charter schools. In addition, the three companies employ some common approaches designed to improve student achievement. All three companies offer an extended school day and year. All three integrate technology in their educational programs. For example, all three offer students access to classroom computers. Similarly, all organize schools into smaller units to facilitate their tracking of students’ progress. All three provide summer training to teachers as well as other forms of professional development. Additionally, all have activities designed to involve and support parents and students. For example, each company uses parent satisfaction surveys. Experts we spoke to noted that these same approaches were being used in some other public schools. 
Finally, officials of all three companies stated that their companies contributed positively to school climate—a sense of mission and an environment conducive to learning—and cited aspects of school climate such as a safe and orderly school environment and teacher motivation. In addition to the characteristics they had in common, company officials identified others they believed were distinctive. These include, for example, their programs’ curriculum and instruction as well as the ability to provide economies of scale, develop community partnerships, and provide strong administrative support. As Table 1 shows, all three companies provided their services to schools in multiple states in 2001-02. According to Edison officials, its program has a number of distinctive characteristics. The first of these is its curriculum, which emphasizes basic skills, especially reading as the basis for future learning. It also includes enrichment in areas such as world languages (e.g., Spanish) and art. Edison’s basic skills curriculum includes components developed by Edison, such as a remedial reading program, and other components that Edison states are supported by research, such as Chicago Math and the Success for All reading program. Instructional methods are a second characteristic of Edison’s program. Edison schools use a variety of instructional methods. One of these, direct instruction, relies on repetition and drill. Other methods use projects, small groups, and individualized lessons. A third characteristic of Edison schools is their use of assessments. According to Edison officials, their program uses frequent assessments and the results of these assessments are promptly provided to teachers to assess student needs and provide appropriate additional help. “Systems and scale” is another key characteristic of Edison schools according to company officials. 
The company views its schools as part of a national system linked by a common purpose, and because of the system’s size, the company says it is able to purchase supplies at lower costs. Mosaica officials also identified certain distinctive characteristics of their company’s program. The first is the program’s curriculum, which has two parts. According to Mosaica officials, its morning program features instruction in traditional subjects such as reading and math. In the afternoon, students use Paragon—Mosaica’s own curriculum. According to company officials, Paragon stresses multidisciplinary learning, uses projects to emphasize the humanities, and recognizes students’ different learning styles. For example, students may use their reading, math, and social studies learning to build a pyramid or a Viking ship and thus study a period of history. According to company officials, projects accommodate a variety of learning styles—for example, some students learn visually, others by performing. Community involvement is a second key characteristic of Mosaica’s program. Company officials say that Mosaica brings community support into the school by networking with various community organizations. According to company officials, this provides its schools with access to additional resources. Chancellor Beacon officials also identified distinctive characteristics of their program. One is their willingness to customize their educational program to meet the needs and preferences of local schools. For example, in response to community interest, some Chancellor Beacon schools feature a cultural heritage element in the curriculum while one of its schools emphasizes the environment. Chancellor Beacon’s own curriculum was recently finalized in July 2002 and is based on an integration of the curricula of Chancellor and Beacon before they merged. 
One component of its curriculum is Core Knowledge—a program that expects students to master specific content in language arts, history, geography, math, science and fine arts. Other components emphasize ethics, morality and community volunteerism. A second key characteristic of Chancellor Beacon’s program is its operational support, according to company officials. These officials told us that in focusing on operational support, Chancellor Beacon allows schools to focus on academics. While the Chancellor Beacon program emphasizes customization as a key characteristic, the other two companies also allow schools to modify their programs. For example, in its reading program, Edison allows schools some flexibility regarding what books to read and in what order. In addition, up to one-fourth of its curriculum can be determined by the local school. Similarly, Mosaica allows its schools to use different approaches or materials in their morning session. While all of the 10 District schools managed by the companies during the 2001-02 school year obtained management services from these companies, the schools were more selective in implementing the companies’ educational programs. Of the 10 District schools, 4 have completely implemented the companies’ educational programs and 6 have adopted selected elements of their companies’ programs or chosen other programs, typically those of a previous company. A key factor that helps explain the difference between the programs the companies offer and what has been implemented by District schools is that recent mergers and acquisitions have led to changes in management companies in these 6 schools; these schools have generally left in place the educational programs of the companies that formerly managed them. Four schools, all managed by Edison, implemented the company’s educational program completely, according to company officials. 
These 4 schools all opened in 1998 as the result of a partnership between Friendship House, a nonprofit community organization serving District children and youth since 1904, and Edison. According to a Friendship House official, these schools completely implemented Edison’s program because they saw it as complementing their own goals. One of these schools—a high school—has supplemented the Edison program by developing a program to expose certain students to college through campus visits and workshops for parents. Six District schools adopted selected elements of their companies’ educational programs or chose other educational programs. These 6 schools include 2 schools managed by Edison, 2 by Mosaica, and 2 by Chancellor Beacon. All 6 schools have had recent changes in management companies as a result of mergers or acquisitions. The 2 schools that received services from Edison have opted to retain the curriculum already in place at the schools, rather than adopt the Edison program. In 2001, Edison bought LearnNow, the company that formerly provided services to the 2 schools. According to an Edison official knowledgeable about the schools formerly managed by LearnNow, the primary difference between the companies’ curricula was in elementary language arts, for which LearnNow preferred a different reading program than Success for All, which the Edison program uses in its other schools. The 2 schools managed by Mosaica have adopted some elements of the company’s educational program, and have plans to adopt more by 2003. In 2001, Mosaica bought Advantage, the company that formerly managed these schools. Both schools retained an instructional approach put in place by the previous company. This approach—direct instruction—emphasizes drill and repetition. By school year 2003, both schools expect to use direct instruction during the morning session and Paragon in the afternoon. 
The 2 schools managed by Chancellor Beacon both had distinct curricula in place before being managed by this company; one has combined its existing curriculum with elements of Chancellor Beacon’s, and the other has left its existing curriculum in place. The school that has adopted elements of Chancellor Beacon’s curriculum has done so by integrating the company’s language arts and math curriculum with the school’s existing curriculum, according to company officials. This school, which serves at-risk youth, had a curriculum called expeditionary learning, which focuses on learning through field trips and experiences. The other Chancellor Beacon school opted to retain its existing basic-skills curriculum, relying instead on the company’s management services and selected educational services, such as assessments. Chancellor Beacon officials support the schools’ choices regarding what company components to adopt. Company and school officials identified several reasons why these 6 schools did not completely implement the current company’s educational program, opting instead to continue with an existing curriculum. These included continuity for students, the company’s flexibility with regard to local customization, and the right of charter school boards to make broad curriculum decisions. The 10 schools in the District managed by these companies shared certain characteristics and served similar student populations in 2001-02. All were public charter schools governed by their own boards and accountable to District oversight authorities. Most (9) were combined schools spanning elementary and middle school grades. As public schools, they were required to accept any student who applied, up to their enrollment limit. Their student populations were substantially minority and poor: 92 to 100 percent African American and 48 to 95 percent receiving free or reduced school lunch. 
All served some students with special needs, such as learning disabilities: in 9 of the schools, the percentage ranged from 5 to 13 percent, and in one school, 32 percent of the student population had special needs. All but one served no or very few students with limited English proficiency; at the remaining school, students with limited English proficiency represented about 12 percent of all students enrolled. Little rigorous research exists on the effectiveness of the three educational management companies—Edison, Mosaica, and Chancellor Beacon—in the schools they manage across the country; as a result, we cannot draw conclusions about the effect that these companies’ programs have on student achievement, parental satisfaction, parental involvement, or school climate. Students in company-managed schools have demonstrated academic progress, but more research is needed to determine if this improvement is directly the result of the companies’ programs and if this progress is different from that of comparable students in traditional public schools. We reviewed five studies that addressed student achievement, but only one was conducted in a way that allows an assessment of the effect the company’s program had on student achievement in one school. The remaining studies had methodological limitations that precluded such assessments. In an effort to learn more about effectiveness, Edison has recently commissioned RAND, a nonprofit research organization that has evaluated educational reforms, to complete a study to assess its program’s impact. Determining the effect of an educational company’s program can be challenging for researchers. Ideally, evaluations of program effectiveness should involve a comparison of outcomes for one group exposed to a particular program with outcomes of a second group not exposed to the program. 
Some evaluations assign participants randomly to one group or the other to increase the likelihood that the two groups are roughly equivalent on all characteristics that could affect outcomes. This technique of random assignment is often problematic in educational research because public school enrollment is generally based on residency requirements. Therefore, the most common way to compare student achievement results from two different groups of students is to ensure the groups are similar in a number of ways, including socioeconomic status, ethnicity, and performance on prior academic assessments. In addition to controlling for the effects of these background characteristics, it is critical to follow the performance of students over time, preferably before any group has been exposed to the program, and at least one point thereafter. It is also beneficial to analyze individual student data, rather than grade or school-level averages, to account for individual differences and to factor in the effects of missing data. Within the context of rigorous educational program evaluations, various measurements can be used to capture a student’s performance on standardized tests. According to several experts, it is important to examine both the percent of students in a particular grade or school making yearly gains and the distribution of these gains across ability levels to ensure that students of all achievement levels are demonstrating academic growth. Another point of interest relates to the length of time students participate in a particular program. Some experts claim that students will exhibit greater gains the longer they participate in a program. However, it is particularly challenging to design studies that address this claim, because educational companies are still a relatively new phenomenon. 
We identified five studies concerning the three companies operating in the District that met the criteria for our review: inclusion of comparison groups, measurement over time, and focus on academic achievement, parental satisfaction, parental involvement, or school climate. All of the studies addressed the effectiveness of schools managed by Edison. One study also addressed the effectiveness of schools managed by all three private companies— Edison, Mosaica, and Chancellor Beacon. We were unable to identify any rigorous studies that included analysis of District public schools managed by any of these three companies. Of the studies included in our review, four studies addressed only outcomes related to student achievement, while one study addressed student achievement and other outcomes such as parental satisfaction and school climate. Only one of the studies, A Longitudinal Study of Achievement Outcomes in a Privatized Public School: A Growth Curve Analysis, based on one Edison school in Miami-Dade County, Florida, was conducted in a way that allows an assessment of the program’s effect on student achievement. This study followed individual student standardized test scores over a 3-year period and found that Edison students progressed at similar rates to those in the traditional Miami-Dade County Public Schools (MDCPS); this finding is not generalizable to other schools managed by Edison or any other private company. The study was designed to ensure that the Edison students were similar to the random sample of students drawn from MDCPS in terms of school grade, socioeconomic status, as indicated by the percent eligible for free/reduced price lunch, ethnicity, and achievement levels, as indicated by comparability in test scores prior to students enrolling in the Edison school. The study employed two different analytical techniques and both resulted in the finding that the Edison students progressed at similar rates to the traditional public school students. 
Several methodological techniques could have been employed that would have strengthened the study’s overall findings. These include controlling more specifically for school-level differences between the participating students, as well as better ensuring that the two groups of students remained equivalent despite study dropouts (subsequently referred to as attrition). Differences in the composition of these groups, after attrition, could affect the test score results. This study did not examine the effect of this company’s program on parental satisfaction, parental involvement, or school climate. Significant limitations in the other four studies preclude our making assessments of the effectiveness of the schools managed by Edison, Chancellor Beacon, or Mosaica that were included in those studies. These limitations included the use of comparison groups that did not adequately control for differences between the students in the companies’ schools and the students in traditional public schools, instances where achievement data were not available for all students, and a lack of adjustment for high attrition rates. Company officials report that one way to determine whether their programs are effective is to assess whether students demonstrate academic growth, as evidenced by improvement on standardized tests. There is evidence to support the assertion that students enrolled in schools managed by Chancellor Beacon, Mosaica, and Edison have demonstrated academic improvement from one point in time to another, but it is important to determine whether these gains are specifically the result of company programs. Additional research is in progress. Edison commissioned RAND to evaluate Edison schools across the country. Where possible, RAND plans to compare the scores of individual Edison students to those of traditional public school students with similar characteristics.
Since it is often difficult to gather individual-level student data, RAND will also compare Edison data, either at the grade or school level, to publicly available state data at that same level. RAND expects to publish its findings in 2004. We received written comments on a draft of this report from the Department of Education. These comments are presented in appendix III. Education stated that there are insufficient data on the effectiveness of private education companies and that it encourages others’ evaluation efforts. We also received comments from an expert on private education companies, the authors of the MDCPS study that we assessed, the District of Columbia Board of Education, the District of Columbia Public Charter School Board, Edison Schools, Mosaica Education, and Chancellor Beacon Academies. These comments were incorporated where appropriate. We are sending a copy of this report to the Secretary of Education, the District of Columbia Board of Education, the District of Columbia Public Charter School Board, Edison Schools, Mosaica Education, and Chancellor Beacon Academies. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7215. Other contacts and contributors to this report are listed in appendix IV. The objectives of our review were to (1) identify the characteristics of the for-profit educational management companies operating in the District and determine the extent to which District schools managed by these companies have used their programs and (2) determine what is known about the effectiveness of these programs, as measured primarily by student achievement. We conducted our work between January and September 2002, in accordance with generally accepted government auditing standards.
To identify the characteristics of the programs offered by for-profit companies operating in the District, and to determine the extent to which District public schools managed by them have used their programs, we interviewed company officials, representatives of the 10 schools, and officials of the District’s chartering authorities. We collected information on the companies from their Web sites and obtained technical comments from the companies on the descriptions of their programs. We also contacted education experts and advocates to obtain both their recommendations on research regarding the three for-profit companies and information on any research they might have conducted on the companies. In addition, we reviewed relevant research summaries and observed an on-site review of one school’s program conducted for District oversight authorities. To determine what is known about the effectiveness of these programs, we collected, reviewed, and analyzed information from available published and unpublished research on the effect of the three companies managing schools in the District on student achievement, parental satisfaction, parental involvement, and school climate. We also spoke with RAND officials about the design and methods of their current evaluation of Edison Schools. To identify relevant research, we followed three procedures: (1) interviewed experts to find out what studies were completed or in the process of being completed on the effectiveness of company programs; (2) conducted library and Internet searches; and (3) reviewed bibliographies of studies that focused on the effectiveness of company programs. We reviewed studies concerning the three companies operating in the District that met the following criteria: included comparison groups and measurement over time, and focused on academic achievement, parental satisfaction, parental involvement, or school climate.
Our final list of studies for review consisted of five studies, as listed in appendix II. We did not identify any studies that evaluated the effect of these three programs in District schools. Two GAO social scientists examined each study to assess the adequacy of the samples and measures employed, the reasonableness and rigor of the statistical techniques used to analyze them, and the validity of the results and conclusions that were drawn from the analyses. For selected studies, we contacted the researchers directly when we had questions about their studies. In order to identify research that explicitly addresses the effect on student achievement, parental satisfaction, parental involvement, or school climate of the three companies managing schools in the District, we interviewed experts to determine what studies were completed or in the process of being completed, conducted library and Internet searches, and reviewed bibliographies of studies that focused on the effect of these companies’ programs on student achievement. Although five studies met our criteria for review (inclusion of comparison groups, measurement over time, and focus on academic achievement, parental satisfaction, parental involvement, or school climate), we cannot draw conclusions, because of methodological weaknesses, from the four studies listed below. Conclusions from A Longitudinal Study of Achievement Outcomes in a Privatized Public School: A Growth Curve Analysis were presented in the text. Miron, Gary, and Brooks Applegate. An Evaluation of Student Achievement in Edison Schools Opened in 1995 and 1996. Kalamazoo, Michigan: The Evaluation Center, Western Michigan University, December 2000. Miron and Applegate analyzed both individual- and aggregate-level data and compared improvements in the test scores of 10 Edison schools with those of comparison schools, districts, states, and national norms, where applicable.
However, significant weaknesses prevented conclusive statements on the effects of Edison schools. These weaknesses included limitations in the available data, such as incompleteness and inconsistency, high attrition rates, and the lack of corresponding adjustments for attrition. Horn, Jerry, and Gary Miron. An Evaluation of the Michigan Charter School Initiative: Performance, Accountability, and Impact. Kalamazoo, Michigan: The Evaluation Center, Western Michigan University, July 2000. Horn and Miron examined the percentage of students earning a passing grade on achievement tests in individual charter schools in Michigan in comparison with the percentage passing in the districts where these schools were located. The analysis included schools managed by Edison, Mosaica, and Chancellor Beacon. Weaknesses included inadequate controls for differences between the students in charter schools and their host districts, no consideration of attrition rates, and the likelihood that analyses were often based on a small number of students. American Federation of Teachers. Trends in Student Achievement for Edison Schools, Inc.: The Emerging Track Record. Washington, D.C.: October 2000. Researchers examined school- and grade-level achievement data from 40 Edison schools in eight states and compared them with data gathered from school districts and other schools. Weaknesses included insufficient information about the methodology employed by the states, including construction of comparison groups and matching techniques, and a lack of analysis of attrition rates. Gomez, Joseph, Ph.D., and Sally Shay, Ph.D. Evaluation of the Edison Project School. Final Report, 1999-00 (portions related to parental satisfaction and involvement, and school climate). Office of Evaluation and Research, Miami-Dade County Public Schools (MDCPS), April 2001. Gomez and Shay examined responses from surveys MDCPS had administered to parents and teachers from both the Edison school and the control group.
However, the outcomes related to parental satisfaction and involvement were measured with single-item survey questions that do not seem to capture the full context of the concepts. School climate was measured with a single-item question on a teacher survey and with school archival data. Gomez and Shay did not report whether any differences were statistically significant, in part because they acknowledged it would be inappropriate to conduct tests of significance on single-item questions. Therefore, there is no evidence to determine whether Edison school parents were more satisfied or involved than those in the control group, or whether the Edison school improved school climate. We are aware of other studies and reports that address the effect of Chancellor Beacon Academies, Mosaica Education, and Edison Schools on academic achievement, parental satisfaction, parental involvement, or school climate; the following are examples that did not meet the criteria for inclusion in our review. District of Columbia Public Charter School Board. School Performance Reports. Washington, D.C.: August 2001. Department of Research, Evaluation, and Assessment, Minneapolis Public Schools. Edison/PPL School Information Report 2000-2001. Minneapolis, Minnesota: 2001. Department of Administration, Counseling, Educational and School Psychology, Wichita State University. An Independent Program Evaluation for the Dodge-Edison Partnership School: First Year Interim Report. Wichita, Kansas: 1996. Missouri Department of Elementary and Secondary Education. Charter School Performance Study: Kansas City Charter Schools. Jefferson City, Missouri: 2001. Company-provided information such as annual reports and school performance reports.
Other sources of general information included school district Web sites and other educational services, such as Standard & Poor’s School Evaluation Services and the National Association of Charter School Authorizers’ Educational Service Provider Information Clearinghouse. In addition to those named above, Rebecca Ackley and N. Kim Scotten made key contributions to this report. Jay Smale, Michele Fejfar, Kevin Jackson, Sara Ann Moessbauer, and Shana Wallace provided important methodological contributions to the review of research. Patrick Dibattista and Jim Rebbe also provided key technical assistance. School Vouchers: Characteristics of Privately Funded Programs. GAO-02-752. Washington, D.C.: September 26, 2002. School Vouchers: Publicly Funded Programs in Cleveland and Milwaukee. GAO-01-914. Washington, D.C.: August 31, 2001. Charter Schools: Limited Access to Facility Financing. GAO/HEHS-00-163. Washington, D.C.: September 12, 2000. Charter Schools: Federal Funding Available but Barriers Exist. HEHS-98-84. Washington, D.C.: April 30, 1998. Charter Schools: Recent Experiences in Accessing Federal Funds. T-HEHS-98-129. Washington, D.C.: March 31, 1998. Charter Schools: Issues Affecting Access to Federal Funds. T-HEHS-97-216. Washington, D.C.: September 16, 1997. Private Management of Public Schools: Early Experiences in Four School Districts. GAO/HEHS-96-3. Washington, D.C.: April 19, 1996. Charter Schools: New Model for Public Schools Provides Opportunities and Challenges. GAO/HEHS-95-42. Washington, D.C.: January 18, 1995. School-Linked Human Services: A Comprehensive Strategy for Aiding Students At Risk of School Failure. GAO/HRD-94-21. Washington, D.C.: December 30, 1993.
In recent years, local school districts and traditional public schools have taken various initiatives to improve failing schools. School districts and charter schools are increasingly contracting with private, for-profit companies to provide a range of education and management services to schools. In the District of Columbia, some public schools contract with three such companies: Edison Schools, Mosaica Education, and Chancellor Beacon Academies. These three companies have programs that consist of both management services, such as personnel, and educational services, which they offer to schools across the nation; in the District, most of the schools managed by these companies have either adopted selected elements of their companies' programs or chosen other educational programs. Each company provides services such as curriculum, assessments, parental involvement opportunities, and student and family support. Little is known about the effectiveness of these companies' programs on student achievement, parental satisfaction, parental involvement, or school climate because few rigorous studies have been conducted. Although the companies publish year-to-year comparisons of standardized test scores to indicate that students in schools they manage are making academic gains, they do not present data on comparable students who are not in their programs, a necessary component of a program effectiveness study.
Ex-Im is the official export credit agency of the United States, and operates under the authority of the Export-Import Bank Act of 1945, as amended. It operates as an independent agency of the U.S. government with a staff of approximately 370 full-time permanent employees. Ex-Im’s core mission is to support U.S. exports and jobs by providing export financing that is competitive with the official export financing support offered by other governments. To accomplish its mission, Ex-Im offers a variety of financing instruments, including loan guarantees, export credit insurance, and working capital guarantees for preexport financing. Between fiscal years 2003 and 2005, Ex-Im processed a yearly average of 3,055 requests for loans, guarantees, and insurance. Of the processed applications, Ex-Im approved an average of 2,981 applications (or 98 percent) per year. In general, Ex-Im’s charter prohibits the bank from extending financing for a project if doing so will adversely affect U.S. industry. Ex-Im tests for adverse effects by (1) reviewing projects for applicable trade sanctions and (2) conducting its own economic impact analysis. For this economic impact analysis, the charter provides that, if a commodity for export resulting from Ex-Im financing will compete with U.S. production of the same, similar, or competing commodity, or will be in surplus on world markets at the time of first production, Ex-Im must determine whether extending the financing will cause substantial injury to U.S. producers. (The charter defines “substantial injury” as the establishment or expansion of foreign production capacity equal to or exceeding 1 percent of U.S. production.) However, under its charter, Ex-Im may fund a project if, in the judgment of the board of directors, the short- and long-term benefits to industry and employment in the United States are likely to outweigh the injury to U.S. producers and employment of the same, similar, or competing commodity. 
This can put Ex-Im in the challenging position of balancing the interests of two different industries—the industry of the U.S. exporter it is financing and the industry that may face additional competition as a result of the initial export (see fig. 1). Economic impact is one of many factors Ex-Im considers when determining whether to finance a project. Other factors that Ex-Im must weigh include the project’s feasibility from an engineering point of view, the project’s possible environmental impact, whether the project involves small business, and the borrower’s creditworthiness. Other countries, such as Japan and the United Kingdom, also have export credit agencies with broad mandates to finance projects that benefit their domestic economies. However, unlike Ex-Im, these export credit agencies are not required to weigh the potential economic costs to domestic industries against the benefits associated with a specific financed export. Furthermore, these agencies do not consider the relevance of trade measures to a project, as Ex-Im is required to do. In its 2005 competitiveness report, Ex-Im states that having to consider these additional elements, such as the economic impact, when deciding whether to finance a project puts Ex-Im at a disadvantage compared with other export credit agencies. Ex-Im’s economic impact analysis screening process is designed to identify projects with the most potential to adversely impact U.S. industry; Ex-Im then conducts a detailed analysis of those projects. Applications are sequentially screened on the basis of criteria specified in Ex-Im’s charter or established by Ex-Im in the exercise of its discretion under the charter. For the applications that receive a detailed analysis, Ex-Im assesses whether the products that will result from its financing will be in surplus on world markets or in competition with U.S. production, and it estimates the net impact of the projects on U.S. trade flows. 
Ex-Im also solicits public and agency comments on the potential projects. Between fiscal years 2003 and 2005, Ex-Im approved most of the 771 requests to finance projects that involved increasing foreign production of an exportable good, and that, therefore, passed the first screen and were deemed applicable for further economic impact review. Ex-Im’s economic impact analysis screening process consists of a series of rules used to sequentially remove from further economic impact review applications for projects it deems unlikely to adversely impact U.S. industry. Ex-Im’s charter explicitly requires certain screens and Ex-Im introduced others, using its discretion under the charter. Between fiscal years 2003 and 2005, the screens identified 20 applications that required a detailed analysis. The screens remove most requests from the process because they involve financing of $10 million or less; however, Ex-Im reviews those projects postauthorization in its Annual Review of Economic Impact. Ex-Im screens applications for economic impact on the basis of several characteristics, some that Ex-Im’s charter explicitly requires, others that Ex-Im established exercising its discretion under the charter. During the screening process, Ex-Im staff in the Policy Analysis Division assign an economic impact code to each application. These screens are as follows: Foreign production of an exportable commodity. Ex-Im’s charter requires it to review for economic impact those requests to finance projects that would result in increased foreign production. Under Ex-Im’s procedures, only requests financing the export of capital goods or services from the United States that might allow a foreign company to increase production of an exportable good are subject to further scrutiny. This screen removes the bulk of applications from economic impact analysis. (Ex-Im codes requests to finance projects that do not increase foreign production as “not applicable,” or NA.) Trade measures. 
Ex-Im’s charter requires it to consider whether trade measures—antidumping or countervailing duty orders and section 201 injury determinations—apply to products that would result from Ex-Im financing. According to Ex-Im officials, Ex-Im does not fund projects directly subject to trade measures as a matter of practice, although it has the authority to do so if the board determines that a project’s benefits outweigh its costs. This screen removes applications whose projects are subject to trade measures not just from further economic impact analysis, but from eligibility for Ex-Im financing. (Ex-Im codes these requests as “trade sanctions,” or TS.) Foreign production of oil and gas or diamonds—”undersupplied” products. Ex-Im has determined, with input from other agencies, that all projects increasing the foreign production of oil and gas or diamonds are unlikely to adversely impact the U.S. economy. This screen removes requests to finance projects involving oil and gas or diamonds from further economic impact analysis. (Ex-Im codes these requests as “undersupplied,” or US.) Financing threshold of $10 million. Ex-Im presumes that projects requesting financing of $10 million or less are too small to adversely impact the U.S. economy. According to a senior Ex-Im official, Ex-Im selected $10 million as the threshold because that figure is used for a variety of other bank purposes, including whether applications should be reviewed by the board of directors or should receive an environmental assessment. The official also stated that the use of this threshold is reasonable for the economic impact process because $10 million financing is likely to result in little foreign production and, therefore, is not likely to adversely impact the U.S. economy. 
This screen removes applications requesting financing of $10 million or less from further economic impact analysis prior to final financing decisions (although these requests are subject to an annual review after authorization, which we describe later). (Ex-Im codes these requests as “annual review,” or AR.) One percent substantial injury test. Ex-Im’s charter requires it to conduct a detailed economic impact analysis when a project will cause “substantial injury,” defined as an increase in foreign production greater than or equal to 1 percent of U.S. production of the same or a similar good. To conduct this test, Ex-Im calculates a simple ratio of the expected increase in foreign production resulting from the project to current U.S. production in that industry. Ex-Im’s procedures also allow for the use of “proportionality” in conducting the 1 percent test, which Ex-Im defines as the relation of the dollar value of the Ex-Im-financed U.S. component of the project to its overall cost. This screen removes applications whose projects would increase foreign production by less than 1 percent from further economic impact analysis. (Ex-Im codes these requests as “no substantial injury,” or NSI.) The remaining applications are subject to detailed analysis. (Ex-Im codes these requests as “hold for analysis,” or HA.) See figure 2 for information on how Ex-Im categorizes applications throughout the screening process. The screens Ex-Im uses in its economic impact analysis identify a small share of applications for detailed analysis. Between fiscal years 2003 and 2005, the vast majority of applications was determined not to support foreign production of exportable goods and, therefore, was not applicable for economic impact analysis. Of the 771 requests that involved foreign production of an exportable good and that, therefore, were applicable for economic impact analysis, 679 were eliminated from the process because they were $10 million or less. 
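Taken together, the screens described above form a short sequential rule chain. The sketch below is our illustrative reconstruction, not Ex-Im code: the codes (NA, TS, US, AR, NSI, HA), the $10 million threshold, and the 1 percent test come from this report, while the field names and sample figures are hypothetical.

```python
# Illustrative reconstruction of Ex-Im's sequential economic impact
# screens, as described in this report. Field names are hypothetical.

UNDERSUPPLIED_SECTORS = {"oil and gas", "diamonds"}

def screen(app):
    """Return the economic impact code for an application."""
    if not app["increases_foreign_production"]:
        return "NA"   # not applicable for economic impact analysis
    if app["subject_to_trade_measures"]:
        return "TS"   # trade sanctions: ineligible for financing
    if app["sector"] in UNDERSUPPLIED_SECTORS:
        return "US"   # undersupplied product; no further analysis
    if app["financing"] <= 10_000_000:
        return "AR"   # small financing; reviewed annually post-authorization
    # One percent substantial injury test: new foreign production relative
    # to current U.S. production of the same or a similar good.
    if app["new_foreign_production"] / app["us_production"] < 0.01:
        return "NSI"  # no substantial injury
    return "HA"       # hold for detailed analysis

# Hypothetical application: $25 million in financing for a project adding
# foreign capacity equal to 5 percent of U.S. production.
example = {
    "increases_foreign_production": True,
    "subject_to_trade_measures": False,
    "sector": "steel",
    "financing": 25_000_000,
    "new_foreign_production": 50_000,
    "us_production": 1_000_000,
}
code = screen(example)  # "HA": held for detailed analysis
```

Because the rules are applied in sequence, an application is removed by the first screen it triggers; only applications that pass every screen are held for detailed analysis.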
Of the remaining 92 applications, 72 were eliminated by other screens and 20 were held for detailed analysis. Figure 3 illustrates the composition of applications by screening category, both in terms of the number of projects and the dollar value of applications. At the end of every fiscal year, Ex-Im aggregates projects it financed for less than $10 million by foreign buyer, and then by product, to determine if, collectively, a buyer’s portfolio of projects meets the definition of substantial injury. Ex-Im staff report their findings in a document entitled Annual Review of Economic Impact Cases. Ex-Im cannot rescind funding if it finds after the review that a buyer’s projects collectively meet the definition of substantial injury. When Congress reauthorized Ex-Im’s charter in 2006, it introduced a new process to ensure that smaller projects do not collectively meet the definition of substantial injury. The new legislation requires Ex-Im to review a foreign borrower’s requests on an ongoing basis, aggregating its applications over the previous 24 months to ensure that its financed portfolio does not surpass $10 million. If the aggregate financing does exceed $10 million, the bank must subject the entire aggregate production from the proposed project and relevant projects approved during the preceding 24-month period to further economic impact analysis. According to Ex-Im’s revised procedures, only the most recent, proposed project will be affected by the results of this economic impact scrutiny. For applications that remain after the screening process, Ex-Im conducts a detailed analysis. The detailed analysis’s components are designed to address specific legislative requirements, including comments solicited from the public and relevant U.S. government agencies. Ex-Im compiles its findings, along with its conclusion regarding whether the project will negatively impact the U.S. economy, in a memorandum to the board of directors. (See app. 
II for a list of applications for which Ex-Im began a detailed economic impact analysis between fiscal years 2002 and 2006.) In its detailed economic impact assessments, Ex-Im addresses the specific statutory requirements concerning the assessment of whether a foreign product will be in surplus in world markets or in competition with U.S. production, and estimates an overall impact on trade flows. The components of this analysis include (1) an assessment of whether the foreign product potentially supported by Ex-Im financing will be in surplus on world markets—which Ex-Im terms “in oversupply,” (2) an estimate of U.S. production that could be displaced by competition with the increased foreign production, and (3) the net impact on U.S. trade flows. According to its procedures, Ex-Im assesses whether the product to be produced by the foreign buyer is in oversupply using a set of indicators that include trade measures, such as antidumping duties on related products, and stagnating global prices. Finally, Ex-Im estimates the net effect on the U.S. economy by comparing the trade flows associated with the initial U.S. export and any follow-on, spare-part sales with the potential displaced production. This net economic impact assessment provides the type of analysis that, according to a senior Ex-Im official, could inform the board of directors’ decision to exercise its discretion in approving applications where, for example, foreign production could compete with U.S. producers and represents 1 percent or more of U.S. production. Ex-Im’s charter also requires it to solicit public comments. Ex-Im publishes a public notice in the Federal Register when beginning a detailed analysis and allows for a 14-day public comment period.
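The net trade-flow comparison at the heart of the detailed analysis reduces to a simple calculation. The sketch below is our hypothetical illustration of that arithmetic, not Ex-Im’s actual model, and all dollar figures are invented.

```python
# Hypothetical sketch of the net trade-flow estimate described above:
# the initial U.S. export plus follow-on spare-part sales, weighed
# against U.S. production that increased foreign competition could
# displace. All figures are in dollars and invented for illustration.

def net_trade_impact(initial_export, spare_part_sales, displaced_production):
    """Positive values suggest the export's benefits exceed the estimated
    displacement; negative values suggest a net cost to U.S. trade flows."""
    return (initial_export + spare_part_sales) - displaced_production

# Example: a $50 million export with $5 million in expected spare-part
# sales, against $40 million of potentially displaced U.S. production.
net = net_trade_impact(50_000_000, 5_000_000, 40_000_000)  # +$15 million
```

In practice, each of the three inputs is itself an estimate, projected over time and market conditions, which is a large part of why the analysis is inherently challenging.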
For the applications we reviewed, Ex-Im’s public notices contained (1) the project’s value, (2) the country where the foreign borrower was located, (3) the goods to be produced, (4) the expected resulting amount of increased production of that good, and (5) the potential areas where the end product would be marketed. We found that Ex-Im consistently posted Federal Register notices containing the requisite information. The 2006 reauthorization codifies that practice and also requires Ex-Im to include information about the amount of the financing involved. In addition, the new legislation requires the bank to publish a revised public notice and allow for another comment period if a project changes materially. Ex-Im also consistently solicited comments on draft analyses from relevant U.S. government agencies: the Departments of Commerce, State, and the Treasury and the Office of the U.S. Trade Representative (USTR). The 2006 reauthorization codifies that practice and additionally requires Ex-Im to notify relevant congressional committees that it is conducting a detailed economic impact analysis. Ex-Im staff create an economic impact memorandum describing their findings, along with their conclusion regarding whether the project is likely to have a positive or negative impact on the U.S. economy. Between fiscal years 2003 and 2005, Ex-Im approved financing for about two-thirds of the projects that involved foreign production of an exportable good and that, therefore, were applicable for economic impact review. When reviewing applications, Ex-Im’s board of directors considers economic impact and other factors. Ex-Im’s 2006 reauthorization requires the bank to provide a nonconfidential summary of the facts found and conclusions reached in any detailed economic impact analysis to the affected party, when requested.
Ex-Im considered 771 applications applicable for economic impact review between fiscal years 2003 and 2005 and approved 525 projects, or 67 percent, which represented approximately $6.1 billion in financing. Of the approved projects, most had been removed from the economic impact process because the financing value was $10 million or less; however, these projects represented a relatively small portion of the approved financing ($615 million). Conversely, applications removed from the economic impact process because the project involved an “undersupplied” sector comprised a small number of approved projects (49) but the majority of approved financing ($3.8 billion). Of the 20 applications held for detailed analysis, Ex-Im approved 11, representing $1.7 billion. Figure 4 compares the number of approved projects by each economic impact code with the respective dollar value. The board or its designee decides whether to approve or deny any application on the basis of the economic impact designation in conjunction with many factors, including several other evaluations, such as an engineering feasibility study, an environmental impact assessment, and credit information about the applicant and the project. For those applications that undergo a detailed analysis, Ex-Im’s charter provides an exception that allows the board to approve the application if it finds that the short- and long-term benefits to industry and employment in the United States outweigh the costs to U.S. producers of a competing good. Under this authority, the board of directors could approve an application even if the staff concluded that the project would create a negative economic impact. The 2006 reauthorization requires Ex-Im to provide affected parties with a nonconfidential summary of the facts and conclusions of any detailed economic impact analysis within 30 days of receiving a written request. 
Prior to the reauthorization, Ex-Im published the board of directors’ financing decisions, but not information on whether the bank had conducted an economic impact analysis or the analysis’s findings. We identified substantial challenges and certain limitations in Ex-Im’s economic impact process. Determining the economic impact of a project is an inherently challenging process that requires defining which products and geographic markets will be affected and projecting future market trends. With respect to Ex-Im’s screening of applications to identify those for detailed analysis, we found that the screens varied in effectiveness; in particular, the effectiveness of the $10 million threshold is uncertain and has not been analyzed by Ex-Im. We identified certain methods used in the detailed analysis that could be improved. These methods featured inconsistencies and limitations in how Ex-Im has estimated potential costs to U.S. producers related to their production being displaced over time by increased foreign competition. Also, how Ex-Im characterizes the net effect of its financing on the U.S. trade balance should be clarified. In addition, Ex-Im’s internal controls could be strengthened to better ensure that the identification process and analysis are conducted consistently and accurately. While the number of applications for financing received by Ex-Im annually creates challenges in assessing all potential applications for economic impact, we found that the screens Ex-Im established using its discretion under the charter varied in effectiveness. Excluding requests to finance projects in the oil and gas sector and the diamond sector from detailed economic impact analysis because they are undersupplied has been an effective screen; however, the effectiveness of the $10 million screen is uncertain. 
As we have previously discussed, Ex-Im processed 9,255 requests for financing from fiscal years 2003 through 2005, 771 of which involved foreign production of an exportable good and, therefore, were applicable for economic impact review. While Ex-Im reviews all applications for potential economic impact, the additional procedures it has introduced to screen out projects that are unlikely to have an adverse impact on U.S. producers are also intended to more effectively allocate Ex-Im’s limited resources. Ex-Im’s exclusion of the oil and gas sector and the diamond sector from detailed analysis because they are “undersupplied” has been an effective screen, developed with input from other agencies and informed by previous analyses of those sectors. Ex-Im initially identified as potentially “undersupplied” a list of 31 natural resource sectors for which imports accounted for more than 50 percent of U.S. consumption. Ex-Im reduced the list to 2 sectors (Ex-Im designated oil and gas as a single sector) with input from the U.S. government agencies that review the detailed analyses and the Department of Energy. Importantly, Ex-Im officials stated that, in the past, economic impact analyses of applications for projects in these sectors had always yielded a positive impact on the U.S. economy, and that, because these sectors were natural resources, the United States had limited ability to expand production domestically. Ex-Im created the undersupplied list to more effectively allocate its resources. The $10 million threshold’s effectiveness as a screen is uncertain because Ex-Im has not determined the extent to which it identifies projects that could meet the statutory definition of substantial injury. As we have previously discussed, the threshold was chosen, in part, on the basis of other Ex-Im practices that are triggered at $10 million, such as a board review and an environmental impact assessment. 
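The undersupplied screen discussed above amounts to a simple import-penetration test. A minimal sketch of that logic follows; the function name and sample figures are hypothetical illustrations, not Ex-Im data or code:

```python
def is_potentially_undersupplied(us_imports, us_consumption, threshold=0.5):
    """Flag a natural resource sector as potentially "undersupplied" when
    imports exceed the threshold share (here, 50 percent) of U.S.
    consumption, per the screen described in the report."""
    return us_imports / us_consumption > threshold

# Hypothetical sector figures (billions of dollars), for illustration only.
print(is_potentially_undersupplied(us_imports=62.0, us_consumption=100.0))  # True
print(is_potentially_undersupplied(us_imports=38.0, us_consumption=100.0))  # False
```

In practice, as the report notes, Ex-Im narrowed the resulting candidate list with interagency input rather than applying the ratio mechanically.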
Ex-Im officials stated that requests for financing of $10 million or less would generally be too small to increase foreign production by 1 percent or more of U.S. production. However, Ex-Im has not conducted an analysis to confirm that the $10 million threshold captures the appropriate projects. In theory, even a relatively small export of capital goods or services could be used to produce 1 percent or more of production in a small U.S. industry. More generally, the dollar value of a capital good project can be an imperfect signal of the size of the project in terms of its production as a percentage of the corresponding U.S. industry. For example, Ex-Im estimated that a $14 million export of equipment to Russia would allow production of polystyrene to expand by 1.4 percent of U.S. polystyrene production. In contrast, Ex-Im estimated that a $16.25 million export of mining equipment to Japan would allow a foreign company to produce roughly 14.6 percent of annual titanium production in the United States. We identified two requests for financing of less than $10 million whose projects were associated with estimated foreign production of over 1 percent of U.S. production in an industry; however, data limitations did not allow us to conduct a thorough review of projects with a financed value of $10 million or less. First, an export of $9.9 million of ethanol dehydration equipment to Trinidad would allow a foreign company to produce an estimated 3.5 percent of U.S. production of anhydrous ethanol. We learned of the ethanol project because Congress required Ex-Im to conduct a postapproval detailed analysis in the Consolidated Appropriations Act of 2004. Second, we identified a $9.8 million export of mining equipment that would allow a foreign company to produce an estimated 1.73 percent of production in a U.S. industry. 
We identified the mining project when we attempted to sample 10 applications requesting financing of $10 million or less, from a universe of 80 applications between $5 million and $10 million, to examine whether they resulted in foreign production equal to or greater than 1 percent of U.S. production in an industry. Of the 10 capital goods exports in our sample, Ex-Im could provide information on the amount of resulting production for only 2; 1 of these was the $9.8 million mining project that we have previously described. Thus, we were largely unable to determine the extent to which Ex-Im’s $10 million threshold screened out applications that would have met the 1 percent substantial injury test. The mining project and the ethanol project, while treated in accordance with Ex-Im’s procedure to exclude requests for financing of $10 million or less from detailed economic impact analysis, indicate that requests of $10 million or less can be associated with production of over 1 percent of a corresponding U.S. industry’s output. As we have previously noted, the 1 percent threshold is an important legislative criterion because it establishes whether a project meets the definition of substantial injury. Determining the economic impact of a project is an inherently challenging process; however, we identified limitations in certain assumptions Ex-Im makes to estimate potential costs to U.S. producers, and in how it characterizes the net effect of its financing on the trade balance. The modeling of international economic markets to determine the impact of government decisions and policies, including Ex-Im financing decisions, features a number of inherent challenges. Simplifications are always necessary to model complex economic interactions, and, even under simplified assumptions, precise data may not exist to address the question at hand. 
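At its core, the 1 percent substantial injury test these examples turn on is a ratio of estimated new foreign production to total U.S. production. A minimal sketch follows, with hypothetical inputs (Ex-Im’s actual worksheets are not public, and the function name is ours):

```python
def one_percent_test(new_foreign_production, us_production):
    """Return the estimated new foreign production as a percentage of U.S.
    production, and whether it meets the 1 percent threshold in the
    statutory definition of substantial injury."""
    share_pct = 100.0 * new_foreign_production / us_production
    return share_pct, share_pct >= 1.0

# Illustration: even a small financed value can trip the test when the
# corresponding U.S. industry is small (units and figures hypothetical).
share_pct, meets = one_percent_test(new_foreign_production=35_000,
                                    us_production=1_000_000)
print(f"{share_pct:.2f} percent of U.S. production; meets 1 percent test: {meets}")
```

Because the test depends on the size of the affected U.S. industry rather than on the financed amount, a dollar-value screen such as the $10 million threshold is only a proxy for it.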
In some analyses, Ex-Im has found it challenging to define the industries that would be affected by Ex-Im-supported production, both in terms of products and geographic extent, a determination that also influences estimates of the costs to U.S. producers. To calculate displaced production, Ex-Im must define the relevant industry, determine the regional or global markets in which there could be competition with U.S. producers, and collect trade and consumption data that are based on those markets. One case where Ex-Im officials noted challenges in obtaining the appropriate product data concerned a project supporting a denim plant in Turkey. To estimate potential displacement of U.S. denim exports, Ex-Im used data on U.S. exports of high-cotton-content denim (to reflect the Turkish manufacturer’s plan to produce “high-end” jeans). However, Ex-Im stated there was a lack of data on broader supply-and-demand factors for this denim—such as global capacity utilization for denim plants—and thus relied on projections for the price of jeans, because 85 percent of all denim is used to produce jeans. An analysis of a semiconductor production facility in Singapore also illustrates market definition challenges. Ex-Im identified a type of “leading edge” semiconductor as the relevant product market, but also noted that because of the on-demand nature of production in the facility, it was difficult to conduct a trade flow analysis or determine potential displacement of semiconductors made in the United States. Defining the industry appropriately and collecting data to match that definition is an inherent challenge in conducting an analysis of this kind. More broadly, the full economic impact on U.S. industries of projects financed by Ex-Im depends on determinations or assumptions regarding what would happen in the absence of the financing. 
For Ex-Im, predicting these effects can involve determining or making assumptions regarding (1) what would happen to U.S. productive resources if Ex-Im’s financing for a project did not exist or (2) how global prices would evolve if new capacity were not added. Foreign competition for financing could also have implications for what would happen in the absence of Ex-Im financing. For example, if Ex-Im denied financing, the borrower might seek financing from another country’s export credit agency, resulting in similar capacity being added abroad without the use of U.S. goods or services. However, because foreign competition for financing can exist for many projects, a senior Ex-Im official noted that the application of this rationale would risk undercutting other economic impact provisions. In contrast, if a particular U.S. exporter would supply a foreign producer whether Ex-Im financed the project or not, then those exports would not be in addition to what would happen without Ex-Im support. There are limitations in certain assumptions that Ex-Im has made to estimate potential costs to U.S. producers related to displaced production that is spread over time or lower prices for U.S. competitors, which are important elements of the detailed economic impact analyses.

Calculation of Displaced U.S. Production

There are limitations and inconsistencies in how Ex-Im has calculated displaced U.S. production that is spread out over time. In measuring the potential cost of Ex-Im financing to U.S. industries, Ex-Im staff generally begin by estimating the annual level of displaced production in specific countries where U.S. production is expected to compete with the production supported by the Ex-Im loan or guarantee. This estimate is based on how much of the increased foreign production will be sold to countries that U.S. producers also supply, and on the current U.S. market share in those countries. 
While Ex-Im rightly considers both the present and future costs and benefits of its projects, we identified limitations and inconsistencies in its estimates—including its assumptions regarding (1) whether displacement, when it occurs, will happen every year or every other year and (2) how Ex-Im accounts for expected growth in global demand for a product in its estimates of displaced production—that can reduce or eliminate the amount of displaced production as initially estimated. These assumptions can, in some cases, significantly affect estimates of displaced production and, hence, net economic impact. Importantly, OMB notes that in cost-benefit analyses, major assumptions should be varied to determine how sensitive outcomes are to changes in the assumptions. Ex-Im has sometimes used an every-other-year method of calculating displaced production that occurs over time. Assuming that U.S. production would be displaced only every other year can significantly reduce estimates of displaced production as compared with an annual approach; it can reduce the estimated displaced production by close to half. In one 2005 case where Ex-Im used this approach, it estimated a net-positive trade impact with increased exports of $14.9 million and displaced production of $9.8 million over 8.5 years. Assuming displacement in every year would have yielded a net negative impact. In a 2006 analysis, estimated costs were reduced from $221,000 to $123,000 by assuming that displacement would occur every other year, although in that case the estimated value of exports was substantially higher than the estimated displacement, so the assumption did not change the net trade effect estimate. Ex-Im has explained the use of every-other-year discounting on varying grounds, including normal supply-and-demand cycles and regular cyclical fluctuations in the industry. 
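The effect of the every-other-year assumption can be sketched numerically. In the sketch below, the export value ($14.9 million) echoes the 2005 case in the text, but the horizon is rounded to 8 whole years and the $2.3 million-per-year displacement rate is our own assumption, chosen for illustration rather than taken from Ex-Im's analysis:

```python
def total_displacement(annual_rate, horizon_years, every_other_year=False):
    """Sum estimated displaced U.S. production (millions of dollars) over
    the analysis horizon. Under the every-other-year assumption,
    displacement is counted only in alternating years, which roughly
    halves the total relative to counting every year."""
    step = 2 if every_other_year else 1
    years_counted = len(range(0, horizon_years, step))
    return annual_rate * years_counted

exports = 14.9  # estimated U.S. export benefit, millions of dollars
displaced_eoy = total_displacement(2.3, 8, every_other_year=True)   # 4 years counted
displaced_all = total_displacement(2.3, 8, every_other_year=False)  # 8 years counted
print(f"every other year: net trade effect {exports - displaced_eoy:+.1f}")
print(f"every year:       net trade effect {exports - displaced_all:+.1f}")
```

As in the case the report describes, the same underlying estimate yields a positive net trade effect under the every-other-year assumption and a negative one when displacement is counted annually, which is why the choice of assumption matters.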
However, such cyclical fluctuations are not likely to reduce the level of displaced production relative to what would occur without Ex-Im’s financing, because the cyclical variation is not induced by the additional capacity supported by Ex-Im. In contrast, Ex-Im did not use an every-other-year approach to displaced production in a case where it characterized the industry as cyclical. Ex-Im has assumed in some analyses that growing demand for the commodities it is analyzing would eliminate the initial amount of displaced production it estimated. For example, in an analysis of a potential facility to increase foreign production of polypropylene, Ex-Im assumed that an estimated $83 million in displaced U.S. production over 8 years would not actually be displaced because of growing global demand for polypropylene. However, this implicitly assumes that, in the absence of Ex-Im support for the larger facility, U.S. production would not have expanded on its own to take advantage of that growing demand. Therefore, Ex-Im’s estimate of displaced production will be highly sensitive to assumptions regarding how U.S. producers would meet growing world demand if new Ex-Im-supported capacity did not exist. Ex-Im made similar assumptions—that growing demand would offset potential displaced production—in an analysis of flat glass production in Mexico. Officials at one agency from which Ex-Im solicits comments stated that these assumptions were very optimistic, and that a sensitivity analysis would be appropriate.

Potential Costs Related to Lower Prices

Ex-Im’s method of estimating displaced production does not adequately acknowledge the potential costs to U.S. producers in some cases as a result of lower global prices. Ex-Im’s methodology for estimating the economic losses to U.S. competitors does not capture indirect costs that are transmitted through changes in global market prices. 
As we have previously noted, the estimate of displaced production is focused on specific countries in which U.S. firms are expected to directly compete with the new foreign production. However, some costs to U.S. firms may come in the form of lower prices for homogeneous globally traded commodities, instead of directly displaced production. These price changes could occur even in markets where there is no direct competition with the Ex-Im-supported foreign production, and should be acknowledged even if they cannot be calculated precisely. An official from one of the agencies that Ex-Im consults on economic impact also stated that one cannot necessarily assume that an increase in production in a single region will not affect global prices. For example, in a detailed analysis of the economic impact of a plant in Egypt that would produce ammonia, Ex-Im’s estimate of the costs to domestic producers may not have captured the potential effect of lower global prices on those producers. Ex-Im stated that output from this plant was not expected to directly compete with U.S. ammonia exports. However, the United States procures ammonia globally and, therefore, is not insulated from even distant changes in market conditions. In comments provided to Ex-Im, industry officials also noted that because ammonia is a commodity, any increase in global supply would drive down prices. Similarly, in a detailed analysis of the economic impact of a plant in Israel that would produce polypropylene, Ex-Im focused on potential losses to U.S. producers in specific export markets. However, Ex-Im also noted in the analysis that polypropylene is a “bulk commodity that is widely traded and can easily be transported worldwide.” This suggests that additional polypropylene capacity abroad could reduce the polypropylene prices faced by U.S. producers, even if they are not in direct regional competition with the new production. 
There are a number of potential techniques, which vary in complexity, to estimate or characterize the potential impact of certain types of Ex-Im financing on global prices. The United States International Trade Commission often uses sophisticated and resource-intensive economic models to estimate an array of effects of changes in U.S. trade policies on, among other things, the prices faced by U.S. producers. However, other less complicated and less resource-intensive techniques could be used to approximate the impact of global supply changes on prices. According to OMB guidance, an enumeration of the different types of costs and benefits can be helpful in identifying the full range of potential effects, and, in addition, analyses should include a statement of the strengths and weaknesses of assumptions. Ex-Im officials stated that the separate assessment of oversupply should address some of these price effects. However, while the oversupply analysis may indicate the overall direction of global prices, it is not intended to measure the impact of Ex-Im-supported production on global prices or the potential effect of relatively lower prices on U.S. producers. Ex-Im’s characterization of its net trade flow analysis as reflecting impacts on the overall U.S. trade balance is misleading and should be clarified. As we have previously noted, a net comparison of how trade in two industries—the exporting industry and U.S. producers of the foreign-produced good—would be affected by Ex-Im financing is a key component of the detailed analyses. In its economic impact memorandums concerning its detailed analyses, Ex-Im generally presents the amount of this estimated net impact as a change in the U.S. trade balance, stating that the trade balance will “improve” by the full dollar value of the exports it finances, less lost production. This characterization is misleading because the incremental impact of Ex-Im financing is likely to be less than the total value of those exports. 
Economists generally agree that the aggregate trade balance is largely determined by macroeconomic factors, especially the domestic balance between savings and investment. Thus, the incremental impact of Ex-Im financing is likely to be much smaller than the total value of U.S. exports supported by Ex-Im or the total value of displaced production. However, while the size of the impact on the U.S. balance of trade is overstated, Ex-Im’s conclusions about net economic impact are likely to have been unaffected by this practice because these costs and benefits are both overstated. We found that the internal controls Ex-Im uses to ensure the accuracy of its economic impact identification and analysis process could be strengthened. According to the Standards for Internal Control in the Federal Government, internal controls should reasonably ensure the effectiveness and efficiency of operations and compliance with applicable laws and regulations. Control activities include a wide range of diverse activities, such as training, approvals and verifications, and the creation and maintenance of related records that provide evidence of execution of these activities as well as appropriate documentation. The manner in which Ex-Im conducts at least three control activities does not reasonably ensure effective analyses. First, Ex-Im did not provide the employees conducting the analyses with formal training or guidance on how to conduct the analysis. Second, Ex-Im did not consistently document internal review of the analysts’ work. Third, Ex-Im does not maintain documentation of certain important pieces of information. Without strong internal controls, Ex-Im cannot ensure that all requests for financing are appropriately analyzed. Although appropriate training is a key internal control, Ex-Im provided the analysts with whom we spoke only limited training and little systematic guidance on how to conduct an economic impact analysis. 
According to the Standards for Internal Control in the Federal Government, management should ensure that employees have the required skills to achieve organizational goals. Training should be aimed at developing and retaining employee skill levels to meet changing organizational needs. According to the five analysts with whom we spoke, Ex-Im’s training includes reading the economic impact procedures and previously conducted analyses and informal mentoring from coworkers. One analyst relied on a notebook compiled by his predecessor and another analyst relied on a template; however, according to bank officials, neither of these documents had been sanctioned by Ex-Im. This training and guidance may not be sufficient to ensure the use of the same fundamental methodological approach across analyses, particularly given that the Policy Analysis Division, which is responsible for conducting the analyses, has experienced substantial turnover since 2002. Officials from the Policy Analysis Division stated that the economic impact analysts always consult with the engineers when conducting a detailed analysis because they provide important technical expertise; however, the engineers do not consistently approve final analyses. According to the Standards for Internal Control in the Federal Government, key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error. This includes separating the responsibilities for reviewing the analyses. The Ex-Im policy division relies on the engineering division for industry-specific information. For example, the Engineering and Environment Division generally calculates the 1 percent tests for all applications and helps the analysts define the appropriate commodity markets. In addition, engineers contact the exporters and borrowers to gather the technical information necessary to make those determinations. 
However, while the employee who conducted the analysis and the head of the policy division always signed the final economic impact analyses to denote their concurrence with the analysis, the engineers did not. Engineers signed only 6 of the 14 economic impact analyses for which the board of directors made final financing decisions. Ex-Im officials acknowledged that, although the policy division does consult with engineers for every detailed analysis, Ex-Im does not have any rigorous procedures prescribing when an engineer should sign an analysis. Without the consistent signatures denoting engineer review, Ex-Im loses an important layer of assurance that its analyses were accurately conducted. We also found that Ex-Im does not maintain documentation of important information concerning its detailed analyses. According to the Standards for Internal Control in the Federal Government, all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The policy division does not maintain records of the underlying data sources for its 1 percent test calculations, only the results of the calculations. Without the underlying data, the test cannot be replicated. The policy division also does not keep copies of draft analyses that it circulates to the reviewing agencies for their comments. The policy division also does not keep records of projects for which it began a detailed analysis, but which the applicants withdrew prior to the board making a final financing decision. A senior bank official noted that it probably would be a good idea for the policy division to start keeping files on the withdrawn transactions. Commerce, State, Treasury, and USTR have played an important role in the quality assurance process regarding Ex-Im transactions that undergo a detailed economic analysis. 
In addition to specifically notifying these agencies when it begins a detailed analysis, Ex-Im provides them with a copy of the draft detailed analysis and asks that they provide their analytic and policy opinions. An Ex-Im official noted that the bank has voluntarily circulated the draft analyses to be as inclusive as possible, but it is not required to do so by its charter. Each of the four agencies reviews the detailed economic impact analysis in light of larger U.S. government policies, laws, and economic principles. The agencies often provided Ex-Im with important quality assurance feedback through informal dialogue. For example, when reviewing a draft of a transaction concerning denim, USTR noted in an e-mail to Ex-Im that the analysis had not considered how the end of textile quotas, which had happened just prior to the transaction’s application for financing, would affect the global supply of textiles, including denim. Ex-Im modified its analysis to incorporate this consideration. In addition to providing quality assurance, the agencies’ comments can influence a transaction’s outcome. For example, when agencies expressed the opinion that steel production was in overcapacity, Ex-Im’s staff changed their conclusion that the transaction would have a “net positive impact” to a conclusion that it would have a “net negative impact.” In an early draft of a detailed analysis concerning direct reduced iron production, Ex-Im staff concluded that steel would not be in oversupply when the foreign buyer’s factory came on-line. However, three of the four agencies disagreed with this assessment. According to the economic impact memorandum for this transaction, Ex-Im staff deferred to the collective expertise of the agencies and changed their conclusion. Ex-Im generally requests the agencies’ comments within 1 week of circulating the draft detailed analysis to them. 
Several agency officials stated that 1 week is not enough time to thoroughly review an analysis, given its complexity and the need to obtain the views of senior-level officials. However, some agency officials noted that Ex-Im does try to accommodate their requests for additional information and review time. We found that some aspects of Ex-Im’s economic impact process lacked transparency. While Ex-Im publicly posts its procedures, they are difficult to understand and contain undefined terms. In addition, Ex-Im does not provide all public comments to its board of directors as required by its procedures. Ex-Im’s publicly available procedures do not clearly lay out how it analyzes applications for economic impact; therefore, interested parties are unable to reasonably assess their project’s viability. In addition, Ex-Im could increase the process’s transparency by referencing its list of sensitive sectors in its procedures and publishing the detailed analyses’ outcomes. Ex-Im’s procedures for analyzing applications are unclear to lenders and exporters directly involved in those projects, other industry officials, and U.S. government officials. According to Ex-Im’s annual competitiveness report, many lenders and exporters involved in projects requesting the bank’s financial support expressed particular concern that the economic impact issue needs greater transparency and predictability. One exporter who participated in Ex-Im’s annual competitiveness survey noted that, because the economic impact process is unpredictable, project sponsors may consider finding an alternative to the U.S. product and financing if the project would be subject to economic impact analysis. Industry officials with whom we spoke also generally noted that the process was not clear. One industry official called the process “a black box.” Similarly, officials from one U.S. 
government agency with whom we spoke noted that Ex-Im’s criteria and methodological assumptions were unclear. Ex-Im’s oversupply assessment—which can be a key factor in determining economic impact—lacks a clear basis because Ex-Im has not defined oversupply or matched the list of oversupply indicators in its procedures with those that it actually uses. As we have previously noted, a determination of oversupply—Ex-Im’s interpretation of the statutory consideration of whether production is in surplus on world markets—can be a basis for denial of an application. Ex-Im has also referred to information gathered in its assessment of oversupply in its determination of potential displaced production and, thus, its estimate of net economic impact. There is no generally accepted definition of oversupply, which Ex-Im’s procedures and staff both acknowledge. In fact, the excess supply of a good over demand is not likely to be a persistent condition because, in most markets, prices will adjust to bring the supply of the good in balance with the demand. However, various indicators can provide perspectives on the outlook for supply and demand, and on whether expansions in capacity might come at a time of falling prices. Ex-Im officials stated that they have not created an operational definition of oversupply to guide their assessment of it in detailed economic impact analyses. Instead, according to its procedures, Ex-Im analyzes transactions on a case-by-case basis and assesses oversupply according to a series of possible indicators. These indicators are as follows:

- Final antidumping and countervailing duty orders on similar products elsewhere.
- Section 201 investigations.
- Stagnating or falling global prices.
- Falling gross margins of domestic producers.
- Industry bankruptcy and unemployment trends.
- Trade Adjustment Assistance petitions.
- Preliminary antidumping and countervailing duty determinations.
- Multilateral production limitation agreements. 
Ex-Im has not generally used the more domestically focused indicators listed in its procedures to support conclusions regarding oversupply, and the procedures do not include a key indicator that it has used. Ex-Im officials stated that the oversupply assessment is made on a global basis. (Ex-Im’s charter refers to surplus on “world markets.”) However, most of the indicators listed in Ex-Im’s procedures refer to laws, programs, or conditions in the United States that are not necessarily reflective of conditions on global markets. These include, for example, trade measures used by U.S. firms to mitigate the adverse effects of competition from foreign imports. While Ex-Im’s economic impact memorandums often contained information on these trade measures in a separate section, the presence or absence of these measures is not generally identified as the basis for support of oversupply determinations. Furthermore, an indicator that has been important to Ex-Im’s determinations, capacity utilization, is not listed among the indicators of oversupply in its procedures. Ex-Im’s conclusions about oversupply are typically supported by information related to prices, capacity utilization, and direct measures or forecasts of global supply and demand. Differences in criteria considered important for determining oversupply have been the basis for disagreements regarding whether Ex-Im should deny an application on economic impact grounds. An Ex-Im official stated that the lack of a definition for oversupply has been problematic because individuals may differ regarding whether a commodity is in oversupply, depending on the factors they consider. As a result of such disagreements, some transactions at Ex-Im have “stopped in their tracks,” according to the Ex-Im official. This was illustrated in the case of a transaction that would have increased steel capacity in Saudi Arabia. Ex-Im and several agencies initially disagreed regarding oversupply on the steel project. 
Ex-Im’s final economic impact assessment concluded that the transaction would likely have a net negative impact on the U.S. economy, and Ex-Im’s board denied the transaction. An official with one of the agencies from which Ex-Im solicits comments also stated that oversupply has been an area of disagreement. Similarly, Ex-Im does not clearly define when the concept of “proportionality” would be used. An Ex-Im official noted that the bank included proportionality in its procedures after the 2002 reauthorization to retain some flexibility in how it analyzed applications. A senior official stated that, in some cases, it is not reasonable for the bank to assume responsibility for all of a project’s increased production when it finances only a portion of the overall project. Instead, the concept of proportionality allows the bank to measure the potential for its financing to displace the production of U.S. competitors in proportion to its funding. Applying proportionality would reduce the estimated costs to U.S. producers. For example, if Ex-Im financed $100 million worth of U.S. exports associated with a larger $2 billion project, the bank would be supplying 5 percent of the total project cost. If the $2 billion facility produced 10,000 metric tons of an exportable good, Ex-Im would assess the impact of its financial support on U.S. competitors on the basis of only 5 percent of the output—in proportion with its funding—or 500 metric tons. Using proportionality can change a net negative determination to a net positive determination. For example, Ex-Im applied the proportionality concept to the estimate of displaced production regarding a project that would allow a Chinese company to increase production of petrochemicals. According to documents provided by other government officials, Ex-Im’s analysis of a petrochemical project noted approximately $170 million in expected benefits from the U.S.
export sale, but approximately $750 million in potential indirect “lost opportunity” costs. Using standard calculations, the analysis would have yielded a net negative impact of over $580 million. However, Ex-Im applied proportionality and found that its share of the project financing equaled only 4.5 percent of project costs—therefore, Ex-Im financing was associated with about $34 million in potential indirect lost opportunity costs. This use of proportionality yielded a net positive impact of $134 million. Ex-Im approved the project in fiscal year 2003. Ex-Im also has not systematically used the proportionality concept or specified when it would be applicable. For example, in an application to finance an ethanol facility in Trinidad, Ex-Im argued that the equipment it financed did not allow the company to produce ethanol, but rather to introduce a “simple refinement step”—that is, dehydration. At that time, the price of hydrous ethanol (the input) was 10 percent lower than that of anhydrous ethanol (the end product). Therefore, using the proportionality approach, Ex-Im asserted that its financing was responsible for only 10 percent of the output. Using proportionality, Ex-Im concluded that the project would increase foreign production by 0.35 percent of U.S. production. Using standard calculations, foreign production would have increased by 3.5 percent of U.S. production. Ex-Im asserts that its decision to use proportionality when equipment refines a product rather than produces a new product is fair and reasonable. However, in a similar project involving the refinement of hot-rolled steel to galvanized steel, Ex-Im did not apply proportionality. In addition, several reviewing agencies have expressed concerns about the use of proportionality when determining a project’s economic impact. Without knowing the conditions under which Ex-Im would apply proportionality, interested parties do not have a sense of the viability of their proposed project.
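The proportionality arithmetic described above can be sketched in a few lines (a minimal illustration using the figures cited in the text; the function name and structure are ours, not Ex-Im's):

```python
def net_impact(benefits_usd, displaced_costs_usd, share=1.0):
    """Net economic impact, charging displaced production to Ex-Im
    only in proportion to `share`, the fraction of total project
    costs that Ex-Im finances (share=1.0 is the standard calculation)."""
    return benefits_usd - displaced_costs_usd * share

# Illustrative example: $100 million of Ex-Im financing in a $2 billion project.
share = 100e6 / 2e9                 # 0.05, i.e., 5 percent of project costs
tons_attributed = 10_000 * share    # 500 of 10,000 metric tons of output

# Petrochemical case: roughly $170 million in benefits, $750 million in
# "lost opportunity" costs, and an Ex-Im financing share of 4.5 percent.
standard = net_impact(170e6, 750e6)             # about -$580 million
proportional = net_impact(170e6, 750e6, 0.045)  # about +$136 million
```

With the report's rounded inputs, the proportional result here is roughly $136 million; Ex-Im's own analysis, working from more precise underlying figures, reported a net positive impact of $134 million.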
Ex-Im acknowledged that both the oversupply and proportionality language in the procedures is confusing. A senior Ex-Im official also noted that the bank struggles with determining when to use the proportionality concept. The bank also acknowledged that it should create more specific guidelines in its procedures for defining oversupply and proportionality. Specific criteria would make the process more transparent. However, Ex-Im has not altered the language in its most recent procedures. Ex-Im does not regularly include the full text of the public comments that it receives. Ex-Im’s economic impact procedures require it to attach the full set of comments as an appendix to the economic impact memorandums. In some cases, staff members attached only selected communications. Seven cases received public comments and went to the board for decision, but only two included copies of all of the comments received. According to Ex-Im, the Policy Analysis Division does not append copies of all public comments received because they are sometimes too numerous. Instead, the policy division summarizes the main arguments and often includes a representative letter. An Ex-Im official noted that the division retains all public comments and would make them available to the directors if requested. However, Ex-Im does not note in its procedures what criteria it uses for deciding which comments not to include, nor does it note in the memorandums that the letters are available for perusal upon request. The 2006 reauthorization now requires Ex-Im to provide in writing the views of all people who submit comments. We identified two practices that Ex-Im does not currently incorporate into its economic impact procedures that would increase the predictability of the process’s outcomes—namely, referencing the sensitive sector list and publishing detailed analysis results.
First, in its revised procedures, Ex-Im does not reference its list of industries unlikely to be financed for economic impact reasons. In the 2006 reauthorization legislation, Congress required Ex-Im to create a “sensitive sectors list” denoting sectors that are unlikely to receive Ex-Im financing. Ex-Im has created this list and makes it publicly available on its Web site. However, Ex-Im’s updated procedures do not specify the list’s implications or indicate that requests to finance projects in sectors on the list will receive close scrutiny during the economic impact process. In contrast, the procedures do list “undersupplied” sectors (oil and gas and diamonds) that will not be denied on economic impact grounds. A direct reference to the sensitive sectors list would enable interested parties to quickly identify whether their projects were viable. Second, Ex-Im does not currently publicize the results of its detailed economic impact analyses. Ex-Im publicly announces when it begins a detailed analysis. It also posts minutes of board meetings on its Web site that announce ultimate financing decisions. However, the financing decisions do not include statements regarding whether the project was subject to an economic impact analysis, or the determination regarding whether there would be a net negative or net positive impact on the economy. Publicizing such information would provide interested parties with a record of what types of projects passed the detailed analysis. While many requests for Ex-Im’s financing do not require economic impact analysis, the bank often faces the difficult task of balancing the interests of different industries while working to achieve its broad mission to promote U.S. exports and increase U.S. jobs. Ex-Im’s board of directors must consider the economic impact of proposed projects while also weighing other factors, such as creditworthiness, environmental impact, and small business participation.
Congress has given Ex-Im’s board wide discretion in how it implements the economic impact requirements specified in the bank’s charter. It directs Ex-Im to examine certain factors, such as whether products are in surplus on global markets (or in “oversupply,” according to Ex-Im), but gives the board the authority to approve applications that it believes will have an overall benefit on U.S. production and employment, despite some negative impacts. Determining the various economic impact aspects that weigh into the board’s decision can be challenging, requiring Ex-Im to identify what international markets are likely to be involved and to quantify how economic trends may play out in the future. While Ex-Im’s board of directors may sometimes have to consider economic impact in the face of imperfect information, it needs to be able to rely on a process that involves sound methodology and consistent application of procedures, and to understand key assumptions and areas of uncertainty. Moreover, Ex-Im clients and affected U.S. industries need a process that is transparent and, where possible, predictable. Although Ex-Im generally follows its broad economic impact procedures, we identified several areas for improvement related to the screening of applications for economic impact, the analysis methodology, and the transparency of the overall process. First, while Ex-Im has the discretion to use screens to identify applications for further review and to allocate its staff resources effectively, the effectiveness of Ex-Im’s $10 million screen is uncertain because Ex-Im has not conducted an analysis to determine the extent to which it identifies projects that could meet the statutory definition of substantial injury. Next, we identified limitations in certain assumptions Ex-Im makes to estimate economic impact in its detailed analyses. 
In some cases, these limitations had not been adequately disclosed nor had the sensitivity of economic impact conclusions to these assumptions been explored. In addition, while Ex-Im makes the economic impact procedures publicly available, the procedures do not provide adequate transparency and predictability. This has been noted by exporters, industry, and U.S. government agency officials. Ex-Im’s own competitiveness survey cites one respondent as saying that the unpredictability of the economic impact process hurts U.S. sourcing in projects. Congress demonstrated in Ex-Im’s 2006 reauthorization its continuing interest in Ex-Im having a sound and transparent economic impact process, and addressed certain transparency concerns. We believe that several improvements in Ex-Im’s process are still needed to ensure that its decisions stand up to the inevitable scrutiny of interested and affected parties. To improve Ex-Im’s identification and analysis of applications for economic impact, we recommend that the Chairman of the Export-Import Bank of the United States take the following three steps:
- Review the $10 million threshold to determine whether additional steps are needed to mitigate the risk of exempting from more detailed review applications that could meet the definition of substantial injury. The additional steps could include, for example, selectively reviewing transactions that would affect relatively small U.S. industries or sensitive sectors.
- Create specific methodological guidelines for staff analyzing applications for economic impact, bearing in mind relevant OMB guidance where appropriate.
- Review and strengthen internal controls concerning the economic impact analysis to ensure, for example, that staff members conducting the analyses have sufficient training and guidance in Ex-Im’s economic impact methodology, that relevant Ex-Im staff verify and approve the analyses, and that sufficient documentation is maintained to record key information.
To improve the public transparency of the economic impact process for interested and affected parties, we also recommend that the Chairman of the Export-Import Bank of the United States take the following three steps:
- Clarify publicly available procedures by including more information regarding Ex-Im’s methodology for analyzing applications, such as defining how it incorporates “oversupply” determinations in its analysis and what measures it uses, and specifying under what conditions “proportionality” would be used.
- Inform interested parties about the sensitive sector list by including a reference to the list in the economic impact procedures.
- Publish, either individually or in the publicly available board minutes, the final determinations regarding whether a project would have a positive or negative impact.
We provided a draft of this report to the Export-Import Bank of the United States. Ex-Im generally concurred with our recommendations and stated that it will continue to explore feasible ways to improve the economic impact procedures and make the process more consistent and user-friendly. Ex-Im stated that it will (1) review the $10 million threshold to ensure that it satisfies its intended function; (2) enhance existing quality assurance measures by attempting to standardize staff training and to expand document maintenance; and (3) clarify the basis for an assessment of “oversupply,” and create criteria for using “proportionality.” In addition, Ex-Im agreed to seek to incorporate our suggestions as it refines its analytic methodology, but the bank noted that a single approach would not address the diversity of transactions it considers. We acknowledge that a single approach is not necessarily appropriate for all analyses, but we believe that a consistent set of methodological principles, such as those embodied in OMB guidance, would nevertheless enhance the economic impact analysis process.
Lastly, Ex-Im agreed that increased transparency and predictability will improve the economic impact process and noted that it has amended its economic impact procedures to reflect increased transparency requirements laid out in the Export-Import Bank Reauthorization Act of 2006. We believe that the process’s transparency and predictability can be further improved by several practices, such as referring to the sensitive sector list in the procedures and publishing the bank’s determination regarding whether a project will have a positive or negative net impact. Ex-Im also provided technical comments, which we have incorporated where appropriate. Ex-Im’s official comments are reprinted in appendix III. As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees. We also will provide copies of this report to the Chairman of the Export-Import Bank of the United States. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4347 or YagerL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contacts and staff acknowledgments are listed in appendix IV. The Ranking Member of the Senate Committee on Finance and a member of the Senate Committee on Banking, Housing, and Urban Affairs requested that we review the Export-Import Bank of the United States’ (Ex-Im) economic impact analysis process.
In this report, we reviewed (1) Ex-Im’s overall policies and procedures for determining economic impact; (2) the extent to which Ex-Im’s procedures provide for the identification and appropriate analysis of applications that could potentially cause adverse economic impact; and (3) the extent to which its policies, procedures, and decisions are transparent to interested and affected parties. In selecting 5 of Ex-Im’s detailed economic impact analyses as case studies, we considered the following factors:
- Country.
- Type of industry/commodity.
- Finance amount.
- Final board decision.
- Staff members conducting analyses.
- Methodological issues.
Given the small universe (17) of detailed economic impact analyses conducted by Ex-Im from fiscal years 2002 through 2006, we determined that selecting a random sample would not be necessary or appropriate. While we used these 5 case studies to guide some of our work, we reviewed all 17 detailed analyses because findings that are based solely on a judgmental sample would not necessarily be generalizable to all detailed economic impact analyses conducted. To describe Ex-Im’s legal interpretation of its statutory economic impact analysis mandate, we reviewed the statutory provision as it was written in the bank’s 2002 reauthorizing legislation; reviewed other relevant legal documents; and interviewed Ex-Im legal staff, including the General Counsel, regarding their interpretation. To describe Ex-Im’s economic impact analysis process, we reviewed Ex-Im’s economic impact analyses procedures published in March 2003 and compared them with the 2002 reauthorization legislation for consistency. To describe how Ex-Im’s 2006 reauthorization will impact the economic impact procedures, we reviewed the relevant legislation and the revised economic impact procedures, and spoke with cognizant Ex-Im officials. To describe how Ex-Im implements the economic impact procedures, we spoke with the analysts who analyzed our 5 case studies, the engineers who assisted with the analysis, and the supervising officials.
To determine how many applications Ex-Im coded for economic impact, we reviewed data on all projects processed between fiscal years 2003 and 2005. Ex-Im did not use the same set of economic impact procedures when reviewing applications in fiscal year 2002; therefore, we did not use data from that fiscal year. In addition, Ex-Im did not have complete data for fiscal year 2006 projects at the time of our review. These data have some limitations that could result in small deviations from the values and quantities that we reported. Despite limitations, we determined that the transaction data provided by Ex-Im were sufficiently reliable for our purposes. To determine whether the exportable goods in each of our five case studies were subject to antidumping orders and countervailing duty orders, we reviewed the United States International Trade Commission’s (ITC) list of current antidumping and countervailing duty orders in place as of October 23, 2006, and February 15, 2007; the Federal Register from 1997 to the present for notices posted by ITC or the Department of Commerce’s International Trade Administration (ITA); and ITA’s AD/CVD Investigations Federal Register History. To determine whether the exportable goods in our five case studies were subject to “safeguards,” we searched the Federal Register from 1997 up to the date of the case for notices posted by ITC or ITA that mentioned the name of the product involved in our cases. To assess the extent to which Ex-Im’s procedures provide for the identification and appropriate analysis of requests to finance projects that could potentially cause adverse economic impact, we reviewed the economic impact provisions of Ex-Im’s charter and the procedures implementing those provisions. To determine the effectiveness of the $10 million threshold, we attempted to judgmentally sample 10 applications that requested financing for capital good exports between $5 and $10 million. 
However, our ability to do so was limited because Ex-Im could provide the relevant information for only 2 of the 10 projects. We reviewed 17 detailed economic impact analyses and documentation related to some applications that had not received a detailed analysis, and conducted interviews with Ex-Im officials on the 5 analyses that we chose as case studies. We also reviewed the case studies with a panel of Ph.D. economists within GAO. In addition, we interviewed officials from agencies that conduct similar analyses at the ITC and the Overseas Private Investment Corporation, and reviewed cost-benefit analysis guidance from the Office of Management and Budget. We also reviewed relevant reports from GAO, the Congressional Budget Office, and the Congressional Research Service. To assess the economic impact analysis process’s transparency, we reviewed the Federal Register to confirm that Ex-Im posted public notices for all detailed analyses it began. We reviewed Ex-Im’s Web site to establish what information Ex-Im made public (including current procedures and final transaction decisions). We also reviewed internal Ex-Im documents. We interviewed agency officials from the Departments of Commerce, State, and the Treasury and from the Office of the U.S. Trade Representative who formally review the economic impact memorandums. We compared draft analyses that the agencies received from Ex-Im with final analyses and reviewed communications between the agencies and Ex-Im. We also interviewed representatives from companies whose exports relied on Ex-Im financing, and representatives from organizations that expressed concern over the projects’ potential impact on the industries they represent. We used our 5 cases to determine which agency officials, exporters, and industry officials to interview. We conducted our work from October 2006 through August 2007 in accordance with generally accepted government auditing standards.
We requested and received copies of all detailed economic impact analyses Ex-Im conducted in fiscal years 2002 through 2006. We also requested data on all transactions processed during the same fiscal years. However, Ex-Im only had complete and reliable data for fiscal years 2003 through 2005. Transactions processed in fiscal year 2002 were governed by a different charter than the other fiscal years, and data for transactions processed in 2006 were not available at the time of our review. Thus, the number of detailed analyses presented in this appendix does not correspond exactly with numbers cited in the report text. In addition to the person named above, the following people made key contributions to this report: Celia Thomas, Assistant Director; Miriam A. Carroll; Michael Hoffman; and Amber Simco. The following people provided technical assistance: Karen Deans, David Dornisch, Etana Finkler, Ernie Jackson, and Mark Speight.
Congress established the Export-Import Bank of the United States (Ex-Im) to encourage U.S. exports. Congress has directed Ex-Im to consider the economic impact of its work and not to fund activities that will adversely affect U.S. industry. In this context, GAO reviewed (1) Ex-Im's policies and procedures for determining economic impact, (2) the extent to which Ex-Im appropriately identifies and analyzes projects that could cause adverse economic impact, and (3) the extent to which Ex-Im's process is transparent. To conduct this work, GAO reviewed Ex-Im's procedures, data on projects applicable for the economic impact process, and detailed economic impact analyses. GAO also interviewed Ex-Im and reviewing agency officials and industry representatives. Congress requires Ex-Im to assess whether a project requesting its financial support will negatively impact U.S. industry. Ex-Im uses a screening process to identify projects with the most potential to have an adverse economic impact, and then subjects the identified projects to detailed analysis. A negative finding could result in a denial of Ex-Im support. The screens--either explicitly required by Ex-Im's charter or introduced under the bank's statutory authority--include whether (1) the financed project will increase foreign production, (2) there are trade measures against the resulting product, (3) the resulting product is "undersupplied," (4) the requested financing is over $10 million, and (5) the financed project will increase foreign production by 1 percent or more of U.S. production. Between fiscal years 2003 and 2005, this screening process identified 20 projects (out of 771 applicable) that required a detailed economic impact analysis. In the detailed analysis, Ex-Im assesses whether the resulting product would be in surplus on world markets or in competition with U.S. production. 
Between fiscal years 2003 and 2005, Ex-Im approved most projects applicable for economic impact analysis, totaling approximately $6.1 billion in approved financing. GAO found challenges and areas for improvement in the screening and detailed analysis of projects for economic impact. The effectiveness of the $10 million screen, introduced under Ex-Im's statutory authority, is uncertain. Ex-Im has not determined whether it removes from review those projects that could meet the statutory definition of substantial injury (producing 1 percent or more of U.S. production in an industry). For example, a $9.9 million financing request that would allow a foreign company to produce an estimated 3.5 percent of U.S. production was screened out of the analysis. GAO also found that Ex-Im could improve some methods it uses in its detailed analyses, such as how it estimates displaced production. In addition, GAO found that Ex-Im could clarify how it characterizes the effect of its financing on the U.S. trade balance. Finally, GAO found that Ex-Im could strengthen the internal controls it uses to ensure that the screening process and detailed analysis are conducted consistently and accurately. GAO also found limitations in the transparency of Ex-Im's economic impact process. While Ex-Im publicly posts its procedures, they contain areas of ambiguity. For example, the procedures do not define the term "oversupply." Also, Ex-Im has not provided all public comments to the board of directors. GAO identified two practices--referencing in the procedures the list of sectors likely to require extra scrutiny and publicizing final economic impact conclusions--that would increase the predictability of the process.
The QDR was required by the Military Force Structure Review Act, which was included in the National Defense Authorization Act for Fiscal Year 1997 (P.L. 104-201). The act directed the Secretary of Defense, in consultation with the Chairman, Joint Chiefs of Staff, to conduct a review of the defense needs from 1997 to 2015. Since its bottom-up review in 1993, DOD has repeatedly stated that it must reduce its infrastructure to offset the cost of future modern weapon systems. Our analysis of DOD’s FYDPs and infrastructure activities over the past several years showed that the infrastructure portion of DOD’s budget had not decreased as DOD planned. Further, planned funding increases for modern weapon systems have repeatedly been shifted further into the future with each succeeding FYDP. In May 1997, under the balanced budget agreement, the President and Congress set forth a budget blueprint for the national defense budget function. As part of the agreement, national defense funding levels were established for 1999-2002. The FYDP is an authoritative record of current and projected force structure, costs, and personnel levels that has been approved by the Secretary of Defense. In addition, it is used extensively throughout DOD for analytical purposes and for making programming and budgeting decisions. The 1998 FYDP supported the President’s 1998 budget and included budget estimates for 1998-2003. The 1999 FYDP supports the President’s 1999 budget and includes budget estimates for 1999-2003. A principal objective of the QDR was to understand and devise ways to manage the financial risk in DOD’s program. In the QDR, the Department acknowledges that it has a historic, serious problem—the postponement of procurement modernization plans to pay for current operating and support costs. DOD refers to this as migration of funds. 
According to DOD, the chronic erosion of procurement funding has three general sources: underestimated day-to-day operating costs, unrealized savings from initiatives such as outsourcing or business process reengineering, and new program demands. The QDR concluded that as much as $10 billion to $12 billion per year in future procurement funding could be redirected as a result of these three general sources. The QDR also identifies other areas of significant future cost risks. To address this financial instability, the QDR directed DOD to cut some force structure and personnel, eliminate additional excess facilities through more base closures and realignments, streamline infrastructure, and reduce quantities of some new weapon systems. By taking these actions, the Secretary of Defense intended that the 1999 budget and FYDP would be fiscally executable, modernization targets would be met, the overall defense program would be rebalanced, and the program would become more stable. During the QDR, DOD identified initiatives to reduce infrastructure costs and personnel. However, even as the QDR report was released, the Department acknowledged that more could be done. The Department’s November 1997 Defense Reform Initiative Report provided a second set of initiatives to streamline and improve DOD’s infrastructure and support activities. Money saved by these initiatives is to help fund weapons modernization. The Defense Management Council, chaired by the Deputy Secretary of Defense, was charged by the Secretary to ensure implementation of the reform decisions. The Council also was directed to examine similar reforms for each of the services and to negotiate an annual performance contract with the director of each defense agency. The 1999 FYDP reflects the budget blueprint outlined in the balanced budget agreement, and therefore, its total budget does not vary greatly from that in the 1998 FYDP. 
The common 5-year period of both FYDPs (1999-2003) shows that the 1998 FYDP totaled $1,355 billion and the 1999 FYDP totaled $1,356 billion. Table 1 compares the two plans, by primary appropriation account. As shown in table 1, DOD adjusted its three largest appropriations substantially in the 1999 FYDP. Specifically, DOD added $24.9 billion to operation and maintenance (O&M) accounts and decreased the procurement and military personnel accounts by $16.4 billion and $6.4 billion, respectively. (App. I shows the differences between accounts in the 1998 and 1999 FYDPs for each appropriation.) DOD made other adjustments in the 1999 FYDP to (1) meet unplanned operating expenses, such as medical care, or new program demands, such as the National Missile Defense System and (2) avoid disrupting or displacing other investment plans. In the 1999 FYDP, the Defense Health Program, which accounts for about 11 percent of annual O&M spending, is projected to receive higher funding in every year (1999-2003) when compared with the 1998 FYDP. The cumulative projected increase from the 1998 FYDP is $1.6 billion. According to a Defense Health Affairs official, the projected increase would adequately fund the core medical mission, which comprises two parts: direct care and managed care contracts. However, significant cost-saving initiatives will be necessary in the non-patient care areas of the program. The 1999 FYDP includes an acquisition program stability reserve to address unforeseeable cost growth that can result from technical risk and uncertainty associated with developing advanced technology for weapons systems, for example, unexpected engineering problems. Currently, cost growth in one program requires offsets from other programs, which in turn can disrupt the overall modernization program. DOD’s plan is to distribute the reserve among programs for the budget year before a President’s budget is submitted to Congress.
The service acquisition executives will centrally manage the reserves, and the Under Secretary of Defense for Acquisition and Technology will provide oversight. These reserve funds total $2.4 billion for 2000-2003. Between 2000 and 2003, approximately $2.3 billion, or 97 percent, of the funding is programmed in procurement accounts for the Army, Air Force, Navy, and Marine Corps. The remaining 3 percent is programmed in the defense-wide research, development, test, and evaluation account. Table 2 shows the allocation of these funds, by year. As stated in a recent report on weapon acquisitions, we have not evaluated the program stability reserve or the way DOD plans to implement it. Nonetheless, DOD’s use of the reserve has the potential for communicating to program managers which practices will be encouraged and which ones will not. For example, if the reserve funds are used primarily to pay for problems that are revealed in late product development or early production, the fund could reinforce existing incentives for not dealing with problems until they occur. Conversely, if the fund is used to resolve and preclude problems, the fund could encourage problems to be revealed earlier in programs. The 1999 FYDP increased National Missile Defense System research, development, test, and evaluation funding by $1.4 billion, or 75 percent ($1.8 billion to $3.2 billion), from the 1998 FYDP. The program is to provide protection against a limited ballistic missile attack. DOD’s approach, commonly referred to as “3+3,” is to develop, within 3 years, elements of an initial system that can be deployed within 3 years of a deployment decision. The initial deployment decision review is scheduled for 2000. According to DOD, if a sufficient missile threat to the United States has not materialized at that time, development will continue, and the program will maintain a capability to deploy within 3 years.
The QDR directed that some planned procurement be cut, in part to address overall affordability concerns. In the 1999 FYDP, DOD reduced quantities of some weapon systems from the 1998 FYDP. For example, DOD reduced the planned purchase of Joint Surveillance Target Attack Radar Systems’ aircraft from 8 to 2, F-22 fighters from 70 to 58, and F/A-18E/F fighters from 228 to 204. When comparing the 1999 FYDP with the 1998 FYDP, substantial planned funding appears in the 1999 FYDP outyears. OSD has programmed funds in two appropriation accounts without distributing the amounts to a DOD organization. Within the revolving and management funds, DOD working capital funds are anticipated to receive $450 million in 2000 to reduce advance billings. Moreover, in 2003, $700 million is programmed for potential purchases of war reserve materials. According to an Army official, the Army and an outside study group have verified requirement shortages in Army war reserve materials. If future DOD programming and budgeting cycles reveal that the programmed funds are needed, then the amounts would be requested in the applicable President’s budget. Accordingly, trade-offs within other DOD programs would not have to be made. Estimated costs associated with DOD’s request for the base closure and realignment round in 2001 appear in the defense-wide contingencies account. Net costs of $832 million and $1.45 billion are programmed in fiscal years 2002 and 2003, respectively. The costs represent a net amount, since DOD anticipates savings from the avoidance of military construction and the cessation of some O&M activities. If Congress does not give DOD new base closure authority, DOD could budget these funds for other activities. In 1997, infrastructure spending was 59 percent of DOD’s total budget, the same percentage that was reported in DOD’s bottom-up review report for 1994. 
Both the 1998 and the 1999 FYDPs projected that infrastructure spending would decline to 54 percent of DOD’s budget in 2003. To modernize the force, DOD plans to increase procurement funding to $60 billion per year. If DOD is to achieve a $60 billion budget goal, it must reduce funding for its infrastructure activities from the military personnel and O&M accounts. As explained in our previous reports and reflected in the 1999 FYDP, about 80 percent of DOD’s infrastructure activities are funded from these appropriation accounts. However, as discussed in the next section, our review of the 1999 FYDP found substantial risks that DOD’s plans may not occur, thereby jeopardizing DOD’s attempts at fixing the migration of funds problem and adhering to procurement plans. Although DOD made adjustments in the 1999 FYDP to decrease the risk that funds would migrate from procurement to unplanned operating expenses, we continue to see risks that DOD’s program may not be executable as planned. These risks involve unrealized savings and other program needs. We reported in May 1998 that as a result of lower projected inflation rates by the executive branch, DOD calculated that its goods and services over the 1999-2003 period would cost about $21.3 billion less than projected 1 year ago. In addition, DOD projected savings of about $2.8 billion as a result of lower projected fuel costs and favorable foreign currency exchange rates. DOD said that with these assumed savings, it can fund additional procurement items and civilian and military pay raises, which account for $15 billion of the $24.1 billion. The executive branch’s projection of DOD’s inflation rate for 1999 is 1.5 percent, which is a historically low rate of inflation. If the projected savings from lower inflation, lower fuel costs, and favorable foreign currency exchange rates materialize, DOD can fund the additional programs. 
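The mechanics of inflation-driven "savings" can be sketched in a few lines. The figures below are hypothetical placeholders, not DOD's (the actual calculation covered thousands of line items across the 1999-2003 program); the sketch only illustrates how revising an assumed inflation rate downward lowers the projected nominal cost of an unchanged program.

```python
def nominal_cost(base_cost: float, rate: float, years: int) -> float:
    """Total nominal cost of a constant-size program inflated at `rate` per year."""
    return sum(base_cost * (1 + rate) ** y for y in range(1, years + 1))

base = 250.0  # hypothetical annual program cost in billions (not a DOD figure)
old_plan = nominal_cost(base, 0.025, 5)  # earlier, higher inflation assumption
new_plan = nominal_cost(base, 0.015, 5)  # revised, lower inflation assumption
print(round(old_plan - new_plan, 1))     # projected "savings" from the revision: 39.5
```

The same arithmetic works in reverse: if actual inflation runs above the revised assumption, the projected savings evaporate and the plan is underfunded by a comparable amount.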
However, if those savings do not materialize, DOD will have to adjust its future budgets by cutting programs and/or requesting additional budget authority from the President and Congress. DOD’s decision to reduce personnel as part of the QDR was driven largely by the objective of identifying dollar savings that could be used to increase modernization funding. We reported in April 1998 that considerable risk remains in some of the services’ plans to cut 175,000 personnel and save $3.7 billion annually by 2003. The projected cuts and savings result from the QDR and are in addition to those previously planned. The 1999 FYDP does not include all the personnel cuts directed by the QDR. With the exception of the Air Force, the services have plans that should enable them to achieve the majority of their active military cuts by the end of 1999. OSD determined that some of the Air Force’s active military cuts announced in May 1997 to restructure fighter squadrons and consolidate bomber squadrons should not be included in the 1999 FYDP because the plans were not executable at this time. In addition, plans for some cuts included in the 1999 FYDP are still incomplete or based on optimistic assumptions. For example, there is no agreement within the Army on how 25,000 of the 45,000 reserve cuts will be allocated. This decision on how to allocate the reserve cuts will not be made before the next force structure review. Moreover, plans to achieve savings through outsourcing and reengineering may not be implemented by 2003 as originally anticipated. For example, the Army plans to compete 48,000 positions to achieve the majority of its civilian reductions. However, according to an Army official, those reductions cannot be completed by 2003. Although the Army has announced studies covering about 14,000 positions, it has not identified the specific functions or locations of the remaining positions to be studied. 
In addition, the Army’s plan to eliminate about 5,300 civilian personnel in the Army Materiel Command through reengineering efforts involves risk because the Command does not have specific plans to achieve these reductions. Although outsourcing is only a small part of the Navy’s QDR cuts, the Navy has an aggressive outsourcing program that involves risk. Specifically, the Navy has programmed savings of $2.5 billion in the 1999 FYDP based on plans to study 80,500 positions—10,000 military and 70,500 civilian—by 2003. Moreover, the Navy has not identified the majority of the specific functions that will be studied to achieve the expected savings. According to a Navy acquisition official, the Navy’s ambitious projected outsourcing savings may not materialize, thereby jeopardizing its long-term O&M and procurement plans. OSD recognizes that personnel cuts and the planned savings from those cuts have not always been achieved, a shortfall that contributes to the migration of procurement funding. Therefore, OSD has established two principal mechanisms for monitoring the services’ progress in reducing personnel positions. First, it expects to review the services’ plans for reducing personnel positions during annual reviews of the services’ budgets. Second, the Defense Management Council will monitor the services’ progress in meeting outsourcing goals. DOD’s plans are based on the assumption that Congress will modify the permanent statutory minimum end-strength levels. These personnel levels, or floors, require the services to collectively employ at least 1,414,590 active duty military personnel. This assumption puts the execution of DOD’s plans at risk. If Congress does not lower the floors, costs for military personnel will be substantially higher. Currently, DOD plans to have 1,396,000 active duty military personnel in 1999, but if the services must retain about 19,000 personnel to meet the floors, they would need about $1.1 billion more in 1999 military personnel funds. 
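The end-strength arithmetic can be checked directly from the figures in the text. The per-member cost below is an assumption inferred from those figures (roughly $1.1 billion spread over roughly 19,000 retained personnel), not a DOD number:

```python
# Back-of-the-envelope check of the statutory-floor cost cited in the text.
statutory_floor = 1_414_590  # active duty minimum currently required by law
planned_1999 = 1_396_000     # DOD's planned 1999 active duty end strength
retained = statutory_floor - planned_1999  # personnel that must be kept on

# Assumed average annual military personnel cost (dollars); chosen so that
# ~19,000 retained personnel cost roughly $1.1 billion, as the text reports.
cost_per_member = 59_000
extra_cost_billions = retained * cost_per_member / 1e9
print(retained, round(extra_cost_billions, 1))  # prints: 18590 1.1
```

The same computation against DOD's projected 2003 end strength of 1,366,000 yields the roughly 49,000-person gap below current statutory floors noted in the text.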
Furthermore, costs to meet the floors in 2000-2003 would be higher because DOD projects lower end-strength levels than currently permitted by law. Notably, in 2003, DOD projects 1,366,000 personnel—about 49,000 below current statutory floors. If DOD is precluded from implementing its planned personnel reductions, it would have to make other compensating adjustments to its overall program. The QDR reported that unprogrammed expenses arise that displace funding previously planned for procurement. The most predictable of these expenses are underestimated costs in day-to-day operations, especially for depot maintenance, real property maintenance, and medical care. The least predictable are unplanned deployments and smaller-scale contingencies. The services and defense agencies plan to obligate $73 billion for depot maintenance between 1999 and 2003. This estimate, despite its magnitude, does not allow the defense agencies and services to achieve OSD’s goal of funding 85 percent of their maintenance requirements during 1999-2003. According to DOD, the potential liability of unfunded depot maintenance in the 1999 FYDP is $300 million per year. For example, the Army—which added $362 million between 1999 and 2003—is projected to meet only 68 percent of its depot maintenance requirements in 1999 and 79 percent by 2003. Despite four base realignment and closure rounds, DOD still has excess, aging facilities and has not programmed sufficient funds for maintenance, repair, and upgrades. Each service has risk embedded in its real property maintenance program to the extent that validated real property needs are not met. For example, in the 1999 President’s budget submission, the Air Force plans to fund real property maintenance at the preventive or preservation maintenance level in 1999, which allows only for day-to-day recurring maintenance. This results in risk because the physical plant is degraded and the backlog of maintenance and repair requirements increases. 
Also, while the Marine Corps added funds during 1999-2003, the Commandant of the Marine Corps determined that the planned funding would merely minimize deterioration of its facilities. Further, although the Army added approximately $1 billion for real property maintenance in the 1999 FYDP, it was not projected to meet its funding goal until 2002. According to a Defense Health Affairs official, the cumulative O&M funding increase of $1.6 billion over the 1998 FYDP adequately funds the core medical mission, which consists of two parts: direct care and managed care contracts. However, the 1999 FYDP funding is contingent on several assumptions that contain risk. First, the Defense Health Program assumes program-related personnel reductions due to outsourcing and privatization initiatives. Estimated savings for these efforts grow to $131 million by 2003. Second, the program assumes a 1-percent savings from utilization management, such as reducing the length of hospital stays from 4 days to 3 days. Third, population adjustments due to force structure reductions play a pivotal role. The projected program assumes that Congress will authorize QDR-recommended reductions of 61,700 active military personnel and that the reductions will be a mix of retirements and nonretirement attrition. If end-strength reductions are not authorized or a higher percentage of the reduction stems from retirements than originally planned, the program will experience higher costs than estimated. Without the authority to reduce active duty end strengths, the beneficiary population of service personnel and their dependents will not decrease. In addition, retirements do not reduce costs because retirees and their dependents remain part of the beneficiary population. According to a Defense Health Affairs official, the funded program does not include an allowance for the impact of advances in medical technology and the intensity of treatment that was identified in a previous GAO report as a risk factor. 
Our recent work raises questions about whether cost savings and efficiencies in defense health care will materialize. In August 1997, we reported that a key cost-saving initiative of TRICARE, DOD’s new managed health care system, was returning substantially less savings than anticipated and the situation was not likely to improve. In our February 1998 testimony to Congress, we stated that implementation of TRICARE was proving complicated and difficult and that delays had occurred and may continue. Notwithstanding the historical costs of several, often overlapping contingency operations, the 1999 FYDP provides funds only for the projected “steady state” costs of Southwest Asia operations—$800 million in 1999. According to OSD officials, by design the FYDP does not include funds for (1) the sustainment of increased operations in the Persian Gulf to counter Iraq’s intransigence on United Nations inspections, (2) the President’s extension of the mission in Bosnia, or (3) unknown contingency operations. DOD’s position is that costs for the mission in Bosnia should be financed separately from planned DOD funding for 1999-2003. Further, the QDR concluded that contingency operations will likely occur frequently over the next 15 to 20 years and may require significant forces, given the national security strategy of engagement and the probable future international environment. Thus, it is likely that DOD will continue to have unplanned expenses to meet contingency operations. In reporting on lessons learned from prior base closure rounds, we noted that savings, though not well documented, are expected to be substantial. However, the precise amount and timing of net recurring savings realized from base closure actions are uncertain. For example, when compared with the 1998 FYDP, the Air Force has revised its 1999 FYDP O&M savings for the fourth round of base closures. 
In the 1998 FYDP, the Air Force estimated net savings at $253 million, whereas in the 1999 FYDP it projects net savings at $85 million—a difference of $167 million. This is the second consecutive FYDP that the Air Force has lowered its expectations of near-term savings. In our comparison of the 1997 and 1998 FYDPs, we reported that the Navy’s savings estimates for the fourth round of base closures were incorrect in that the savings were for outsourcing and competition initiatives. In the 1999 FYDP, the Navy continues to report estimated outsourcing savings incorrectly as base closure savings. According to a Chief of Naval Operations official, the Navy will work with appropriate budget and programming offices to correct the reported FYDP information. We reported that, since 1965, O&M spending has increased consistently with increases in procurement spending. However, in its 1998 FYDP, DOD was optimistic in projecting increases in procurement together with decreases in O&M. In the 1999 FYDP, DOD takes a more moderate position, projecting that O&M spending in real terms will remain relatively flat while procurement increases at a moderate rate. Figure 3 shows the historical relationship between O&M and procurement spending and compares the projections of the 1998 and the 1999 FYDPs. We reported that DOD’s plans for procurement spending also run counter to another historical trend. Specifically, DOD procurement spending rises and falls in nearly direct proportion to movements in its total budget; however, in the 1998 FYDP, DOD projected an increase in procurement of about 43 percent but a relatively flat total DOD budget. The 1999 FYDP procurement projections continue to run counter to the historical trend, although DOD has moderated its position. Specifically, DOD projects that procurement funding will rise in real terms during 1998-2003 by approximately 29 percent while the total DOD budget will remain relatively flat. 
Figure 4 shows the historical relationship between the total DOD budget and procurement spending and DOD’s 1999 FYDP projections. Over the 1995-98 FYDPs, DOD did not meet its plans to increase procurement. For example, since 1995, DOD has lowered the estimated funding for 1998 procurement from about $57 billion in the 1995 FYDP to about $43 billion in the 1998 FYDP. The 1999 FYDP continues this trend, as table 3 shows. In its QDR report, DOD recognized that these trends have longer-term implications. Specifically, “some of these reductions have accumulated into long-term projections, creating a so-called ’bow wave’ of demand for procurement funding in the middle of the next decade.” The QDR report concludes that “this bow wave would tend to disrupt planned modernization programs unless additional investment resources are made available in future years.” The bow wave is particularly evident when considering DOD’s aircraft modernization plans. In September 1997, we reported that DOD’s aircraft investment strategy involved the purchase or significant modification of at least 8,499 aircraft in 17 aircraft programs at a total procurement cost of $334.8 billion (in 1997 dollars) through the aircrafts’ planned completions. DOD’s planned funding for the 17 aircraft programs exceeds, in all but 1 year between fiscal years 2000 and 2015, the long-term historical average percentage of the budget devoted to aircraft purchases. Compounding these funding difficulties is the fact that these projections are very conservative. The projections do not allow for real program cost growth, which historically has averaged at least 20 percent, nor do the projections allow for the procurement of additional systems. However, as a result of the QDR, the 1999 FYDP service aircraft procurement accounts have been moderated. Compared with the 1998 FYDP, the 1999 FYDP reduces projected funding by $3.9 billion, or 4 percent. 
The QDR report cited cost growth of complex, technologically advanced programs and new program demands as two areas contributing to the migration of funds from procurement. For years, we have reported on the impact of cost growth in weapon systems and other programs such as environmental restoration. Specifically, we reported in 1994 that program cost increases of 20 to 40 percent have been common for major weapon programs and that numerous programs experienced increases much greater than that. We continue to find programs with optimistic cost projections. For example, we reported in June 1997 that we were skeptical the Air Force could achieve planned production cost reductions of $13 billion in its F-22 fighter aircraft program. Other DOD programs have also experienced cost growth. For example, DOD estimated in December 1997 that the projected life-cycle cost of the Chemical Demilitarization Program had increased by 27 percent over the previous year’s estimate. As stated earlier, DOD has established a reserve fund that can be used to help alleviate disruptions caused by cost growth in weapon systems and other programs due to technological problems. However, it remains to be seen whether the need will exceed available reserve funds. Policy decisions and new program demands can also cause perturbations in DOD’s funding plans, according to the QDR report. DOD has programmed $1.4 billion more for the National Missile Defense System in the 1999 FYDP than the 1998 FYDP. Despite the increase, considerable risk remains with the program’s funding. For example, technical and schedule risks are very high, according to the QDR, our analysis, and an independent panel. The panel noted that based on its experience, high technical risk is likely to cause increased costs and program delays and could cause program failure. In addition to the technical and schedule risks, the 1999 FYDP does not include funds to procure the missile system. 
If the decision is made in 2000 to deploy an initial missile system by 2003, billions of dollars of procurement funds would be required to augment the currently programmed research and development funds. As another example, the 1999 FYDP was predicated on the U.S. shifting to a Strategic Arms Reduction Treaty (START) II nuclear force posture. START II calls for further reductions in aggregate force levels, the elimination of multiple-warhead intercontinental ballistic missile launchers, the elimination of heavy intercontinental ballistic missiles, and a limit on the number of submarine-launched ballistic missile warheads. The U.S. Senate approved START II in January 1996, but the treaty will not enter into force until Russia’s parliament ratifies it. In the absence of START II’s entry into force, the United States may decide to sustain the option of continuing START I force levels. According to the Secretary of Defense’s 1998 Annual Report to the President and the Congress, the 1999 budget request includes an additional $57 million beyond what otherwise would have been requested to sustain the START I level. However, maintaining this force beyond 1999 will result in additional unplanned costs. Recently, DOD has found itself in a counterproductive cycle. Past attempts at streamlining infrastructure and/or reengineering business practices have not produced the anticipated savings programmed in recent FYDPs. Money saved from these initiatives was to help fund modernization. For the past 5 years, projected procurement funding has slipped with each succeeding FYDP. Moreover, unanticipated contingency costs, higher day-to-day operating expenses, and new program demands also have caused the migration of funds from procurement to operating and support costs. In the QDR, DOD acknowledged this problem, identified its causes, and directed measures to make the program more stable. In the 1999 FYDP, DOD has taken a step toward abating this chronic migration of funds. 
However, even with a rebalanced program, DOD faces substantial risk in its execution of this first, post-QDR plan. We found that several of DOD’s projections are questionable and/or contain risk. Furthermore, as long as the national defense funding levels agreed to in the balanced budget agreement remain unaltered, solutions to DOD’s funding issues must be found within its current and projected budget. Therefore, it is critical that DOD continues to strive for realistic assumptions and plans in its future budget cycles. In commenting on a draft of this report, DOD took issue with some of our characterizations. DOD stated that the 1999 FYDP represents considerable progress toward the objectives of funding readiness-related O&M requirements, implementing plans to streamline and reduce infrastructure, and reducing the risk of migrating funds. DOD noted that our report does not acknowledge that the risks in achieving these objectives have been substantially reduced in the 1999 FYDP and the Department’s program is on a sounder financial footing. Moreover, DOD acknowledges that all risks have not been eliminated and intends to pursue future initiatives to address the remaining risks. We agree, as stated in our report, that DOD made adjustments from the 1998 FYDP to the 1999 FYDP to increase O&M and decrease the risk that funds would migrate from procurement to operating expenses. Moreover, it has plans to reduce infrastructure. However, as explained in this report, we continue to see risks that DOD’s program may not be executable as planned in part because the services’ plans to reduce military and civilian personnel are incomplete or based on optimistic assumptions. DOD said that it has been able to reduce the proportion of resources devoted to infrastructure activities from 47 percent in 1994 to 45 percent in 1997, as a percent of total obligation authority. 
Moreover, DOD states that we are inaccurate in reporting that infrastructure in 1997 remains at 59 percent of DOD’s total budget, the same percentage as in 1994. Our method of calculating the infrastructure in DOD’s budget is based on the methodology prescribed by OSD’s Office of Program Analysis and Evaluation (PA&E) in late 1995. We have consistently used this methodology since then to report on DOD’s progress in reducing its infrastructure and DOD has agreed with our prior reports, findings, and analysis. The methodology includes both direct infrastructure (that which can be clearly identified in the FYDP) and a percentage of the defense working capital funds that represents the infrastructure portion of these funds. In discussing DOD’s comments with PA&E officials, they stated that for a number of reasons, including the difficulty of estimating the infrastructure portion of defense working capital funds, PA&E is evaluating different methodologies to estimate infrastructure in its budget. One such measure, which DOD used to derive the percentages discussed in its letter, is to estimate only the direct infrastructure. However, there are important limitations in this methodology. While it is a simpler methodology, it excludes a significant part of the infrastructure, which PA&E previously considered important to capture. Moreover, DOD established an important baseline in its October 1993 bottom-up review report when it said that infrastructure activities in 1994 would account for approximately 59 percent of DOD’s total obligational authority (including revolving funds) and that it needed to reduce that infrastructure. We strongly believe that it is important to maintain a consistent baseline to measure changes in infrastructure over time. DOD believes that its assumptions of the savings rates for outsourcing personnel are conservative. 
According to DOD, the estimated savings reflected in the 1999 FYDP assume that the Department successfully executes the currently projected schedule of competitions. Moreover, DOD emphasizes that it needs to closely monitor implementation of projected competition schedules through the programming and budgeting cycles. We believe DOD has taken a step in the right direction to monitor implementation of the outsourcing process. However, considerable risk remains in the services’ plans to cut planned personnel by 2003 because the plans are still incomplete or based on optimistic assumptions. DOD suggested several technical changes for clarification and accuracy, which we incorporated in the report where appropriate. DOD’s comments are reprinted in their entirety in appendix II. To determine the major program adjustments in DOD’s fiscal year 1999 FYDP, we interviewed officials in the Office of the Under Secretary of Defense (Comptroller); the Office of Program Analysis and Evaluation; the Army, Navy, and Air Force program and budget offices; and the Office of the Assistant Secretary of Defense (Health Affairs). We examined a variety of DOD planning and budget documents, including the 1998 and 1999 FYDPs and the QDR report. We also reviewed the President’s fiscal year 1999 budget submission; our prior reports; and pertinent reports by the Congressional Budget Office, the Congressional Research Service, and others. To determine the implications of program changes and underlying planning assumptions, we discussed the changes with DOD officials. We compared DOD’s automated data with published documents provided by DOD. Specifically, we compared total budget estimates, appropriation totals, military and civilian force levels, force structure levels, and some specific program information. Based on our comparisons, we were satisfied that DOD’s automated FYDP data and published data were in agreement. We did not test DOD’s management controls of the FYDP data. 
Our review was conducted from November 1997 through June 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other appropriate Senate and House Committees; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Director, Office of Management and Budget. We will also provide copies to others upon request. If you have any questions concerning this report, please call me on (202) 512-3504. Major contributors to this report are listed in appendix III. The following tables show the differences between accounts in the 1998 and 1999 Future Years Defense Programs (FYDP) for each appropriation. Totals may not add due to rounding. In the 1998 FYDP for 1999, $965 million was programmed for the modernization reserve. Robert Pelletier Deborah Colantonio William Crocker Douglas Horner Shawn Bates Bob Kenyon Robert Henke
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) Future Years Defense Program (FYDP) for fiscal year 1999, focusing on: (1) DOD's plans to address the financial and programmatic risk areas that the Quadrennial Defense Review (QDR) found in DOD's program; (2) comparing DOD's 1999 FYDP with its 1998 FYDP to identify major changes and adjustments to address these risk areas; and (3) whether there were risk areas in DOD's 1999 program. GAO noted that: (1) although DOD has reduced military and civilian personnel, force structure, and facilities over several years, DOD has been unable to shift funds from infrastructure to modernization; (2) in 1997, infrastructure spending was 59 percent of DOD's total budget, the same percentage as in 1994; (3) DOD acknowledged in the QDR that it has postponed procurement plans because funds were redirected to pay for underestimated operating costs and new program demands, and projected savings from outsourcing and other initiatives had not materialized; (4) to address this diversion of funds, the QDR directed DOD to cut some force structure and personnel, eliminate additional excess facilities through more base closures and realignments, streamline infrastructure, and reduce quantities of some new weapon systems; (5) DOD made adjustments in the 1999 FYDP to decrease the risk that funds would migrate from procurement to unplanned operating expenses; (6) DOD has programmed additional funds for new programs and has moderated its procurement plans; (7) as a result of these and other changes, DOD believes that its 1999 program is on a sounder financial footing; (8) although DOD made adjustments to the 1999 FYDP, GAO continues to see risks that DOD's program may not be executable as planned; (9) for example, DOD projects savings of $24.1 billion as a result of lower projected inflation rates and fuel costs and favorable foreign currency exchange rates; (10) if these rates and costs do not hold true to 
DOD's assumptions, projected savings will not materialize, and DOD will have to adjust future budgets by cutting programs and/or requesting additional budget authority; (11) further indication of risk can be found in DOD's procurement plans and additional proposed initiatives to reduce facilities; (12) DOD's estimates for procurement spending, in relation to DOD's total budget, run counter to DOD's experience over the last 32 years; (13) DOD procurement spending rises and falls in nearly direct proportion to movements in its total budget; (14) DOD projects that procurement funding will rise in real terms during 1998-2003 by approximately 29 percent while the total DOD budget will remain relatively flat; (15) on some important proposed initiatives, DOD will need congressional approval; and (16) as long as the funding levels agreed to in the balanced budget agreement for national defense remain unaltered, DOD must solve its funding issues within its current and projected total budget.
The transition to ICD-10 codes, which has widespread implications for health care transactions and quality measurement in the United States, offers the potential for several improvements over the current ICD-9 code set. Medicare and Medicaid will incorporate the ICD-10 codes into multiple program functions that currently use ICD-9 codes, including payment systems and quality measurement programs. ICD-9 codes were initially adopted in the United States as the standard for documenting morbidity and mortality information for statistical purposes; in 2000, their use was expanded when they were adopted through HIPAA as the standard code set for use in all electronic transactions by covered entities. Specifically, ICD-9 codes are used in all U.S. health care settings to code diagnoses and are also used in all U.S. inpatient hospital settings to code procedures. Beginning on October 1, 2015, all health care transactions that include ICD codes must use ICD-10 codes for dates of service that occur on or after that date. Transactions with dates of service that occur prior to the transition date of October 1, 2015, must continue to be coded with ICD-9 codes. The vendors whose goods or services health care providers may utilize to help them code and process claims, such as electronic health record vendors and practice management system vendors, are not covered entities, but they must respond to HIPAA standards in order to support their HIPAA-covered customers. Figure 1 illustrates the flow of health care transactions that include ICD codes from the health care provider to payers, and identifies which types of organizations are and are not covered entities. The Centers for Disease Control and Prevention’s (CDC) National Center for Health Statistics is responsible for developing the ICD-10 diagnosis codes with input from medical specialty societies, and CMS is responsible for developing the ICD-10 procedure codes. 
Representatives from CMS and the National Center for Health Statistics comprise the Coordination and Maintenance Committee, which is responsible for approving coding changes and making modifications, based upon input from the public. CDC and CMS, assisted by the American Hospital Association (AHA) and the American Health Information Management Association (AHIMA)— collectively known as the Cooperating Parties—are responsible for supporting covered entities’ transitions to ICD-10. The Cooperating Parties’ responsibilities include developing and maintaining guidelines for ICD-10 codes and developing ICD-10-related educational programs. In addition to the Cooperating Parties, other organizations are helping to prepare covered entities for the ICD-10 transition. For example, officials with the Workgroup for Electronic Data Interchange (WEDI), a coalition of covered entities, vendors, and other members of the health care industry, hold regular meetings with industry members to discuss how to address ICD-10 transition issues. Additionally, WEDI and other stakeholders have made educational materials available on their websites. For example, the Healthcare Information and Management Systems Society’s ICD-10 Playbook contains tools, guidelines, and information to help covered entities prepare for the transition to ICD-10 codes. Stakeholders have also held sessions on ICD-10 transition-related issues during their member conferences. There are several differences between ICD-9 and ICD-10 codes. For example, ICD-10 codes can include up to seven alphanumeric digits, while ICD-9 codes can only include up to five alphanumeric digits. The additional digits in ICD-10 codes allow for the inclusion of more codes. Specifically, there are approximately 15,000 ICD-9 diagnosis codes compared to approximately 70,000 ICD-10 diagnosis codes, and approximately 4,000 ICD-9 procedure codes compared to approximately 72,000 ICD-10 procedure codes. 
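As a rough illustration of the structural difference in diagnosis code formats (up to five characters for ICD-9 versus up to seven for ICD-10), the following sketch checks code shape with regular expressions. The patterns are simplified assumptions for illustration only; they approximate length and character classes, and only the official code tables can confirm that a particular code exists:

```python
import re

# Simplified shape checks only (assumptions, not authoritative validation).
# Codes are written without the decimal point, as they appear in electronic
# transactions. ICD-9 diagnosis codes are largely numeric (with V/E prefixes);
# ICD-10 diagnosis codes begin with a letter and run 3-7 characters.
ICD9_DX = re.compile(r"^(\d{3}|V\d{2}|E\d{3})\d{0,2}$")   # 3-5 characters
ICD10_DX = re.compile(r"^[A-Z]\d[A-Z0-9]{1,5}$")          # 3-7 characters

def looks_like_icd9_dx(code: str) -> bool:
    """Rough shape check for an ICD-9 diagnosis code."""
    return ICD9_DX.fullmatch(code) is not None

def looks_like_icd10_dx(code: str) -> bool:
    """Rough shape check for an ICD-10 diagnosis code."""
    return ICD10_DX.fullmatch(code) is not None
```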
Despite the dramatic increase in the number of ICD-10 codes, according to CMS and others, most physician practices use a relatively small number of diagnosis codes that are generally related to a specific type of specialty. The additional number of ICD-10 codes enables providers and payers to capture greater specificity and clinical information in medical claims. For example, ICD-10 codes enable providers to report on the body part and the side of the body subject to the evaluation or procedure. More specifically, while there was 1 ICD-9 code for angioplasty—a procedure to restore blood flow through an artery—there are 854 ICD-10 codes for angioplasty, with codes including additional detail on the body part, approach, and device used for the procedure. Another difference between ICD-9 and ICD-10 codes is the terminology and disease classifications, which have been updated so that they are consistent with new technology and current clinical practice. For example, under ICD-9, there was a single code to reflect tobacco use or dependence. Under ICD-10, there is a category for nicotine dependence with subcategories to identify the specific tobacco product and nicotine-induced disorder. The updated disease classifications for nicotine disorders reflect the increased knowledge of the effects of nicotine. Other differences between ICD-9 and ICD-10 codes include the addition of new concepts that did not exist in ICD-9 diagnosis codes, such as the expansion of postoperative codes to distinguish between intraoperative and post-procedural complications; and the designation of trimester for pregnancy codes. The ICD codes are used in a variety of ways by payers, including Medicare and Medicaid. For example, payers generally use ICD diagnosis codes to determine whether the care provided by physicians is medically necessary and, therefore, eligible for reimbursement. 
Additionally, Medicare hospital inpatient payment rates are based on Medicare-Severity Diagnosis-Related Groups (MS-DRG), a system that classifies inpatient stays according to both patients’ diagnoses and the procedures the patients receive, both of which are identified using ICD codes. The MS-DRG signifies the average costliness of inpatient stays assigned to one MS-DRG category relative to another MS-DRG category. All payers will need to update all systems and processes that utilize ICD codes by October 1, 2015, to ensure they are ICD-10 compliant. In addition to claims processing, Medicare, Medicaid, and private payers conduct a variety of quality measurement activities that use quality measures, which will need to be updated to reflect the ICD-10 codes. For example, Medicare providers collect and report quality measures to CMS for the Hospital Inpatient Quality Reporting program, the Physician Quality Reporting System, the Physician Value-based Payment Modifier Program, and the Electronic Health Records program, and many private payers measure their performance using Healthcare Effectiveness Data and Information Set® (HEDIS) measures. In preparation for the transition from ICD-9 to ICD-10 codes, CMS developed various educational materials, conducted outreach, and monitored the readiness of covered entities and the vendors that support them for the transition. In addition, the agency reported modifying its Medicare systems and policies. CMS also provided technical assistance to Medicaid agencies and monitored their readiness for the ICD-10 transition. CMS developed a variety of educational materials for covered entities, available on the agency’s ICD-10 website, to help them prepare for the transition to ICD-10 codes. Each of the 28 stakeholders we contacted reported that the educational materials CMS made available have been helpful in preparing for the ICD-10 transition. 
Some of the materials CMS developed are specific to small and medium physician practices, large practices, or small hospitals. CMS officials told us that the agency developed these materials in response to feedback the agency received from stakeholders indicating that these specific groups wanted materials targeted to them. The educational materials include documents that provide information about ICD-10 codes, including how they differ from ICD-9 codes, and explain why the transition is occurring; checklists and timelines that identify the steps necessary to prepare for the transition, including the associated timeframes; tip sheets on how providers should communicate with the vendors that supply their practices with products that utilize ICD coding—such as software vendors and billing services—and vice versa; videos and webinars; and links to stakeholder websites that also feature ICD-10 guidance and training materials. In addition, CMS officials told us that the agency partnered with organizations to enable providers to obtain continuing medical education credit for eight training modules as a way to incentivize providers to prepare for the transition. To help small practices prepare for the ICD-10 transition, and in response to focus group feedback, CMS launched a new website in March 2014 called “Road to 10.” According to CMS documentation dated March 2013, industry feedback received by the agency indicated that small physician practices lag behind other providers in preparing for the transition. Seventeen of the 28 stakeholders we contacted noted that this website was helpful in preparing covered entities for the transition. 
The Road to 10 website, which can be accessed through CMS’s main ICD-10 website, provides additional training materials not available through CMS’s main ICD-10 website, including training videos describing the clinical documentation needs for the following specialties: cardiology, family practice and internal medicine, obstetrics and gynecology, orthopedics, and pediatrics. The website also provides a customizable action plan based on several criteria: specialty; practice size; types of vendors supporting the practice, such as an electronic health record system vendor; payers to whom the clinician submits claims; and the level of readiness for the ICD-10 transition. Some of CMS’s educational materials are intended to help covered entities determine which ICD-10 codes they may need to use. Specifically, CMS and the other members of the Cooperating Parties developed a tool called the General Equivalence Mappings to assist covered entities in converting ICD-9 codes to ICD-10 codes. Covered entities can use the General Equivalence Mappings tool to identify ICD-10 codes that might be most relevant to them—a practice that CMS advocates in its checklists and action plans. Six of the 28 stakeholders we contacted described this tool as being helpful to their preparatory activities. CMS’s Road to 10 website also identifies common ICD-10 diagnosis codes associated with the following six physician specialties: cardiology, family practice, internal medicine, obstetrics and gynecology, orthopedics, and pediatrics. CMS also conducted a number of outreach activities in order to inform covered entities and others about the educational materials that are available, educate and engage covered entities, obtain real-time feedback on areas that may merit additional activities from CMS, and promote collaboration among stakeholders. 
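A crosswalk lookup in the spirit of the General Equivalence Mappings tool described above might look like the following sketch. The table entries and function are hypothetical placeholders, not an extract of the published GEM files, which also carry mapping flags (for example, approximate and combination flags) that this sketch omits:

```python
# Hypothetical forward (ICD-9 to ICD-10) crosswalk entries, for
# illustration only; not authoritative GEM data.
FORWARD_CROSSWALK = {
    "4019": ["I10"],       # essential hypertension (illustrative entry)
    "25000": ["E119"],     # type 2 diabetes, no complications (illustrative)
}

def candidate_icd10_codes(icd9_code: str) -> list:
    """Return candidate ICD-10 codes for an ICD-9 code, or [] if unmapped."""
    return FORWARD_CROSSWALK.get(icd9_code, [])

# A practice could run its most frequently billed ICD-9 codes through such
# a crosswalk to assemble the short, specialty-specific ICD-10 code list
# that CMS's checklists recommend preparing before the transition.
print(candidate_icd10_codes("4019"))   # ['I10']
```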
Twenty-two of the 28 stakeholders we contacted reported that CMS’s outreach activities have been helpful in preparing covered entities for the transition to ICD-10 codes. Examples of the types of outreach CMS has conducted include the following. Email lists, social media, and advertisements. CMS communicated information related to the ICD-10 transition through several email lists, the primary one being the ICD-10 email list, which CMS officials said was distributed to 186,000 email addresses as of November 25, 2014. The emails communicated information related to the ICD-10 transition and generally directed recipients to CMS’s ICD-10 website, which agency officials told us receives approximately 184,000 page views per month. Our review of the emails sent from August 2013 to August 2014 indicated that CMS communicated a variety of information, including available trainings and separate specialty-specific trainings; best practices; and resources available to help covered entities prepare for the transition. Other CMS email lists also communicated information related to the ICD-10 transition, including eHealth and Medicare Learning Network® (MLN) Connects™. CMS officials told us that other groups redistribute CMS’s emails to their members, which helped the agency reach additional covered entities. For example, officials said that approximately 140 national associations distribute the information to their members, which represent over 3 million individuals, and that the Medicare Administrative Contractors (MAC) forward these messages to their email lists, which include about 600,000 addresses. CMS’s regional offices also distribute materials through their local email lists, according to CMS officials. In addition, CMS’s Twitter account provided information about the ICD-10 transition, such as the date by which covered entities must use ICD-10 codes, and provided links to educational materials and information about upcoming presentations. 
CMS officials told us that the agency has placed and plans to place additional advertisements about the transition in both print and online resources, such as journals and associations’ publications. National broadcasts. CMS hosted teleconferences that provided an overview of key transition issues, and an opportunity for participants to ask questions. In addition, according to the publisher, in 2013, CMS participated in four broadcasts of Talk 10 Tuesdays, a weekly 30-minute Internet radio broadcast directed to healthcare providers transitioning to ICD-10, which has about 6,800 registered listeners. Stakeholder collaboration. CMS has collaborated with stakeholders in various ways. For example, in 2013, CMS held two meetings with stakeholders that represented covered entities and vendors. In those meetings, stakeholders noted several concerns related to the ICD-10 transition and made recommendations to CMS. In addition, CMS officials reported holding 40 one-on-one meetings with 31 individual stakeholders between January 2013 and March 2014. Topics of stakeholder collaboration meetings included the effectiveness of existing educational materials and how to communicate the benefits of ICD-10 coding to the public. CMS officials also presented information about the ICD-10 transition at conferences held by stakeholders. In addition, CMS hosted two live events (in April 2011 and September 2014) where members of the American Academy of Professional Coders answered questions about the ICD-10 codes. Fifteen of the 28 stakeholders we contacted mentioned that CMS’s outreach to or collaboration with stakeholders has been helpful to preparing covered entities for the transition. Officials with one of these stakeholders noted that CMS’s participation in stakeholder meetings demonstrates that CMS is listening to the health care industry’s concerns. CMS has begun to conduct additional outreach to small primary care physician practices. 
First, CMS started to conduct in-person training for small physician practices in a number of states. According to CMS officials, between February and December 2014, CMS held 90 1- to 2-hour trainings in 29 states and the District of Columbia; in each one, between 1 and 12 sessions were held. In January 2015, officials said, the agency will begin scheduling trainings that will occur in 2015. CMS officials said that the content for these trainings was based on feedback from physician focus groups about what physicians are most interested in learning about during the sessions. In addition, CMS piloted a direct mail project to small primary care practices in four states—Arizona, Maryland, Ohio, and Texas—and CMS officials told us they planned to complete an assessment of the pilot in early December 2014. In responding to a draft of this report, CMS officials stated that the agency plans to expand the pilot to rural communities. CMS officials stated that the agency is working with stakeholders to identify specific practice locations to send direct mail, an activity they plan to begin in March 2015 and conclude in May 2015; however, CMS officials did not identify the number of practices the agency plans to target for this effort. In addition to developing educational materials and conducting outreach, CMS conducted activities to assess the readiness of covered entities and vendors. For example, to prepare for the original implementation date of October 1, 2013, CMS’s contractors conducted an assessment of the health care industry’s ICD-10 transition planning in 2009. Additionally, in 2011, CMS’s contractors interviewed 27 organizations representing vendors, payers, and small physician practices, and surveyed almost 600 organizations regarding their awareness of and preparation for the transition to ICD-10 codes. 
These activities revealed a number of things, including that covered entities wanted additional guidance on how to prepare for the transition, such as templates describing testing steps; that providers were concerned about the time and costs associated with the transition; and that 82 percent of providers, 88 percent of payers, and 75 percent of vendors contacted believed they would be ready to use ICD-10 codes by the original October 1, 2013, transition date. More recently, CMS has relied on its stakeholder collaboration meetings, focus group testing, and review of surveys conducted by the health care industry to gauge covered entities’ readiness for the transition, according to agency officials. During the course of our work, we learned that CMS planned to conduct a survey to assess current covered entity and vendor readiness. However, in commenting on a draft of this report, HHS told us that CMS had decided not to go forward with those plans, as CMS determined that the agency’s limited resources would be better spent continuing its outreach activities due to the rapidly approaching transition deadline. CMS reported that the agency has begun modifying Medicare’s systems and policies in preparation for the ICD-10 transition. Examples of these activities include the following: Coverage policies. CMS and its MACs have updated National Coverage Determination and Local Coverage Determination policies, which identify the items and services that are covered by Medicare, to reflect the conversion to ICD-10 codes. Medicare fee-for-service (FFS) claims processing systems. CMS documentation states that the agency completed all ICD-10-related changes to its Medicare FFS claims processing systems as of October 1, 2014, and that the claims processing systems have been updated in response to the results of internal testing, but it is not yet known whether updates may be needed based upon the results of external testing. Internal testing. 
CMS has reported that its Medicare claims processing systems reflect different types of internal testing activities. For example, each MAC conducted testing to ensure that claims using ICD-10 coding complied with the Local Coverage Determination policies, and that the claims processing systems appropriately accepted or rejected and processed claims. External testing. CMS had not completed testing with external parties, and agency officials acknowledged that the agency would make additional changes to its systems if future testing identified any issues. Specifically, CMS conducted “acknowledgment testing”—that is, testing to determine whether claims submitted by providers and suppliers that contain ICD-10 codes were accepted or rejected—over two separate weeks in 2014 (one week in March and one in November). CMS plans to hold two additional weeks of acknowledgement testing in 2015 (one week in March and one in June). In addition, CMS has plans to conduct such testing with any covered entity that submits test claims on an ongoing basis until October 1, 2015. During CMS’s first acknowledgement testing week in March 2014, the agency reported that 2,600 covered entities submitted more than 127,000 claims—89 percent of which were accepted, with some regional variation in acceptance rates. During CMS’s second acknowledgement testing week in November 2014, the agency reported that 500 covered entities submitted about 13,700 claims—76 percent of which were accepted. CMS documentation indicates that testing during the March and November 2014 acknowledgment testing weeks did not identify any issues with the agency’s Medicare FFS claims processing systems. Additionally, CMS plans to conduct “end-to-end testing”—that is, testing to determine how claims submitted by providers and suppliers that contain ICD-10 codes would be adjudicated, and that accurate payments for these claims will be calculated—during three weeks in 2015 (in January, April, and July). 
CMS is planning to conduct end-to-end testing with a total of 2,550 covered entities, or 850 covered entities in each week-long testing period. We did not independently assess the extent to which CMS’s Medicare FFS claims processing systems have been updated or tested in preparation for the ICD-10 transition because we have separate ongoing work evaluating these activities. Hospital reimbursement. According to CMS documentation, the agency has converted MS-DRGs, which determine reimbursement rates to hospitals for inpatient hospital stays by Medicare beneficiaries, to reflect ICD-10 codes. CMS planned to continue making adjustments to the MS-DRGs, as appropriate, based upon input from the Coordination and Maintenance Committee and from public comments. The version of the MS-DRG that will be implemented on October 1, 2015, is to be made available to the public in the summer of 2015. CMS documentation suggests that until the new version of the MS-DRG is provided, hospitals may use the documentation and software CMS has already made available to analyze the effect the conversion to ICD-10 codes will have on hospital payments. CMS has employed different strategies to communicate the changes it has made to its systems and policies. The agency distributes educational materials through the MLN. For example, CMS has issued articles that instruct providers and suppliers (e.g., inpatient hospitals and home health agencies) on how to code claims that span a period of time that crosses the ICD-10 compliance date of October 1, 2015. In addition, the MACs help educate and provide information to Medicare providers and suppliers. For example, the MACs are to distribute information to providers and suppliers about the acknowledgement testing weeks. CMS provided technical assistance to Medicaid agencies in states and the District of Columbia to help them prepare for the ICD-10 transition. 
For example, CMS developed educational and guidance tools, such as an implementation handbook, which identified five implementation phases: (1) awareness, (2) assessment, (3) remediation, (4) testing, and (5) transition. In addition, according to the CMS official leading these technical assistance activities, CMS conducted about 60 onsite training sessions with Medicaid agencies. By January 2015, CMS officials said, the agency will have conducted site visits to 12 Medicaid agencies that need additional assistance to prepare for the transition, and will conduct additional trainings as needed. CMS also provides technical assistance to Medicaid agencies in other ways. For example, a CMS official noted that the agency advised Medicaid agencies on their testing plans and worked with them on their development of risk mitigation plans related to provider readiness. In addition, CMS officials noted that the agency hosts biweekly meetings with Medicaid agencies, which include selected external stakeholders, such as other payers and health care providers, during which Medicaid agencies share information, lessons learned, and best practices related to the ICD-10 transition. CMS also monitors the readiness of Medicaid agencies for the ICD-10 transition. For example, CMS officials noted that the agency assesses the readiness of Medicaid agencies quarterly and holds conference calls with each one. According to CMS officials, as of October 2014, all states and the District of Columbia reported that they would be able to perform all of the activities that CMS has identified as critical to preparing for the ICD-10 transition by the deadline. These critical success factors are the ability to accept electronic claims with ICD-10 codes; adjudicate claims; pay providers (institutional, professional, managed care); complete coordination of benefits with other insurers; and create and send Medicaid system reports to CMS. 
CMS officials stated that all Medicaid agencies must test each of the critical success factors and report back to CMS no later than June 30, 2015. However, as of November 26, 2014, not all Medicaid agencies had started to test their systems’ abilities to accept and adjudicate claims containing ICD-10 codes. Specifically, CMS officials told us that 2 states had completed internal and external testing, 9 states and the District of Columbia had started internal testing activities, and 16 states had started external testing activities. The remaining 23 states, according to the CMS officials, were in the process of updating their policies and systems, which needs to occur before testing begins. Therefore, Medicaid agencies may need to make system changes if testing identifies issues. Stakeholders we contacted identified several areas of concern about the ICD-10 transition, including that CMS needed to expand the number of ICD-10 testing activities, with some of those stakeholders commenting that CMS’s ICD-10 testing has not been sufficiently comprehensive. Stakeholders we contacted also noted areas of concern and made recommendations regarding CMS’s ICD-10 education and outreach efforts, and requested that the agency mitigate any additional provider burdens leading up to and following the ICD-10 transition. Testing. Twenty of the 28 stakeholders we contacted identified concerns or made recommendations related to CMS’s ICD-10 testing activities. Their comments focused on CMS’s lack of comprehensive ICD-10 testing, as well as the need to communicate the future test results to covered entities. Lack of comprehensive testing. Seventeen stakeholders raised concerns that CMS’s ICD-10 testing was not comprehensive. Specifically, some of these stakeholders were concerned that CMS has not yet conducted Medicare FFS end-to-end testing. Additionally, some of these stakeholders were concerned that CMS would not include enough covered entities in its testing. 
For example, one stakeholder expressed concern that not all provider types, such as small providers, would be represented in the planned testing. Another stakeholder was concerned that the number of testing participants may not be large enough to get a true sense of industry readiness or the ability of CMS to properly process the full range of ICD-10 codes. As previously noted, CMS officials said that the agency has scheduled Medicare FFS end-to-end testing with a total of 2,550 covered entities during three separate weeks in 2015, and identified staffing and financial constraints as the reason for limiting the number of covered entities participating in the scheduled testing. However, agency officials indicated that the number of covered entities they plan to test with will exceed the number requested by some industry groups. In addition, CMS officials said they are committed to ensuring that the testing participants are representative of the health care industry. Communicate test results. Seven stakeholders we contacted recommended CMS better communicate the agency’s readiness for the ICD-10 transition, by, for example, improving communication of test results. Two of these stakeholders indicated that doing a better job communicating test results would not only increase confidence that CMS will be prepared to process claims, but also would help providers identify modifications needed in their own coding or billing practices. CMS officials noted that the agency intends to publicly release the results of Medicare FFS end-to-end testing once the agency has completed its analysis of each of the three scheduled testing periods. Specifically, CMS’s communications plan indicates that the agency intends to report on the results of each testing period within a month of when the testing is completed. 
CMS officials told us that the report will provide details about the types and numbers of testing participants, technical challenges that arise during testing, and CMS’s plans for fixing them. Education. Twenty of the 28 stakeholders we contacted identified concerns or recommendations related to CMS’s covered entity education efforts. Specifically, these stakeholders’ comments focused on whether covered entities were aware of CMS’s educational materials to help them prepare for the ICD-10 transition. These stakeholders suggested CMS emphasize benefits from transitioning to ICD-10, as well as best practices and success stories, expand in-person training, and develop more specialty-specific materials. Covered entity awareness of educational materials. Eleven stakeholders we contacted expressed concerns about the extent to which the covered entities they represent were aware of and using the educational materials developed by CMS. In particular, while all 28 stakeholders we contacted indicated that CMS’s educational materials have been helpful to covered entities, some of them were concerned that the materials may not be reaching the covered entities most in need of them, such as solo or small physician practices, rural and critical access hospitals, nursing homes, and home health agencies. CMS officials indicated that all of the agency’s outreach efforts—as described earlier in this report—have been intended to work in concert to promote awareness of the ICD-10 transition and direct covered entities, especially hard-to-reach entities, to helpful educational materials. CMS officials stated that the agency has partnered with a number of organizations to reach covered entities, including those covered entities that some stakeholders indicated are most in need of the materials. 
Specifically, CMS partnered with WEDI to create the “ICD-10 Implementation Success Initiative,” a partnership between payers, providers, coding organizations, and other organizations to promote awareness of the ICD-10 transition by directing users to available CMS and industry educational resources. In addition, CMS officials indicated that the agency tracks the use of its educational materials by, for example, monitoring the number of documents downloaded or videos viewed, and uses the tracking information to customize and develop new information as needed. However, the agency’s monitoring activities do not provide specific information on whether the providers most in need of these materials—which stakeholders identified as solo or small physician practices, rural and critical access hospitals, nursing homes, and home health agencies— are accessing and using them. Place greater emphasis on sharing ICD-10 benefits, best practices, and success stories. Seven stakeholders we contacted suggested that CMS put greater emphasis on sharing ICD-10 benefits, best practices, and success stories in order to increase support among providers for the transition. Specifically, one stakeholder said that it would be helpful if CMS could identify “physician champions” who could discuss the benefits of transitioning to ICD-10, walk other physicians through the steps needed to prepare for the transition, and reassure them that they will not suffer financially in the process of preparing for the transition. Similarly, another stakeholder suggested that success stories could illustrate that the effort to comply with the ICD-10 transition may not be as difficult as anticipated. A third stakeholder mentioned that CMS could do more to explain how the transition to ICD-10 can create value in delivering patient care. 
CMS officials highlighted agency materials that describe benefits, best practices, and success stories that are currently available on the Road to 10 website, and also described materials they are developing. For example, CMS officials identified website materials that describe clinical, operational, professional, and financial benefits of using ICD-10 codes, which are topics that physicians identified as resonating with them; and video testimonials from physician champions. CMS officials also noted that the agency is developing additional positive testimonials and best practice resources from providers and payers, as well as ICD-10 “use cases” that will provide practical examples of how ICD-10 codes will be used in a clinical setting. Officials noted that the development of these materials is part of an effort to share positive physician experiences as a way to re-engage physicians and other covered entities following the transition delay to October 1, 2015. CMS officials indicated that this information will be posted to the CMS website in December 2014, but did not provide additional details on the specific materials they plan to develop during the period of our review. Expand in-person training and provide more advance notice of those events. Six stakeholders we contacted recommended that CMS expand its in-person training for physician practices to additional states. Initially, CMS officials indicated that they planned to hold these in-person training events in 18 states. One stakeholder remarked that every state has small or rural practices that are struggling to make the ICD-10 transition and could benefit from CMS’s training activities. Another stakeholder indicated that CMS had initially only provided a few days’ advance notice for scheduled training, and requested that CMS provide more advance notice. CMS officials said the agency is expanding the in-person trainings to additional states, beyond the 18 states noted above, where resources allow. 
Specifically, as of January 2015, officials said that they had held trainings in 11 additional states. CMS officials also indicated that the agency is collaborating with nationally and locally recognized organizations to expand training to additional states. Officials said that where resources are not available for in-person training, CMS is reviewing options to offer more video training through the ICD-10 website. In response to concerns about the notice provided for these events, CMS officials said that the Road to 10 website identifies scheduled in-person training events by location, and that the agency is working closely with the CMS regional offices, medical specialty associations, and other state and local partners to raise awareness of these events. Develop additional specialty-specific materials. Four stakeholders we contacted requested that CMS continue developing additional physician specialty-specific educational materials. For example, one stakeholder suggested that CMS develop more materials that focus on specific, practical examples of how the ICD-10 codes would be used in a clinical setting. CMS officials noted that the agency has made various specialty-specific materials available, and stated that the agency plans to add more specialty-specific educational materials to its Road to 10 website, and, as requested, will partner with stakeholders to develop materials targeted to their providers. In commenting on a draft of this report, CMS officials noted that the agency plans to develop materials for anesthesia, bariatric, general surgery, pulmonary, and renal specialties; however, they did not indicate when those materials will be made available on the website. Outreach. Nineteen of 28 stakeholders we contacted recommended that CMS take additional actions that could improve its outreach efforts. 
Specifically, stakeholders recommended that the agency communicate plans to ensure that Medicare FFS providers would be reimbursed in a timely manner; provide information on the effect of the ICD-10 transition on CMS’s quality measurement activities; contact providers through non-electronic methods, such as print media and mail; promote a greater sense of immediacy in preparing for the transition; provide information on alternative methods for Medicare claims submission; and make public CMS’s Medicare FFS contingency plans. Communicate plans to ensure Medicare FFS payment. Seven stakeholders we contacted recommended that CMS take action to ensure that providers would be reimbursed in a timely manner if CMS’s Medicare FFS claims processing systems are unable to accept and correctly process claims. These recommendations included the following: (1) expand the use of the agency’s Medicare Part B advance payment policy to account for instances where MACs are unable to receive and, therefore, pay providers’ claims; (2) reimburse Medicare providers’ claims even if there are problems with the ICD diagnosis codes submitted; and (3) allow Medicare providers to submit either ICD-9 or ICD-10 codes—referred to as dual coding—for a period of time following the October 1, 2015, transition deadline. CMS officials stated that the agency understands the importance of paying claims on time during the ICD-10 transition, is committed to working closely with providers to ensure a smooth transition, and responded to each of the recommendations: CMS officials indicated that the agency’s current authority permits CMS to determine circumstances that warrant the issuance of advance payments to affected physicians and suppliers providing Medicare Part B services, and that this authority could be used should CMS systems be unable to process valid Part B claims that contain ICD-10 codes beginning October 1, 2015. 
Under these circumstances, no action would need to be taken by the physician or supplier, nor would the agency need to publish additional criteria or modify the existing advance payment policy, according to CMS officials. CMS officials stated that the submission of valid ICD-10 codes is a requirement for payment; however, when the presence of a specific diagnosis code is not required for payment, then the claim would be paid even if a more appropriate ICD-10 code should have been used on the claim. For example, CMS officials told us that, because there are many reasons why an individual would need to go in for an office visit, office visits do not require the claim to include specific ICD-10 codes; therefore, as long as a claim for an office visit includes a valid ICD-10 code, it would be paid. Additionally, CMS officials indicated that, absent indications of potential fraud or intent to purposefully bill incorrectly, CMS will not instruct its contractors to audit claims specifically to verify that the most appropriate ICD-10 code was used. However, audits will continue to occur and could identify ICD-10 codes included erroneously on claims, which could lead to claims denials, according to CMS officials. CMS officials said that dual processing of ICD-9 and ICD-10 codes on Medicare claims is not possible given that HIPAA does not allow for the use of two different code sets at the same time. Communicate how the ICD-10 transition affects CMS programs that use clinical quality measures. Six stakeholders we contacted expressed a need for more information on how the ICD-10 transition will affect CMS programs that make use of clinical quality measures. One stakeholder suggested that there is a lack of understanding about how the ICD-10 transition will affect quality measurement reporting. 
In commenting on a draft of this report, CMS officials stated that CMS had made available a crosswalk of ICD-9 and ICD-10 codes for quality measures in the hospital inpatient and hospital outpatient quality reporting programs. Engage through non-electronic methods. Five stakeholders we contacted recommended that CMS do more to engage with covered entities through non-electronic methods. For example, one stakeholder indicated that not all of its members rely on electronic communications, instead relying on more traditional forms of receiving information—such as print media and mail—and suggested that CMS expand the methods it uses to engage with covered entities. Other stakeholders recommended that CMS work with local or regional resources, such as the Regional Extension Centers (REC), as part of a strategy to reach a broader audience. Beyond the agency’s electronic outreach efforts, CMS officials indicated that the agency employs various methods, including bi-weekly stakeholder collaboration meetings, in-person training, and print advertisements, to engage covered entities. Another activity officials noted as responsive to stakeholder feedback is the direct mail pilot project that began in August 2014, and which CMS officials said the agency plans to expand in 2015. CMS officials noted that CMS is able to track whether recipients of direct mail have accessed the agency’s ICD-10 website. Additionally, CMS officials said that, in 2012, the agency began conducting multiple trainings with the RECs on the ICD-10 transition in partnership with the Office of the National Coordinator for Health Information Technology. Promote a greater sense of immediacy. Four stakeholders we contacted recommended that CMS’s outreach efforts foster a greater sense of “immediacy” in order to convince covered entities that they should begin preparing for the transition, given that the amount of time necessary to properly prepare is significant. 
For example, one stakeholder urged CMS to strengthen its message to providers by encouraging providers to conduct specific transition-related activities, such as a systems remediation assessment. CMS officials noted that the agency has taken steps to modify the types of messages they send covered entities as the transition deadline approaches. After the most recent delay in the transition date, officials said that CMS’s messages began highlighting the practical steps covered entities could take to get started with their transition to ICD-10. For example, officials noted that CMS has issued “one year out” messages intended to help covered entities follow a one-year plan to transition to ICD-10, as well as messages that direct covered entities to detailed ICD-10 transition guidance and resource materials. CMS officials said that the agency’s messaging in 2015 will continue to focus on encouraging covered entities to begin specific, technical activities, such as by providing guidance on how to conduct end-to-end tests for ICD-10 readiness. Communicate Medicare FFS claims submission alternatives. Four stakeholders we contacted expressed concern that providers could face delays in reimbursement if they have problems making changes to their practice management or electronic health record systems to enable the electronic submission of claims with ICD-10 codes by the transition deadline; therefore, they suggested that CMS do more to communicate alternatives for submitting claims. CMS officials noted that the agency has already made available information on alternatives for claims submissions to covered entities through an MLN Matters article. These alternatives, according to the article, consist of free billing software available from the MACs’ websites or MACs’ Internet portals if the portal offers claims submissions. In addition, providers and suppliers may submit paper claim forms. 
However, this information is included in an article primarily addressing CMS’s Medicare FFS ICD-10 testing approach, and it may not be clear to covered entities that this document also communicates Medicare FFS claim submission alternatives. Officials indicated that CMS plans to publish a claims submission alternative educational product in September 2015, but if the agency learns that providers need the information sooner, they will issue the document earlier. Communicate Medicare FFS contingency plans. Four stakeholders we contacted suggested that CMS should make public the agency’s Medicare FFS contingency plans that address potential post-transition issues. Two stakeholders suggested that without a contingency plan, covered entities may doubt whether CMS is ready for the transition deadline, and that making such plans public would demonstrate CMS’s commitment to the transition date and instill confidence that CMS has a clear strategy for addressing any issues that may arise. CMS officials developed a draft contingency plan that outlines the steps CMS will take to address specific issues affecting Medicare FFS claims processing if they were to arise after the transition, but this plan has not been shared with the public. Officials indicated that they do not intend to make the contingency plan public because the information relevant to providers—that is, claims submission alternatives—has already been made available in an MLN article. CMS’s contingency plan addresses the agency’s plans in the following scenarios: if covered entities are unable to submit ICD-10 codes, if covered entities are submitting incorrect ICD-10 codes, and if CMS’s Medicare FFS claims processing systems are unable to accept and correctly process claims. To help prepare in the case of the latter scenario, the plan indicates that the agency would hold an exercise—which occurred in December 2014, according to CMS officials—to simulate the actions that could be taken in such an event. 
Officials said that if there are issues that occur on or after October 1, 2015, CMS will use its regular communication channels to educate the provider community about what is happening and what, if anything, providers need to do. Provider Burden. Seven of the 28 stakeholders we contacted expressed concerns that the burden of participating in CMS audits and various other concurrent programs is limiting and will continue to limit health care providers’ ability to focus on ICD-10 transition preparedness, and requested that CMS mitigate any additional provider burdens leading up to and following the ICD-10 transition. For example, one stakeholder suggested that CMS delay implementing any new audits, because the individuals responsible for preparing for the transition to ICD-10 codes are often the same individuals involved in responding to CMS’s audit activities. Another stakeholder indicated that a lack of staff is the greatest barrier to a successful ICD-10 transition, as providers are also trying to simultaneously comply with a number of competing health reform priorities, such as the Medicare Electronic Health Records program. In written responses to us, CMS officials stated that the agency understands the effect new audit activities have on providers. However, officials also indicated that some audits may have the potential to decrease provider burden, and that it would not be appropriate for CMS to delay all new audits. Additionally, while CMS officials did not identify specific actions the agency could take to address stakeholders’ concerns about the burden of participating in various other concurrent programs, they noted that the transition to ICD-10 is foundational to advancing health care. Specifically, CMS officials stated that the granularity of ICD-10 codes will improve data capture and data analysis, which can be used to improve patient care, and inform health care delivery and health policy. 
A successful transition to ICD-10 codes requires every health care provider, clearinghouse, and payer to prepare in advance of the October 1, 2015, transition deadline. CMS has taken multiple steps to help prepare covered entities for the transition, including developing educational materials and conducting outreach, and the majority of the stakeholders we contacted reported that both of those activities have been helpful in preparing covered entities for the ICD-10 transition. With respect to Medicare, CMS reported that the agency’s Medicare FFS claims processing systems have been updated to reflect ICD-10 codes, and it is not yet known whether any changes might be necessary based upon the agency’s ongoing external testing activities. CMS has also worked with the states to help ensure that their Medicaid systems are ready for the ICD-10 transition, but, in many states, work remains to complete testing by the transition deadline. We provided a draft of this report to HHS for comment. HHS concurred with our findings. In its written comments, reproduced in appendix I, HHS stated that it is committed to helping address stakeholders’ needs and to working with those that need additional assistance to prepare for the transition. The Department detailed various methods it has used and is using to prepare stakeholders, Medicare FFS claims processing systems, and state Medicaid agencies for the transition. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Gregory Giusto, Assistant Director; Nick Bartine; Shannon Legeer; Drew Long; and Jennifer Whitworth made key contributions to this report.
In the United States, every claim submitted by health care providers to payers—including Medicare and Medicaid—for reimbursement includes ICD codes. On October 1, 2015, all covered entities will be required to transition to the 10th revision of the codes, requiring entities to develop, test, and implement updated information technology systems. Entities must also train staff in using the new codes, and may need to modify internal business processes. CMS has a role in preparing covered entities for the transition. GAO was asked to review the transition to ICD-10 codes. GAO (1) evaluated the status of CMS's activities to support covered entities in the transition from ICD-9 to ICD-10 coding; and (2) described stakeholders' most significant concerns and recommendations regarding CMS's activities to prepare covered entities for the ICD-10 transition, and how CMS has addressed those concerns and recommendations. GAO reviewed CMS documentation, interviewed CMS officials, and analyzed information from a non-probability sample of 28 stakeholder organizations representing covered entities and their support vendors, which GAO selected because they participated in meetings CMS held in 2013 or met GAO's other selection criteria. GAO provided a draft of this report to HHS. HHS concurred with GAO's findings and provided technical comments, which GAO has incorporated, as appropriate. The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), has undertaken a number of efforts to prepare for the October 1, 2015, transition to the 10th revision of the International Classification of Diseases (ICD-10) codes, which are used for documenting patient medical diagnoses and inpatient medical procedures. 
CMS has developed educational materials, such as checklists and timelines, for entities covered by the Health Insurance Portability and Accountability Act of 1996 (HIPAA)—that is, health care providers, clearinghouses, and health plans, which GAO refers to as “payers”—and their support vendors. In addition, CMS has conducted outreach to prepare covered entities for the transition by, for example, holding in-person training for small physician practices in some states. CMS officials have also monitored covered entity and vendor readiness through stakeholder collaboration meetings, focus group testing, and review of surveys conducted by the health care industry. CMS also reported modifying its Medicare systems and policies. For example, CMS documentation states that the agency completed all ICD-10-related changes to its Medicare fee-for-service (FFS) claims processing systems, which reflect the results of internal testing. At this time, it is not known what, if any, changes might be necessary based upon the agency's ongoing external testing activities. CMS has also provided technical assistance to Medicaid agencies and monitored their readiness for the transition. For example, all Medicaid agencies reported that they would be able to perform all of the activities that CMS has identified as critical by the transition deadline; however, as of November 2014, not all agencies had started to test their systems' abilities to accept and adjudicate claims containing ICD-10 codes. Stakeholder organizations identified several areas of concern about the ICD-10 transition and made several recommendations, which CMS has taken steps to address. For example, stakeholders expressed concerns that CMS's testing activities have not been comprehensive. 
In addition, while all 28 stakeholders GAO contacted indicated that CMS's educational materials have been helpful to covered entities, stakeholders were concerned about the extent to which those entities were aware of and using those materials. In response, CMS officials said that the agency has scheduled end-to-end testing with 2,550 covered entities during three weeks in 2015 (in January, April, and July), and has promoted awareness of its educational materials by, for example, partnering with payers, providers, and others to direct users to available CMS and industry educational resources. Stakeholders also recommended that CMS expand its in-person training and develop additional specialty-specific materials. CMS officials said the agency has added in-person training in additional states with plans to also offer more video trainings, and planned to develop additional specialty-specific materials. Additionally, stakeholders recommended that CMS do more to engage covered entities through non-electronic methods and to make its Medicare FFS contingency plans public. CMS officials indicated that the agency employs various methods to engage covered entities—including bi-weekly stakeholder collaboration meetings and print advertisements—and also conducted a direct mail pilot project to primary care practices in four states, and plans to expand the pilot. CMS officials also indicated that the information in the agency's contingency plans that is relevant to providers is currently publicly available.
FTAs—which phase out barriers to trade in goods with particular countries or groups of countries and contain rules designed to improve access to foreign markets for U.S. goods, services, and investment—remain a major component of U.S. trade policy. Collectively, according to USTR, FTAs with 20 countries accounted for about 40 percent of U.S. trade in goods in 2013. Eleven of these 20 FTAs were negotiated under trade promotion authority in the Trade Act of 2002, went into effect from 2003 through 2013, and involved 16 partner countries. The 20 FTAs involve partners ranging from high income countries such as Australia, Bahrain, Singapore, Chile, Korea, Oman, and Canada to lower middle income countries such as Morocco, El Salvador, Guatemala, Honduras, and Nicaragua. Colombia, Costa Rica, the Dominican Republic, Jordan, Mexico, Panama, and Peru are upper middle income countries. The two CAFTA-DR countries (El Salvador and Guatemala) and the two countries with bilateral FTAs (Peru and Chile) that we visited reflect a range of national per capita incomes. See fig. 1. Chile, El Salvador, Guatemala, and Peru also face a range of environmental challenges. Chile—According to Chilean officials, the country’s environmental challenges (described later in this report) relate to natural resource and extractive industries, including mining, fishing, forestry, and agriculture. For example, Chile’s agriculture and mining sectors are water-intensive, placing strains on its water supply. Chile is a top producer and exporter of fish, and mitigating the fishing sector’s water pollution and impact on ecosystems is a challenge. The United States and Chile have been working together under the framework of the FTA as well as a June 2003 environmental cooperation agreement and related work program to assist Chile in addressing its environmental challenges and FTA commitments. El Salvador and Guatemala—countries under CAFTA-DR. 
According to Salvadoran and Guatemalan officials, environmental challenges facing one or both of these countries include preservation of biodiversity and threats to endangered species, water and air pollution, deforestation, and degradation of marine resources. To assist these countries in addressing their environmental challenges and FTA commitments, the United States and these countries have been working together under the framework of the FTA as well as a February 2005 environmental cooperation agreement and related work programs. Peru—Peru has about 13 percent of the world’s tropical forests. These forestry resources include forestry concessions used for logging, natural protected areas, and reservations for indigenous communities. Peru’s myriad environmental challenges include deforestation and water contamination from small mining operations. Because of concerns related to illegal logging, the FTA included an annex on forest sector governance. To address Peru’s environmental challenges and FTA commitments, the United States and Peru have been working together under the framework of the FTA as well as a July 2006 environmental cooperation agreement and related program. The 11 agreements negotiated under trade promotion authority include environmental chapters as part of the agreements. The environmental chapters contain several provisions that call for strengthened environmental protection and allow for increased public participation in the implementation of environmental provisions. For example, the 11 FTAs include provisions for formal opportunities and mechanisms for public participation in government environmental decision making. 
Four of the 11 FTAs additionally provide for submissions to a secretariat by persons or organizations of an FTA partner country asserting that the FTA partner country is failing to effectively enforce its environmental laws; the secretariat is to consider these claims and may develop a report, known as a “factual record,” about them. CAFTA-DR, which entered into force in 2006, was the first to include such a mechanism, and the Peru FTA also includes one. The three FTAs that were the focus of our review (Chile FTA, CAFTA-DR, and Peru FTA) contain provisions regarding partner countries’ levels of protection and strengthening of their environmental laws. For example, these three FTAs commit each partner country to ensure that its laws provide for high levels of environmental protection and to strive to continue to improve those laws or levels of protection; to strive to ensure that it does not waive or otherwise derogate from such laws in a manner that weakens or reduces the protections afforded in those laws as an encouragement for trade with another party; and to ensure that judicial, quasi-judicial, or administrative proceedings are available under its laws to sanction or remedy violations of its environmental laws. Finally, each of the three FTAs commits the parties “to not fail to effectively enforce its own environmental laws, through a sustained or recurring course of action, in a manner affecting trade between the parties.” This provision is enforceable, meaning a party that fails to comply is subject to dispute settlement, under the respective FTA. 
The Peru FTA, which entered into force following the May 10, 2007 Bipartisan Agreement, also includes commitments that each party shall adopt, maintain, and implement laws, regulations, and other measures to fulfill certain obligations under seven listed multilateral environmental agreements, and that the parties will not fail to enforce their laws, regulations, and other measures to fulfill those multilateral environmental agreement obligations. Unique among FTAs, it also contains an annex on forest sector governance with detailed requirements pertaining to improved forest management and protection of CITES-listed endangered species, including big-leaf mahogany and Spanish cedar. According to USTR officials, an Environmental Affairs Council, involving both trade and international environment ministries of the FTA parties (State and USTR in the United States), is charged with overseeing the FTA environmental chapter. Each of the 11 FTAs includes an article on environmental cooperation that provides for, among other things, negotiation of a separate agreement on environmental cooperation among the parties to the FTA. State has the lead in negotiating and administering these agreements, which have typically been concluded about a year after the FTAs enter into effect; implementation is overseen by an Environmental Cooperation Commission. Environmental cooperation is programmed through agreed-upon work plans covering specified periods, and involves such activities as professional exchanges, workshops, and in-country projects aimed at improving laws, institutions, and practices that may involve U.S. funding. State is primarily responsible for overseeing implementation of environmental cooperation activities, while USTR is responsible for the negotiation and administration of trade agreements and has lead responsibility for monitoring and enforcing compliance with FTA commitments. 
State supports cooperation activities that enable FTA countries to develop capacity to meet their FTA environmental commitments; U.S. agencies with expertise in relevant areas carry out cooperation activities. According to USTR officials, they rely on a network of other agencies to support their monitoring and enforcement functions. Accurate monitoring is necessary to track the extent to which partners are complying with FTA commitments and ensure that resources are targeted to areas where they can be most effective, according to U.S. officials. According to officials in Chile, Peru, El Salvador, and Guatemala, their countries passed or made changes to their environmental laws and established or strengthened environmental institutions since signing their respective FTAs. U.S. agencies worked with these partners under environmental cooperation agreements to help them build capacity to meet FTA environmental commitments. However, these countries continue to face challenges, including limited technical capacity and inadequate resources for enforcing environmental protection. According to State, Chile has taken steps to strengthen environmental standards since signing the FTA and improved enforcement of its environmental laws. In addition, according to Chilean officials, the country continues to take steps to increase transparency of government information on the environment and has made progress in increasing public participation in government environmental decision making. According to U.S. and Chilean officials, much of the progress was the result of a new environmental law, approved in 2010, that they said reformed the country’s environmental framework by establishing new agencies. For example: Environment Ministry—along with the president, is responsible for designing and implementing environmental policies, plans, and programs. Environmental Superintendency—responsible for enforcing environmental laws and regulations, including assessing fines and sanctions for violations. 
Environmental Evaluation Service—responsible for managing and modernizing Chile’s environmental impact assessments for both public and private sector projects to ensure compliance with applicable environmental standards. Environmental tribunals—responsible for judicial review of Environmental Superintendency decisions and settling environmental challenges or resolution of environmental damage claims, and hearing other environmental cases. According to Salvadoran officials, a key step in El Salvador’s meeting its CAFTA-DR environmental commitments was the 2012 reform of its 1998 Environmental Law. The law established the Ministry of Environment and Natural Resources, with the authority to protect the environment and conserve, restore, and promote the use of natural resources. The ministry is responsible for implementing the environmental law as well as the Natural Protected Areas and Wildlife Conservation laws. According to Salvadoran officials, the 2012 reform allowed for the creation of environmental tribunals and establishment of the tribunals as legal authorities responsible for determining the civil liability for acts against the environment. According to State officials, another important step was taken in 2013, when the ministry launched its National Environment Policy and Strategy to address sanitation, biodiversity, climate change, and water resources. In addition, according to a Salvadoran official, the establishment of an environmental “hotline” in 2011 increased the opportunity for public participation in oversight. According to a Guatemalan official, Guatemala’s legal reforms established a Ministry of Environment and Natural Resources in 2000. The ministry is responsible for formulating and implementing environmental policies and approves and oversees compliance with environmental impact assessments. Guatemala joined the CITES convention in 1980. 
According to a Guatemalan Environment Ministry official, its 2013 climate change legislation was another important step in meeting the CAFTA-DR environmental commitments. Environmental regulations are also included in other sector-specific laws and regulations, including the Forest, Hydrocarbons, and General Fishing and Aquaculture laws, according to a ministry official. According to this official, other key steps include the ministry’s creation in 2011 of the Environmental Auditing Unit and the setting up of the environmental hotline for persons to make environmental submissions. Although we focused on El Salvador and Guatemala for our site visits, CAFTA-DR officials we met with in Washington, D.C., provided examples of steps their countries have taken to improve environmental protection. For example, according to these officials, Honduras improved waste water treatment; the Dominican Republic created programs for issuing environmental permits; and Costa Rica increased its use of renewable energy. U.S., Peruvian, and NGO officials credit the FTA with helping the country take steps to improve environmental protection. Peruvian officials cited actions that include establishing the Environment Ministry, with an investigative arm, to verify compliance with environmental legislation and oversee the process of obtaining environmental impact assessments; establishing an independent forestry oversight body to conduct audits of forestry concessions and take administrative enforcement actions, assess monetary fines, and cancel concessions for noncompliance; and enacting a new Forestry and Wildlife Law in 2011. According to State officials, Peru also undertook a public consultations process to collect input from all interested stakeholders, including indigenous communities, on draft implementing regulations for the new law. 
In addition, according to USTR, Peru did the following: adopted laws and administrative procedures for managing and supervising the issuance of export permits for big-leaf mahogany and Spanish cedar, two endangered CITES-listed timber species; strengthened the Organismo de Supervision de los Recursos Forestales y de Fauna Silvestre (OSINFOR), an independent forestry oversight body that conducts post-harvest audits to limit illegal logging; amended the criminal code to include substantial penalties for illegal activities related to the environment, such as illegal logging and wildlife trafficking; and developed and began implementing a national anti-corruption plan for forestry and wildlife. According to U.S., Peruvian, and NGO officials, the FTA helped motivate Peru to establish its National Forest Service and increase staff and capacity of OSINFOR. For example, according to Peruvian officials, post-harvest verifications of concessions increased from 49 in 2009 to 1,255 in 2013. According to the Ministry of Culture, which is responsible for addressing issues that affect indigenous people, since entry into force of the FTA and enactment of the new environmental laws, Peru has taken steps to increase consultation with indigenous groups during the drafting of regulations to help them benefit from resources located on their traditional lands. Implementation of the new environmental law and sustainable environmental protection will require finalization and implementation of regulations, and the setting up of viable mechanisms to assure the law is implemented. The overall objective of U.S. environmental cooperation with Chile was to establish a framework for cooperation to promote the conservation and protection of the environment, the prevention of pollution and degradation of natural resources and ecosystems, and the rational use of natural resources in support of sustainable development. According to U.S.
and Chilean officials, Chile benefited from cooperation activities, including exchange programs with EPA, Interior, the Park Service and NOAA. For example, in 2010, Chilean officials traveled to the United States to observe how EPA addresses environmental disasters. According to officials from the Ministry of Environment, assistance provided by Interior and EPA in the areas of biodiversity, oversight of protected areas, conservation of endangered species, and implementation of multilateral environmental agreements has been valuable. Chile continues to receive help in meeting FTA commitments. For example, a World Wildlife Fund official described the fund’s work with the Chilean government, under a grant from State, to help Chile implement its 2013 fishing and aquaculture law, which aims to protect vulnerable marine ecosystems and bring Chile’s practices in line with international standards. Chilean officials stated that EPA and Interior helped them establish the new Risk Assessment Unit at the Ministry of Environment, focusing on hazardous chemicals and pesticides and their impact on biodiversity. As part of this effort, EPA and Interior hosted Chilean officials in the United States to share best environmental practices for mining, strengthen processes for mine closure, and improve risk assessment and evaluation through workshops and a study tour, according to EPA officials. EPA also provided assistance for managing risk assessment related to working and closed mines. According to Chilean Ministry of Environment officials, in 2014, the Risk Assessment Unit plans to complete a map of contaminated areas and implement the National Environment Risk Assessment Plan that it developed in 2012, with assistance from EPA. Since 2004, the government of Chile has worked in partnership with NOAA to develop a cooperation program on marine protected areas that includes training. 
Since 2009, NOAA’s National Ocean Service has worked with the National Park Service to establish an interagency cooperation program on marine and terrestrial protected areas with the government of Chile. Representatives of Chile’s Forest Service we met with in Santiago indicated that their agency has had very good relationships with several national parks, Interior’s National Park Service, and the U.S. Forest Service under “sister park-to-park memorandums of understanding.” For example, a 5-year agreement with Yosemite National Park and an arrangement with Redwood National Park in California have enabled Chilean Forest Service officials to make several U.S. visits to observe how to manage wildfires. According to Chilean Environment Ministry officials, in 2009, NOAA, with the support of the National Park Service, organized a Chilean delegation visit to Glacier Bay National Park and Preserve, and 2 years later, the United States and Chile developed the first bilateral marine-related sister park agreement. According to State, the CAFTA-DR parties identified the following four thematic areas for environmental cooperation: (1) institutional strengthening; (2) biodiversity and conservation; (3) market-based conservation; and (4) private sector performance. Government and NGO officials we met with in El Salvador and Guatemala confirmed that cooperation activities with U.S. agencies have helped these countries improve their capacity to meet CAFTA-DR environmental commitments. In addition, according to U.S. officials, State and USAID have supported the operations of the CAFTA-DR Secretariat for Environmental Matters.
USAID and other agencies, including Interior, EPA, Justice, and NOAA, supported capacity building activities to help strengthen institutions to improve environmental protection. For example, according to USAID officials, USAID supported the Central American Commission on Environment and Development by providing equipment and software to strengthen information and management systems in El Salvador. To support biodiversity and natural resource conservation, Interior helped El Salvador draft and adopt new legislation to achieve CITES category 1 status, indicating that El Salvador’s legislation fully implements the CITES agreement, according to Interior officials. These officials stated that Interior supported the creation of the Central American Wildlife Enforcement Network, in which El Salvador and Guatemala are participants. The network brings together government officials across relevant agencies to combat the illegal wildlife trade at national and regional levels. The wide range of activities Interior worked on for the network included arranging technical advice by the U.S. Fish and Wildlife Service and supporting countries in developing studies to update scientific information. According to USAID officials, to improve private sector environmental performance and cleaner production, USAID and the Central American Commission on Environment and Development completed cleaner production assessments in the pig, poultry, and dairy sectors to identify areas for savings through improved production practices. At one dairy production facility that we visited, USAID and its partner, the World Environment Center, provided assistance that helped decrease the cost of production by reducing inputs. World Environment Center consultants pointed out several instances in which the company was wasting raw materials, energy, and other inputs such as water, and helped the firm streamline operations, according to an official at the facility. (See fig. 2.)
According to the Salvadoran Minister of Environment, EPA provided expertise in areas where El Salvador could not have addressed issues alone; for example, by providing assistance for setting standards for lead contamination and equipment, and advice to help set up a national water quality reference laboratory. (See fig. 3.) EPA and Justice participated in discussions with Salvadoran officials in 2008 regarding El Salvador’s efforts to establish environmental tribunals and conducted a workshop on environmental adjudication for judges from El Salvador in 2009. The Minister of Environment credited assistance from EPA and Justice with helping the National Police and Attorney General establish environmental units to address violations of environmental laws and regulations. This assistance included attorneys from Justice and EPA giving presentations and providing training in El Salvador, Guatemala, and Costa Rica, according to Justice officials. According to U.S. and Guatemalan officials, environmental cooperation activities in Guatemala have helped strengthen Guatemalan institutions and capacity to meet environmental standards and enforce its environmental laws. According to U.S. and Guatemalan officials, USAID-supported activities helped the Guatemalan government draft a cleaner production policy and encouraged market-based conservation. For example, a related State-sponsored program also supported a number of training activities and has helped introduce cleaner production into the curriculum in universities in the CAFTA-DR countries. At the Guatemalan Cleaner Production Center, an official told us that the project was instrumental in expanding their work with the Ministry of Environment and Natural Resources and private enterprises, and that it had a major impact in both strengthening institutions and helping small- and medium-sized enterprises participate in the cleaner production program that decreased electricity and water use. According to this official, U.S. support helped the center and the Ministry of Environment and Natural Resources decrease the cost of production by reducing the waste of raw materials, energy, water, and other inputs and helped the ministry achieve the following key steps: Issuing a National Cleaner Production Policy in 2010—In 2011, Guatemala established a National Cleaner Production Committee to implement the policy. Conducting environmental diagnostics by sector—The center is partnering with industry associations to help companies implement recommended improvements. Working in rural Guatemala—The center provided training to familiarize 10 small- and medium-sized restaurants and hotels in the Panajachel region with legal requirements for environmental protection and helped them implement cleaner technology. According to a ministry official, EPA assistance helped Guatemala set up a reference laboratory for testing waste water by providing training, books, manuals, and equipment that enable them to analyze chemical, physical, and microbiological samples. According to this official, the waste water reference laboratory is recognized as one of the best in the region. According to Guatemalan officials, assistance from Justice and EPA helped improve the capacity of judges and adjudicators to address environmental issues. For example, Justice and EPA officials spoke at two workshops for Guatemalan judges on environmental adjudication that addressed subjects including principles of environmental law; regulations; permits; health- and ambient-based standards; public participation in environmental decision making; and case management by judges. According to these officials, workshops also addressed civil and criminal penalties.
Under a State-funded cooperative agreement (grant), the Humane Society International conducted a number of outreach and public awareness projects centered on biodiversity and endangered species conservation in Guatemala and other CAFTA-DR partner countries, according to State officials. For example, the Humane Society International awarded grants to two NGOs in Guatemala for public outreach campaigns that have reached 195,000 people, including students and farmers. According to a Humane Society International official, getting people from the public and private sectors to recognize the importance of environmental protection and compliance is challenging, but many, including regional government officials, are beginning to value environmental preservation. We visited a wildlife rescue center run by a Humane Society International grant recipient. The grant was used to help maintain the center for endangered species, including construction of a quarantine center for birds and mammals. (See fig. 4.) We also visited a USAID-supported community forestry enterprise sawmill, run by a timber industry consortium representing a group of companies that have logging concessions in the protected area. USAID funding to the Rainforest Alliance helped the group improve certification of logging for timber production, according to an executive at the sawmill. The objective of the U.S. environmental cooperation agreement with Peru was to establish a framework for enhancing bilateral and regional environmental cooperation between the parties to protect, improve, and preserve the environment, including the conservation and sustainable use of their natural resources. According to USTR officials, the focus of cooperation activities was on commitments included in the FTA forest sector annex and improvement of forest sector governance. 
Key themes and objectives outlined in the work program include: institutional and policy strengthening for effective implementation and enforcement of environmental laws; transparency and public participation in environmental decision making and enforcement; community and market-based activities; and improved environmental performance in the productive sector. To support these objectives and help Peru build capacity to combat trade associated with illegal logging, enhance forest governance, and promote legal trade in timber products, U.S. agencies supported a variety of cooperation activities. For example, according to officials, Interior provided assistance to help develop the institutional framework for effective management of indigenous territories, as well as sustainable forest, fisheries, and wildlife management. A USAID project helped Peru develop regulations for implementing its new forestry and wildlife law and enhance public participation in environmental decision making. In addition, a representative from an organization representing indigenous communities said that USAID helped his group recognize the threat that subsistence farming by poor indigenous people poses to the forest and identify options that are less damaging. The U.S. Forest Service is also helping Peru build technical capacity for forest management and monitoring, and improve institutional capacity for forest administration, according to its officials. Forest Service officials said a key activity to help Peru increase effective implementation and enforcement of its environmental laws and multilateral commitments was a project to develop a prototype for an information and control system for forest and wildlife resources. The system tracks timber that originates from the Amazon forest in Peru by creating electronic records of harvested timber that allows officials to monitor the flow of the timber, including CITES-listed species.
Forest Service officials also said the prototype is designed to replace a paper-based system, which is subject to fraud and falsification of documents, and allow operators to limit opportunities for illegally harvested timber to enter the chain of custody. The U.S. Forest Service has also helped Peru provide training and outreach to civil society—particularly indigenous communities—in order to solicit comments on the draft regulations and enhance public participation in environmental decision making. In addition, the U.S. Forest Service helped Peru implement an online portal to gather public comment on the regulations, and provided training to Peru on management and analysis of public comment. Finally, the U.S. Forest Service supported Peru in developing the methodology, management, dissemination, and legal framework for field-based forest inventory data as well as remote sensing data for improved coordination and management of the Amazon forest, according to officials. (See fig. 5.) Chile continues to face environmental challenges, including water contamination due to mining and fish farming, according to U.S. officials. In addition, threats to biodiversity and marine biodiversity are ongoing concerns, according to U.S. and Chilean officials. Officials also identified the challenge of the government’s limited technical capacity and resources in the regions to conduct investigations and enforce environmental laws and standards. In January 2013, the United States and Chile approved a work program that established priorities for cooperation activities. These priorities include strengthening implementation and enforcement of environmental laws and encouraging development of sound environmental practices, according to State. El Salvador and Guatemala continue to face environmental challenges, including water pollution, deforestation, threats to endangered species, and limited capacity and resources to enforce environmental laws, according to U.S. officials. 
According to Salvadoran officials, water quality remains a concern in El Salvador, where only 5 percent of surface water meets water quality standards. Other challenges include toxic waste from abandoned industrial facilities, destruction of forests and habitat for endangered species, air pollution, urban sprawl, solid waste pollution, coastal pollution, and depletion of marine resources, according to officials. U.S. officials cited limited resources and technical capacity at the Ministry of Environment and Natural Resources to enforce environmental laws and regulations as an ongoing challenge. Similarly, Guatemala faces challenges including water pollution, deforestation, and illicit trade of endangered species, according to Guatemalan officials. According to an official from the Environment Ministry, of key concern are water contamination and degradation of the country’s watersheds and the impact of climate change. Guatemalan officials stated that limited capacity and resources continue to challenge Guatemala’s ability to enforce environmental laws. Guatemalan officials also cited the high turnover in ministries and agencies responsible for environmental protection as a challenge. Peru continues to face environmental challenges, including illegal activities on protected lands, illegal logging and mining, and limited resources and enforcement capacity, according to U.S. officials. Deforestation for agricultural production, including production of palm oil and illegal cultivation of coca, was also cited by an industry group and NGOs as a growing threat. They said that while implementation of the FTA forest sector annex focuses on illegal logging of CITES-listed species, deforestation and conversion of the forest to agricultural production, including illegal coca cultivation and expanded palm oil and coffee cultivation, pose a significant threat to conservation of forest resources.
Timber officials said that small-scale gold mining in the forest is also contributing to deforestation in the Peruvian Amazon forest. In addition, according to NGO officials, lack of resources and low wages for government officials limit environmental enforcement. An NGO official also noted that cuts in funding for OSINFOR, the agency responsible for auditing forest concessions, have led to a drop in inspections, limited training opportunities for regional staff, and restricted the agency’s ability to enforce sanctions. According to this official, decentralization of authority to regional governments strains resources for environmental protection, because funds have not been allocated to support delegation of functions from the central government to the regions. In addition, NGO officials noted the lack of coordination between the Ministries of Environment and Agriculture. For example, according to an NGO official, the Ministry of Agriculture is allowing the expansion of deforestation for agricultural production, while the Ministry of Environment’s mission is to protect the forest from clear-cutting, which removes all trees from a given tract of forest, threatening the area’s ecological integrity. Representatives from the United States and other CAFTA-DR countries have established a Secretariat for Environmental Matters. The United States is also working with Peru, Panama, and Colombia to establish similar institutions. The secretariat provides a means for members of the public to submit allegations that a country is failing to enforce its environmental laws and, by investigating and publishing the findings, can bring pressure on the FTA countries to increase enforcement. The United States, Peru, and the Organization of American States continued to negotiate the establishment of the Peru Secretariat for Submissions on Environmental Matters as of July 2014, more than 5 years after the FTA entered into force.
The FTA commits the United States and Peru to establish a secretariat to receive submissions about a party’s effective enforcement of environmental laws. In 2010, the Environmental Cooperation Commission agreed to an Environment Cooperation Work Program for 2011-2014, which listed establishing a secretariat to receive submissions on environmental matters as an objective. According to USTR and State officials, the negotiations have included a number of complex issues, such as the structure of the secretariat’s substantive functions and financial issues arising from the secretariat’s operations. As of June 2014, USTR and State officials informed us that the United States and Peru had concluded negotiations on a bilateral Secretariat Agreement and on three Environmental Affairs Council decisions pertaining to issues such as staffing. USTR also indicated that the United States, Peru, and the Organization of American States agreed to the final terms of an agreement. According to USTR and State officials, final documents are all expected to be signed by the parties and approved by the council before the end of 2014. In the absence of an operational Secretariat for Submissions on Environmental Enforcement Matters, the United States and Peru established an interim procedure to receive environmental submissions from the public, consistent with the FTA. According to these procedures, both governments shall provide written responses to submissions addressed to them, and shall make such submissions and responses publicly available in a “timely and accessible manner.” In June and July 2013, USTR received submissions—in the form of two letters—from an environmental organization based in Lima expressing concerns over palm oil cultivation in Peru to the detriment of Peru’s forests and wildlife. 
On July 31, 2013, in its response to the organization, USTR indicated that United States officials had been in touch with their governmental counterparts in Peru and would consider raising the matter with Peru at the Environmental Affairs Council. USTR also referred the representative from the environmental organization to a June 2013 joint communiqué that highlighted the interim procedure for submissions, and recommended that the environmental organization engage with the Peru Ministry of Trade. The organization sent updates to the June and July letters, and USTR officials told us that they met with representatives from the environmental organization on two occasions after providing the July response, in November 2013 and March 2014. U.S. funding resources for FTA-related cooperation activities have declined since fiscal year 2009, because of a decline in CAFTA-DR funding and shifting budget priorities. In fiscal year 2013, funding for cooperation activities to countries under CAFTA-DR was 18 percent of its fiscal year 2009 level, while funding under the Peru FTA was 41 percent of its fiscal year 2009 level. In general, funding levels for cooperation activities have varied for partner countries under FTAs for fiscal years 2004 through 2013. For example, cooperation activities in Peru and CAFTA-DR partner countries received over 90 percent of the almost $151 million of funding for cooperation activities from fiscal years 2004 through 2013. More specifically, countries under CAFTA-DR received over $87 million in funding for cooperation activities from fiscal years 2004 through 2013, and Peru received nearly $49 million from fiscal years 2009 through 2013. Cooperation activities in Chile received over $4 million in funding since fiscal year 2009, as well. Funding for environmental cooperation to other FTA countries—including Morocco, Oman, Panama, and Jordan—totaled almost $10 million from fiscal years 2004 through 2013. (See fig. 7.)
State and USAID provide funds for cooperation activities, which are spelled out in environmental cooperation agreement work plans between the United States and an FTA country and are intended to achieve specific long-term goals. For example, the CAFTA-DR environmental cooperation program has structured cooperation activities under the following five objectives: (1) institutional strengthening for effective implementation and enforcement of environmental laws; (2) multilateral environmental agreements, biodiversity, and conservation; (3) market-based conservation; (4) improved private sector environmental performance; and (5) implementation of specific CAFTA-DR commitments. For CAFTA-DR, State and USAID provided the largest share of funding (45 percent) for strengthening environment-related legal institutions. For the Peru FTA, USAID provided nearly all of the $49 million in funding to support forest conservation activities, of which 51 percent was used to implement activities through a contractor and 49 percent through the U.S. Forest Service. For the Chile FTA, State provided the largest portion of funding (35 percent) for development of environmental practices and technologies. Since 2009, State has improved its management and monitoring of U.S.-funded FTA cooperation activities by working with the Organization of American States and a private firm that specializes in monitoring and evaluation. According to USTR officials, a focus of its efforts to monitor compliance and implementation of environmental commitments has been the Peru FTA and its Annex for Forest Sector Governance. In terms of the remaining 19 FTA partners, USTR has taken initial steps to improve monitoring of FTA partner country compliance with environmental commitments—such as developing a monitoring plan. However, its monitoring plan lacks key elements, such as indicators and time frames, to effectively track progress and partner countries’ compliance with their FTA commitments.
Internal control standards require establishing and reviewing performance measures and indicators and conducting ongoing monitoring to assess the quality of performance over time. Since our 2009 report, State has improved its monitoring and reporting of the results from environmental cooperation activities. In 2009, we found that State lacked mechanisms that would allow it to assess the effectiveness or efficiency of cooperation activities, among other things. Since then, State has funded contracts with the Organization of American States and a private firm to improve monitoring, evaluation, and reporting of cooperation activities in CAFTA-DR countries and bilateral FTA partner countries such as Chile, Morocco, and Oman, according to State officials. USAID manages and funds cooperative efforts related to improving Peru’s forest sector governance and therefore is responsible for monitoring activities in the Peru Forest Sector Initiative, according to State and USAID officials. State officials told us that USAID program managers in Peru provide them with quarterly reports. Since 2009, the Organization of American States has published four reports on monitoring and evaluation of cooperation activities in CAFTA-DR countries. Implementing agencies that provide assistance to the FTA countries, such as Interior, provide data and information on the results of the cooperation activities, according to Organization of American States officials. These officials then analyze the information and publish a public report, on behalf of State, on the extent to which the cooperation activities are achieving the broader objectives of the environmental cooperation agreement. Organization of American States officials added that, initially, implementing agencies did not align their performance indicators with the objectives of the environmental cooperation agreement.
As a result, sometimes the activities did not reflect the needs of the partner country, and it was difficult to measure progress that resulted from the activities. According to Organization of American States officials, they conducted workshops for officials in the implementing agencies on developing indicators that focused on results of the cooperation activities. In addition, these officials assisted implementing agencies by providing them with the performance management framework that the Organization of American States created to guide the efforts of the implementing agencies, which reduced the amount of duplicative activities implementing agencies conducted with partner countries in the region, according to Organization of American States officials. Organization of American States officials also developed a reporting template and distributed it to implementing agencies to streamline data collection. This addressed a problem identified in the Organization of American States’ first evaluation report of cooperation activities—that aggregating information to assess the extent to which cooperation activities achieved results was difficult because of the lack of a standardized reporting format. In addition, Organization of American States officials worked with officials from the partner countries to get them to agree on broad environmental principles and priorities and have them develop a vision of what environmental improvements they wanted to achieve. Next, the Organization of American States officials linked specific projects to those broad goals of the partner countries and the environmental cooperation agreement, according to Organization of American States officials. Similarly, since 2010, State has contracted with a private sector firm to monitor and evaluate cooperation activities in Chile, Morocco, and Oman.
According to a representative of the firm, these efforts included: Developing indicators that measured the results of the cooperation activities, rather than measuring the amount and types of activities conducted. Conducting workshops with officials from implementing agencies to assist them with developing indicators suited to measuring the impact of their activities. In these workshops, representatives from the private sector firm also provided training to officials from implementing agencies on ways to harmonize indicators to streamline the reporting. This included developing harmonized indicators based on State’s standard foreign assistance indicators, known as F-indicators. For State and some implementing agencies, using the F-indicators as a basis to develop additional indicators for measuring outcomes of the cooperation activities was important, because State and other agencies are required to report on them separately. Assisting implementing agencies with developing a reporting template, using the harmonized indicators, to streamline the data collection process. According to implementing agencies, the reporting template has simplified processes, giving them a more defined scope when developing cooperation activities, and has enabled them to report with targeted, specific indicators and outcomes. In addition, State has created formal and informal mechanisms to share information with stakeholders involved with conducting cooperation activities and officials in the partner countries. For example, State established a website to provide information to stakeholders involved in implementing cooperation activities in CAFTA-DR countries. In addition, State has instituted quarterly conference calls between agencies managing projects in the region to support coordination and avoid duplication of efforts. 
USTR has pledged to vigorously monitor and enforce trade agreements, and it included specific goals related to implementation of FTAs and FTA environmental provisions in its recent strategic and performance plans. USTR's 2014-17 Strategic Plan states that USTR will establish and lead a robust interagency program for monitoring implementation of FTA labor and environment obligations across all FTA partners. It also states that USTR will promptly analyze issues identified through monitoring and develop appropriate strategies to resolve them. In addition, its 2015 Performance Budget sets a performance goal to monitor implementation of each of the FTAs to ensure full compliance with all FTA and related commitments. The Government Performance and Results Act Modernization Act of 2010 sets several requirements for performance plans, including that they provide a basis for comparing results. For example, performance plans should include a basis for comparing actual program results with the established performance goals and a description of how the goals are to be achieved, including defined milestones. USTR monitors implementation of FTA environmental commitments in part through the Environmental Affairs Councils established under the agreements. (Some FTAs provide for establishment of a body under a different name; such bodies function in the same manner as an Environmental Affairs Council.) In some instances, officials from USTR meet with officials from FTA partner countries several times within a year, according to USTR officials. For example, USTR officials told us that they have met—either through the council or informally—with cabinet-level officials from CAFTA-DR countries to discuss implementation and environmental challenges and to coordinate activities by country or regionally. Most FTAs in our review also require that the councils host a public session, unless the United States and the FTA partner agree otherwise.
USTR officials stated that the public sessions provide an opportunity for citizens and groups to introduce environmental issues to USTR and the FTA partner country and to raise implementation issues; these issues can become part of ongoing discussions or future council meetings. USTR officials told us that they have focused much of their effort to improve FTA monitoring and enforcement since our 2009 report on Peru, because of the extensive environmental commitments contained in the Peru FTA and the forest sector annex and the associated work underway to enable Peru's compliance. The officials stated that this focus was in response to input from U.S. environmental stakeholders about what is important, as well as resource limitations. USTR developed several mechanisms for monitoring compliance with, and implementation of, commitments in the environmental chapter and Annex for Forest Sector Governance of the Peru FTA. For example, the Subcommittee on Forest Sector Governance was established under the annex to facilitate cooperation for activities specified in the annex and to provide a forum for the United States and Peru to share views and information on any matter arising under the annex. The annex includes specific commitments, such as the development of a system that will track and verify the chain of custody for wood harvested in Peru's forests, and the supervision and issuance of permits for timber species covered by CITES. The subcommittee has met on several occasions. USTR formally reviews the progress that Peru and the United States have made in ensuring effective implementation of, and compliance with, commitments in the environmental chapter of the Peru FTA in the Environmental Affairs Council, according to USTR. The environmental chapter established the Environmental Affairs Council and states that it should convene at least yearly, unless otherwise determined by the United States and Peru.
Officials in Peru created a matrix that tracks the status of implementation of commitments from the environmental chapter and the annex, and they share this information with the public and with USTR. According to USTR officials, they use information from this matrix to help them oversee how Peru is addressing issues outlined in the forest sector annex of the FTA. In addition, according to these officials, USTR coordinates with State, USAID, and U.S. Forest Service staff in Peru to corroborate the information contained in the matrix. Stakeholders in the environmental and forestry sectors can access the matrix to verify the validity of the information published by the Peruvian government. Stakeholders, including environmental groups, with which we met in Peru said that they had taken such opportunities to seek clarification, request more robust steps, and encourage progress. Nevertheless, an NGO official told us that the timetable for action is not precisely defined and that progress is not meeting officials' expectations. The lack of time frames is discussed further later in this report. Following the entry into force of the FTA, the President established the Interagency Committee on Trade in Timber Products from Peru (Timber Committee). The Timber Committee has authority to oversee operation of the agreement's forest sector annex, including requests to the government of Peru to conduct audits of particular exports and producers in Peru and verifications of particular export shipments to determine compliance with Peruvian law, according to U.S. agencies. The Timber Committee is chaired by USTR and composed of senior officials from the Departments of State, Justice, Interior, and Agriculture. Representatives from the Department of Homeland Security and USAID serve as observers.
In April 2012, an environmental group petitioned USTR to exercise its authority under the Peru FTA Annex on Forest Sector Governance to request that Peru conduct an audit or verification of certain timber shipments, producers, and exporters of big-leaf mahogany and Spanish cedar that it considered to have been illegally harvested and exported to the United States. The environmental group also released an investigative report alleging the systematic export of timber from Peru to the United States that it claimed was illegally harvested, transported, and traded. According to USTR, the Timber Committee investigated the allegations but declined to exercise its authority under provisions in the forest sector annex to request that the government of Peru conduct audits and verifications of certain exporters, citing a significant decline in reported exports of these species and actions taken by the government of Peru in response to the investigation, among other things. However, to address challenges highlighted by the review and contribute to ongoing reform efforts undertaken by Peru, the committee decided to take the following actions: seek agreement with Peru on specific actions it will undertake to address challenges Peru faces regarding management of big-leaf mahogany and Spanish cedar; target U.S. capacity-building resources to assist Peru to carry out such actions; and regularly monitor Peru's progress. In January 2013, less than 1 month after the Timber Committee issued its decision, USTR staff traveled to Peru and reached agreement on a bilateral plan to address specific challenges in Peru's forestry sector. The bilateral plan covers areas addressed in the forest sector annex, but with greater specificity, reflecting challenges identified by the Timber Committee in its review of the petition.
For example, the bilateral plan includes details not included in the forest sector annex, such as expanding Internet connectivity of transportation checkpoints to improve systems to track and verify the chain of custody of timber products. According to the plan, the targeted set of actions includes:

- strengthening physical inspections of big-leaf mahogany and Spanish cedar contained in harvesting plans prior to their approval, in accordance with Peru's forest and wildlife laws and regulations;
- strengthening accurate harvest plan development and implementation by improving the capacity of forest sector stakeholders, such as forest engineers and native communities, among others;
- ensuring timely criminal and administrative proceedings to sanction any party who violates Peru's forestry and wildlife laws;
- improving systems to track and verify the chain of custody of timber exports of big-leaf mahogany and Spanish cedar; and
- strengthening the implementation of Peru's National Anti-Corruption Forest and Wildlife Sector Plan.

Federal internal control standards call for ongoing monitoring, including establishing performance indicators and time frames to track performance over time. Although the bilateral plan identifies a targeted set of actions for Peru to undertake to address challenges in its forestry sector, the lack of performance indicators and time frames for completing actions precludes USTR from having a clear understanding of the extent to which Peru is meeting its commitments in the bilateral plan. For example, USTR was unable to provide us data on the extent to which Peru has improved the verification of the chain of custody of timber exports. Moreover, an NGO official stated that stakeholders and the public do not know the extent to which Peru is meeting its commitments because the plan includes no time frames or performance indicators. USTR has taken some important steps to strengthen monitoring.
In 2012, USTR established a subcommittee for monitoring partner countries' compliance with, and implementation of, their FTA commitments. In our 2009 report, we said that USTR lacked plans to enforce, monitor, and report on progress under the FTAs, and we recommended that USTR take steps to improve its plans. In April 2013, the subcommittee developed a plan for monitoring partner countries' compliance with commitments across all agreements, according to USTR. More specifically, the goal of the plan is to assist the monitoring subcommittee in identifying any positive or negative developments, and to develop strategies to address any concerns, according to USTR. The monitoring plan consists of four elements: (1) gathering facts from relevant and reliable sources, (2) assessing and evaluating the information obtained, (3) identifying and prioritizing issues, and (4) developing a strategy for addressing priority issues. USTR began implementing the first element of its monitoring plan—gathering facts from relevant and reliable sources—in 2013, after it requested that U.S. embassies in partner countries provide information, in reporting cables, on the major environmental developments that had occurred in those countries in the previous 5 years. USTR officials stated that the subcommittee meetings, held in December 2013 and April 2014, featured an initial assessment of information collected from all FTA partners in the reporting cables prepared in response to USTR's request (step 2). USTR indicated that it has begun to identify issues (step 3) and in some cases has begun to take steps to address concerns (step 4), such as by gathering additional information, engaging in discussions with FTA partners, raising issues in meetings of institutional bodies, or working on targeted capacity building. According to USTR officials, they are executing their monitoring plan on a partner-by-partner basis.
For example, USTR indicated that, based on information learned through fact gathering, it has raised several issues in recent Environmental Affairs Council meetings with Colombia, Panama, and the CAFTA-DR partner countries, as well as in technical discussions with Peru. Further, it is working to develop an updated environmental cooperation work plan with Oman. Although USTR's plan for monitoring FTA environmental chapter implementation calls for processes such as developing strategies and solutions on a partner-by-partner basis, as previously discussed, federal internal control standards call for monitoring that includes establishing indicators and time frames to track performance over time and to ensure that the priorities identified through monitoring are promptly resolved. USTR's monitoring process is missing these key elements. USTR officials told us that FTA-related environmental issues do not lend themselves to monitoring in the form of identifying performance indicators or designating a specific completion date for measuring progress. However, as noted above, the Government Performance and Results Act Modernization Act of 2010 requires agencies to provide a means for assessing performance. Although the matrix Peru has developed is useful for tracking the status of steps Peru is supposed to take under its FTA environmental chapter and forest sector annex, use of the matrix does not satisfy the internal control criteria calling for performance indicators and time frames. Establishing performance indicators and time frames would improve USTR's ability to monitor progress and ensure that each partner country is meeting its FTA environmental commitments. Chile, El Salvador, Guatemala, and Peru continue to face environmental challenges, including limited technical capacity and enforcement resources.
Chile has taken significant steps to meet its FTA obligations since 2009, while concerns remain about Peru's capacity to enforce protection of endangered timber species and address emerging deforestation threats to the Amazon. Given the decline in funding for environmental cooperation activities in recent years, it is particularly important that State and other U.S. agencies target resources in ways that will most effectively assist partner countries in addressing continued and emerging environmental challenges. USTR has taken several steps to improve monitoring since we issued our 2009 report, but its plan for monitoring all 20 FTA partner countries' compliance and the U.S.–Peru bilateral plan to address specific challenges in Peru's forestry sector lack time frames and performance indicators to assess progress. Notably, USTR is unable to determine the impact of the verification system in Peru or to assure stakeholders and the public that Peru is meeting its commitments in the bilateral action plan. The lack of time frames and indicators also limits USTR's ability to provide accurate information on the extent to which all FTA partners are meeting their environmental commitments. In addition, information on the status of each FTA partner's progress in meeting its environmental commitments would enable USTR to provide guidance on where State and other implementing agencies should best target resources. Improved monitoring would help ensure that expanded trade does not come at the expense of environmental protection.
To enhance its ability to monitor partner compliance with FTA environmental commitments and provide timely and useful information to help target assistance where it is most needed, USTR should:

- establish time frames and develop performance indicators to assess the extent to which Peru's actions are meeting the commitments of the U.S.–Peru bilateral action plan to address specific challenges in Peru's forestry sector, and
- work with its interagency monitoring subcommittee to establish time frames and performance indicators to implement its plan for enhanced monitoring of implementation of FTA environmental commitments across all FTA partner countries.

We provided a draft of this report to State and USTR, as well as to USAID, Interior, the Department of Agriculture's U.S. Forest Service, EPA, the Department of Commerce's NOAA, and Justice, for review and comment. We received technical comments from State, USTR, Interior, Justice, and NOAA, which we incorporated as appropriate. The Department of Agriculture's U.S. Forest Service provided written comments, which we have reprinted in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of State, the U.S. Trade Representative, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Kimberly M. Gianopoulos at (202) 512-8612 or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To examine steps selected partner countries have taken with U.S.
assistance to implement free trade agreement (FTA) environmental commitments, we analyzed the structure and provisions of the environmental chapters of the 11 FTAs that entered into force from 2003 through 2013 to identify the range of their provisions and the procedures, if any, for the receipt and consideration of public environmental submissions under those FTAs. We also examined in detail the environmental chapters in the Chile FTA, the Dominican Republic–Central America–United States Free Trade Agreement (CAFTA-DR), and the Peru Trade Promotion Agreement to identify similarities and differences among their key environmental provisions. We interviewed officials from the Office of the United States Trade Representative (USTR), Department of State (State), and United States Agency for International Development (USAID) in Washington, D.C., and State, USAID, host government, and nongovernmental organization (NGO) officials in two CAFTA-DR countries, El Salvador and Guatemala, as well as in Chile and Peru. To discuss illustrative examples of assistance provided by U.S. agencies to help partners meet FTA environmental commitments, we interviewed officials from State, USTR, the Interior Department (Interior), the Environmental Protection Agency (EPA), the Department of Justice (Justice), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Forest Service prior to making field visits to El Salvador, Guatemala, Chile, and Peru, where we interviewed State, USAID, and U.S. Forest Service staff and host government, NGO, and private sector officials. We also visited a number of project sites in each of the four countries. For example, in El Salvador we visited the wastewater reference laboratory that EPA helped set up, as well as several private firms that participated in cleaner production activities, including a boutique hotel, a producer of dairy products, and a dairy and pig farm.
Among the project sites we visited in Guatemala were the Cleaner Production Center, the ARCAS Wildlife Rescue Center, and a sawmill in the Petén region. In Chile, we met with World Wildlife Fund officials and a group of olive oil producers participating in a clean production program. Finally, we visited a number of project sites in the Ucayali province in Peru. For example, we visited a site where logs are transported from the Amazon region down the river to sawmills for processing. We witnessed a demonstration of USAID's and the U.S. Forest Service's prototype of an electronic tracking system. In addition, we met with regional government officials and officials implementing the USAID/Peru-Bosque project to enhance public participation by helping local groups and indigenous people comment on draft regulations for the new Forestry and Wildlife Law. To examine selected partner mechanisms to process public environmental submissions, we reviewed the environmental chapters of FTAs that entered into force from 2003 through 2013 to determine which agreements required the establishment of an independent body, such as a secretariat, to receive and respond to citizen submissions. We interviewed officials at State, USTR, and USAID. In addition, we interviewed officials at the CAFTA-DR Secretariat for Environmental Matters in Guatemala City, Guatemala, and analyzed data obtained from the CAFTA-DR Secretariat to determine the number of citizen submissions it had received and the number of factual records published since the establishment of the CAFTA-DR Secretariat. To determine the status of negotiations for establishing the Peru Secretariat for Environmental Matters, we interviewed officials from USTR, State, and the Organization of American States, and submitted questions to officials in the government of Peru. To examine U.S.
resources to assist partners in implementing environmental commitments, we collected budget data on funding of FTA environmental activities from State and USAID. To show trends in total funding for environmental activities by FTA, we aggregated funding for each FTA across agencies and plotted total funding over the years in which the FTA was in effect. We depicted total resources for each FTA by adding available funding data across both agencies and years in which the FTA was active. We also used FTA-level budget data to show the allocation of funding across programs. We asked USTR officials for estimates of staff resources dedicated to FTA implementation and monitoring. We assessed the reliability of the data by (1) performing electronic testing for errors in accuracy and completeness and (2) interviewing agency officials knowledgeable about the data sources. We determined that the data were sufficiently reliable for illustrating trends in FTA-related environmental assistance across agencies. To examine how State monitors FTA environmental cooperation activities and how USTR monitors partner compliance with FTA environmental commitments, we interviewed officials at State and USTR and at implementing agencies, including EPA, Justice, Interior, USAID, the U.S. Forest Service, and NOAA. In addition, we interviewed officials from the Organization of American States and Le Groupe-conseil Baastel (Baastel), a private firm; both organizations have assisted State with collecting, analyzing, and reporting the results of cooperation activities in nine partner countries under four agreements. Furthermore, we reviewed documentation, such as data collection templates, monitoring reports, and performance management plans from State, the Organization of American States, Baastel, and supporting U.S. agencies, such as EPA and USAID. We interviewed officials at USTR and reviewed documentation, such as its plan to monitor compliance and implementation.
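The two-step aggregation described above (first summing each FTA's funding by year across agencies, then totaling per FTA across years) can be sketched as follows; the record layout and dollar figures are hypothetical illustrations, not GAO's actual budget data.

```python
from collections import defaultdict

# Hypothetical funding records: (implementing agency, FTA, fiscal year, dollars).
# These figures are illustrative only, not the actual State/USAID budget data.
records = [
    ("State", "CAFTA-DR", 2009, 12_000_000),
    ("USAID", "CAFTA-DR", 2009, 8_000_000),
    ("State", "CAFTA-DR", 2013, 3_600_000),
    ("State", "Peru", 2009, 9_000_000),
    ("USAID", "Peru", 2013, 3_700_000),
]

# Step 1: aggregate funding for each FTA across agencies, by year
# (the series used to plot trends over the years the FTA was in effect).
by_fta_year = defaultdict(int)
for agency, fta, year, dollars in records:
    by_fta_year[(fta, year)] += dollars

# Step 2: total resources per FTA across both agencies and all years.
total_by_fta = defaultdict(int)
for (fta, _year), dollars in by_fta_year.items():
    total_by_fta[fta] += dollars

print(total_by_fta["CAFTA-DR"])  # → 23600000
```

The same grouping, keyed on program instead of year, would yield the allocation of funding across programs mentioned above.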
Furthermore, we reviewed cables detailing the status of the environment in partner countries over the past 5 years. USTR requested this information—through State—from U.S. embassies in partner countries to collect information as part of its plan to monitor progress under the provisions. We interviewed officials at USTR to identify mechanisms that it uses—such as the Interagency Timber Committee—to monitor compliance with, and implementation of, provisions in the environmental chapter of the Peru Trade Promotion Agreement and its Annex on Forest Sector Governance. We also reviewed documentation, such as the environmental chapter of the Peru Trade Promotion Agreement and its Annex on Forest Sector Governance, the matrix detailing the status of implementing provisions provided to USTR and the public by the government of Peru, and the Joint Communiqué of the Meetings of the Governments of the United States and Peru Regarding Forest Sector Governance. We conducted this performance audit from May 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kim Frankena (Assistant Director), Tom Zingale (Analyst-in-Charge), Kyerion Printup, and John O'trakoun made key contributions to this report. Sada Aksartova, Juan P. Avila, Karen Deans, David Dornisch, Ernie Jackson, and Oziel Trevino provided additional assistance.
The United States has signed free trade agreements that lower barriers to trade with 20 countries, including 5 Central American countries and the Dominican Republic. Reflecting Congress's interest in balancing commercial interests with environmental protection, the United States and FTA partners have agreed to strengthen environmental protection. In 2009, GAO recommended improved FTA monitoring. GAO was asked for an update. This report examines, among other things: (1) steps selected partners have taken, with U.S. assistance, to implement FTA environmental commitments; (2) resources to assist partners in implementing environmental commitments; and (3) U.S. agency monitoring of cooperation activities and partner compliance with their FTA environmental commitments. GAO reviewed FTA environmental provisions and cooperation agreements; analyzed U.S. funding data for cooperation activities from fiscal years 2003 through 2013; and evaluated documentary and testimonial evidence. GAO visited Guatemala and El Salvador, two of six CAFTA-DR countries, and Peru and Chile, and met with U.S., host government, private sector, and NGO officials. GAO selected these countries because they reflect a range of per capita income, U.S. assistance, environmental progress, and challenges. The four free trade agreement (FTA) partners that GAO selected for this review all passed environmental laws and established institutions to improve environmental protection, in line with their FTA commitments to strive to improve their laws on and levels of environmental protection. For example, Chile created enforcement agencies and modernized its system for evaluating the environmental impact of projects; El Salvador launched a National Environmental Strategy; and Guatemala created a unit to verify compliance with natural resource protections. According to U.S., Peruvian, and nongovernmental organization (NGO) officials, U.S. 
assistance has helped Peru improve management and monitoring of its forest resources. However, each FTA partner continues to face challenges in capacity and enforcement of environmental protection.

[Photo: Peruvian Officials Conduct Timber Inspection]

U.S. resources for cooperation activities have declined since 2009 because of shifting priorities. Peru and countries in the Dominican Republic–Central America–United States Free Trade Agreement (CAFTA-DR) received 90 percent of the roughly $151 million of total funding for FTA cooperation activities from fiscal years 2004 through 2013. CAFTA-DR countries received over $87 million from fiscal years 2004 through 2013, and Peru received nearly $49 million from fiscal years 2009 through 2013. However, in fiscal year 2013, U.S. funding for environmental cooperation activities to CAFTA-DR countries was 18 percent of its 2009 level, and funding for Peru FTA activities was 41 percent of its 2009 level. The Department of State has improved monitoring of environmental cooperation activities since 2009, and the Office of the U.S. Trade Representative (USTR) developed a plan for monitoring partner compliance with FTA environmental commitments. However, USTR's monitoring lacks time frames and performance indicators to measure partner progress in meeting FTA environmental commitments. In addition, the U.S.–Peru bilateral action plan addresses specific challenges in Peru's forestry sector and identifies actions for Peru to take, but does not include time frames and indicators. The lack of time frames and performance indicators precludes stakeholders and the public from having a clear understanding of the extent to which Peru is meeting its commitments since agreeing to the terms of the bilateral action plan.
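The funding shares above are internally consistent; a quick arithmetic check using the rounded dollar amounts from the text:

```python
# Reported FTA environmental cooperation funding (millions of dollars,
# rounded as in the text; "over $87 million" and "nearly $49 million"
# make this check approximate).
total_funding = 151   # all FTA cooperation activities, FY2004-2013
cafta_dr = 87         # CAFTA-DR countries, FY2004-2013
peru = 49             # Peru, FY2009-2013

share = (cafta_dr + peru) / total_funding
print(round(share * 100))  # → 90, matching the reported 90 percent
```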
GAO recommends that USTR establish time frames and indicators to assess the extent to which Peru is meeting commitments in the bilateral action plan, and establish time frames and indicators to implement its plan for enhanced monitoring across all FTA partner countries.
Medicare traditionally has paid DMEPOS suppliers through fee schedule amounts based on suppliers' historical charges to Medicare. The purpose of CBP is to improve how Medicare payment amounts are set by paying only competitively selected contract suppliers amounts based on competitive bids, thereby providing Medicare program savings and reducing Medicare beneficiaries' out-of-pocket expenses for DMEPOS items and services. CMS and its CBP implementation contractor—Palmetto GBA—administer and implement the CBP and its bidding rounds. In each competitive bidding area included in a CBP bidding round, suppliers can bid for one or more product categories' CBP-covered items. The suppliers' bids are evaluated based on the supplier's eligibility, financial status, and bid prices. From this evaluation, the CBP payments—referred to as single payment amounts—are determined for each CBP-covered item in each competitive bidding area, and the winning suppliers are selected. Winning suppliers are then offered CBP contracts. If the supplier accepts its contract offer, it agrees to accept Medicare assignment on the CBP-covered items for the product category and in the competitive bidding area involved, and to be paid the relevant CBP single payment amounts. CMS was also required to take steps to ensure that small suppliers could be awarded CBP contracts, and accordingly set a target that 30 percent of the qualified suppliers in each product category in each competitive bidding area would be small suppliers as defined for CBP. Where the small supplier target is not initially met, CMS may award additional small suppliers CBP contracts after the agency has determined the number of suppliers needed to meet or exceed CMS's estimated beneficiary demand. To help ensure beneficiary access and choice, CMS tries to award at least five contracts in each product category in each competitive bidding area. CMS is required by law to recompete the CBP contracts at least once every three years.
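The bid evaluation and payment determination described above can be illustrated with a stylized sketch. The bids, capacities, demand figure, and the use of the median winning bid as the single payment amount are illustrative assumptions, not CMS's actual evaluation rules.

```python
# Stylized sketch of a CBP competition for one product category in one
# competitive bidding area. All inputs are hypothetical, and the median
# rule below is an assumption for illustration only.
from statistics import median

# (supplier, bid price per item, annual capacity in items) -- hypothetical
bids = [
    ("A", 80.0, 400),
    ("B", 95.0, 300),
    ("C", 110.0, 500),
    ("D", 125.0, 200),
]
estimated_demand = 1000  # hypothetical estimated beneficiary demand

# Rank eligible bids by price and accept suppliers until cumulative
# capacity meets or exceeds estimated demand.
winners, capacity = [], 0
for supplier, price, cap in sorted(bids, key=lambda b: b[1]):
    if capacity >= estimated_demand:
        break
    winners.append((supplier, price))
    capacity += cap

# Single payment amount derived from the winning bids (assumed median).
single_payment_amount = median(price for _, price in winners)
print([s for s, _ in winners], single_payment_amount)  # → ['A', 'B', 'C'] 95.0
```

Note how supplier D is excluded once cumulative capacity covers demand, mirroring the text's point that only competitively selected suppliers are offered contracts and paid the single payment amount.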
(See fig. 1 for CBP's legislative history and program implementation timeline.) For CBP, CMS has defined a small supplier as one that generates gross revenue of $3.5 million or less in annual receipts, including both Medicare and non-Medicare revenue. A qualified supplier is a bidder that has met certain requirements, including having been found financially sound; its bids will be used to determine the single payment amounts and to select the contract suppliers. Pub. L. No. 105-33, § 4319(a), 111 Stat. 251, 392-4 (1997) (codified, as amended, at 42 U.S.C. § 1395w-3). Pub. L. No. 108-173, § 302(b), 117 Stat. 2066, 2224-30 (2003) (codified, as amended, at 42 U.S.C. § 1395w-3). Items and services covered by the competition were durable medical equipment (DME) and related supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies. Pub. L. No. 110-275, § 154(a)(2), 122 Stat. 2494, 2560-3 (2008) (codified, as amended, at 42 U.S.C. § 1395w-3). The Medicare Improvements for Patients and Providers Act of 2008 and implementing regulations require CMS to notify suppliers of missing financial documentation if their bids are submitted by the covered document review date, which is the later of (1) 30 days before the final date for the close of the bid window or (2) 30 days after the bid window opens. For the round 1 rebid, CMS was required to notify eligible suppliers of missing financial documentation within 45 days after the end of the covered document review date. For other competitive bidding program (CBP) rounds, CMS is required to notify eligible suppliers of missing financial documentation within 90 days after the end of the covered document review date. The national mail-order competition includes all parts of the United States, including the 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, Guam, and American Samoa. The CBP rounds include: Round 1 rebid.
CMS awarded contracts to 356 contract suppliers for the provision of DME items and services in nine product categories in nine competitive bidding areas. The contracts took effect on January 1, 2011, and expired after three years on December 31, 2013, except for the mail-order diabetic testing supplies contracts, which expired on December 31, 2012. Round 2. Round 2 expands CBP to another 100 competitive bidding areas in 91 metropolitan statistical areas. The single payment amounts for covered items were effective July 1, 2013, under round 2 contracts. The round 2 product categories are the same as those in the round 1 rebid except for the addition of the negative pressure wound therapy (NPWT) category, the deletion of the complex power wheelchairs and mail-order diabetic supplies categories, and the extension of the support surfaces category to all round 2 competitive bidding areas. The round 2 contracts are for a term of 3 years. National mail-order diabetic testing supplies program. The CBP national mail-order diabetic testing supplies program competition was conducted at the same time as round 2, and its competitively determined single payment amounts were effective July 1, 2013. Unlike in the round 1 rebid, suppliers bidding for the national program had to demonstrate that their bids would cover at least 50 percent, by sales volume, of all types of diabetic testing strips on the market. These contracts are for a term of 3 years. Non-mail-order Medicare payments are the same as the mail-order single payment amounts for the CBP-covered items. Round 1 recompete. In anticipation of the expiration of the round 1 rebid contracts, in 2012 CMS recompeted contracts for the nine round 1 rebid competitive bidding areas, referred to as the round 1 recompete.
The round 1 recompete’s six product categories differ from the round 1 rebid categories by adding infusion pumps and NPWT pumps, deleting complex wheelchairs, and creating a new category that includes home equipment, such as hospital beds and commode chairs. The recompete contracts have a 3-year term, with an effective date of January 1, 2014. A contract supplier may no longer participate in CBP if CMS terminates its contract, if it voluntarily withdraws from Medicare, or if it has experienced a certain type of change of ownership. CMS can terminate a contract supplier’s CBP contract if the supplier fails to meet its contractual obligations. In that case, CMS may request that the supplier submit a corrective action plan, suspend or terminate the contract, preclude it from participating in CBP, or revoke its billing number. A contract supplier that has its CBP contract terminated may continue to operate as a Medicare supplier and submit Medicare claims for non-CBP-covered items and services. Contract suppliers may choose to voluntarily withdraw from Medicare, and thus no longer be a Medicare supplier. Contract suppliers may also have a change in ownership that affects their participation in CBP, but their CBP contracts may be transferred only under certain circumstances. A change in ownership, also referred to as a CHOW, may result in either (1) a new entity or company that did not exist before the merger or acquisition transaction, or (2) a successor entity or company that exists before the transaction, merges with or acquires a contract supplier, and continues to exist as it did before the transaction. If a contract supplier is negotiating an ownership change, the supplier must notify CMS in advance, and CMS may award the CBP contract to the entity that merges with or acquires the contract supplier in certain circumstances. 
These circumstances include when the successor entity is acquiring the assets of the contract supplier and submits a signed agreement to CMS in advance of the acquisition stating that it will assume all obligations under the contract. Medicare beneficiaries residing in competitive bidding areas have several sources available to help them locate contract suppliers and receive assistance with CBP-related issues. CBP Online Contract Supplier Locator. To locate a CBP contract supplier, beneficiaries can use the CMS online supplier locator tool on CMS’s Medicare website. The locator tool contains the names of the contract suppliers in each competitive bidding area, as well as the product categories for which they furnish CBP-covered items. Contract suppliers are responsible for submitting information regarding the specific brands of items they will furnish in the upcoming quarter, and CMS uses this information to update the supplier locator. 1-800-MEDICARE Inquiries. CMS has directed beneficiaries to call its 1-800-MEDICARE beneficiary help line with CBP questions—referred to by CMS as inquiries. Callers are assisted by trained CBP customer service representatives (CSR), who use several scripts to answer general questions about CBP and specific product categories and to assist beneficiaries in finding CBP suppliers. If a beneficiary’s inquiry cannot be addressed by the scripts, the CSR will forward the inquiry to an advanced-level CSR, who will research the issue and respond to the beneficiary. Palmetto GBA and CMS regional offices. Palmetto GBA, the CBP implementation contractor, investigates all beneficiary and supplier complaints related to alleged CBP contract violations. In addition, Palmetto GBA provides CBP-related information and updates through its website. Local Palmetto GBA staff are stationed in the competitive bidding areas and work with CMS regional staff to monitor CBP activities and identify and address any emerging issues. 
CMS also uses its regional offices as the focal point for calls that cannot be resolved by 1-800-MEDICARE; for example, the offices may assist when a CSR is unable to help a beneficiary find a contract supplier. Competitive Acquisition Ombudsman (CAO). The CMS CAO was created to respond to CBP-related complaints and inquiries made by suppliers and individuals, and works with CMS officials, contractors, and Palmetto GBA to resolve them. The CAO is required to submit an annual report detailing CBP-related activities to Congress. CMS has implemented several activities to monitor whether beneficiary access or satisfaction has been affected by the implementation of CBP. Inquiries and Complaints to 1-800-MEDICARE. CMS tracks all CBP-related inquiries to 1-800-MEDICARE. All calls are first classified as inquiries, and CMS defines as CBP complaints only those inquiries that cannot be resolved by any 1-800-MEDICARE CSR and are elevated to another entity, such as Palmetto GBA, CMS’s regional offices, or the CAO, for resolution. Beneficiary Satisfaction Surveys. CMS conducted pre- and post-implementation surveys to measure beneficiary satisfaction with CBP’s round 1 rebid. The pre-implementation survey was conducted from June to August 2010, the first post-implementation survey was conducted from August to October 2011, and the second post-implementation survey was conducted in June 2013. CMS surveyed beneficiaries in the nine round 1 rebid competitive bidding areas, as well as the nine comparator areas. National Claims History. CMS conducts daily monitoring of national Medicare claims data to identify utilization trends, monitor beneficiary access, address aberrations in services, and target potential fraud and abuse. 
CMS tracks health outcomes—such as hospitalizations, emergency room visits, physician visits, admissions to skilled nursing facilities, and deaths—for beneficiaries likely to use a CBP-covered product or who have used a CBP-covered product, in both competitive bidding areas and comparator areas to determine whether health outcomes in the competitive bidding areas remain consistent with national trends. CMS posts quarterly reports of these health outcomes on its website. Form C. Each quarter, CMS requires contract suppliers to submit a Form C that lists the specific CBP-covered DME items they plan to furnish the following quarter—including the brand names and equipment models. According to Palmetto GBA, this information is used to update the Medicare supplier directory tool and to evaluate beneficiary access to competitively bid items, as well as the quality of items and services. Secret shopping. CMS has conducted secret shopping calls in which individuals posed as beneficiaries and requested items, such as specific diabetic supplies, from contract suppliers to determine whether the suppliers offer the supplies they claim to furnish. Secret shopping is conducted on a limited, ad hoc basis and may be done in response to specific complaints received, or to evaluate certain contract suppliers and monitor their performance and compliance with the terms of their CBP contracts. Our analysis of Medicare claims data found that for five of the six product categories we examined, the number of distinct beneficiaries furnished CBP-covered items generally decreased more in the competitive bidding areas than in the comparator areas in each month of 2011 and 2012 compared to the same month of 2010. CMS continued several ongoing monitoring activities and reported that the CBP round 1 rebid did not affect beneficiary access and satisfaction in its second year. 
In addition, several Medicare beneficiary advocacy groups that we interviewed did not report widespread access issues among their members. Our analysis of Medicare claims data found that the number of distinct beneficiaries furnished CBP-covered items generally decreased more in the nine competitive bidding areas than in the nine comparator areas in each month of 2011 and 2012 compared to the same month of 2010 for five of the six product categories we analyzed. However, the larger decreases in the number of beneficiaries furnished CBP-covered items in the competitive bidding areas do not necessarily indicate that CBP-covered beneficiaries did not receive needed DME. As CMS has reported, the CBP may have curbed previous inappropriate distribution of some CBP-covered items in competitive bidding areas. Our analysis found that fewer beneficiaries received one or more enteral product category items in each month of 2011 and 2012 compared to the same month of 2010. In 2012, the declines were roughly equivalent in the competitive bidding areas and the comparator areas. (See fig. 2.) For example, in the competitive bidding areas, the number of CBP-covered beneficiaries who received one or more items in May 2012 decreased by about 4 percent compared to May 2010. In the comparator areas, the number of beneficiaries who received one or more items was about 6 percent lower in May 2012 compared to May 2010. Our analysis found that fewer beneficiaries received one or more hospital bed product category items in all months of 2011 and 2012 compared to the same month of 2010 in the competitive bidding areas, with consistently smaller declines in the comparator areas over the same period. (See fig. 3.) 
Our analysis of Medicare claims data found that, in each month of 2011 and 2012 compared to the same month of 2010, the number of beneficiaries who received one or more oxygen product category items decreased more in the competitive bidding areas than in the comparator areas, although there were substantial declines in both types of areas. (See fig. 4.) For example, compared to May 2010, the number of beneficiaries furnished one or more oxygen product category items decreased by about 9 percent in May 2011 and by about 22 percent in May 2012 in the competitive bidding areas. For the comparator areas, the number of beneficiaries furnished one or more items decreased by about 5 percent in May 2011 and by about 16 percent in May 2012. Our analysis found that substantially fewer beneficiaries in the competitive bidding areas received one or more walkers product category items in each month of 2011 and 2012 compared to the same month of 2010, although there were also declines in the comparator areas. (See fig. 5.) For example, compared to May 2010, the number of CBP-covered beneficiaries who received one or more walkers product category items was about 26 percent lower in May 2011 and about 24 percent lower in May 2012. In the comparator areas, compared to May 2010, 6 percent fewer beneficiaries received one or more walkers product category items in May 2011 and about 5 percent fewer beneficiaries received one or more of these items in May 2012. We found that fewer beneficiaries in the competitive bidding areas received one or more standard power wheelchair product category items in each month of 2011 and 2012 compared to the same month of 2010. (See fig. 6.) For example, compared to May 2010, about 16 percent fewer beneficiaries in the competitive bidding areas received one or more standard power wheelchair product category items in May 2011 and about 15 percent fewer in May 2012. 
We did not include comparable information for the comparator areas because CMS changed the payment policy for standard power wheelchairs in non-competitive bidding areas only, making a comparison to the competitive bidding areas difficult. This change in payment policy, which was effective January 1, 2011, eliminated the option of lump-sum purchase payment for standard power wheelchairs in all non-competitive bidding areas. As it did in 2011, CMS continued several ongoing activities to monitor CBP’s effects on beneficiaries in 2012. CMS’s activities included monitoring the number of CBP-related inquiries and complaints made to 1-800-MEDICARE and the health outcomes of CBP-covered beneficiaries in competitive bidding areas. CMS reported that the implementation of the CBP round 1 rebid did not result in beneficiary access issues in the first two years of the program. In addition, representatives of several Medicare beneficiary advocacy groups that we interviewed did not report that widespread access issues occurred. According to data provided by CMS, 1-800-MEDICARE received a total of 44,249 CBP-related questions—referred to by CMS as inquiries—in 2012, fewer than the 127,466 CBP-related inquiries reported in 2011. The total number of quarterly CBP-related inquiries to 1-800-MEDICARE ranged from a high of 56,941 in the first quarter of 2011 (17,672 product-related inquiries plus 39,269 general CBP inquiries) to a low of 7,969 in the fourth quarter of 2012 (4,119 product-related inquiries plus 3,850 general CBP inquiries). (See fig. 8.) The majority of total inquiries in both 2011 and 2012 were general in nature; for example, CMS officials told us that inquiries were related to questions about the program or finding a contract supplier. About 2 million beneficiaries reside in CBP round 1 rebid competitive bidding areas; in 2012, this amounted to approximately 1 inquiry to 1-800-MEDICARE for every 45 beneficiaries. 
As was also the case in 2011, CMS data showed that the majority of all CBP product-category-specific inquiries to 1-800-MEDICARE—over 13,000 in 2012—were related to mail-order diabetic supplies. The enteral and support surfaces product categories received the fewest inquiries. (See fig. 9.) All calls to 1-800-MEDICARE are initially classified as inquiries and are only recorded as complaints if they cannot be resolved by a CSR. In 2012, CMS classified 43 CBP-related calls to 1-800-MEDICARE as complaints, a decline from 151 complaints in 2011. Among the 43 complaints, 13 were specific to the walkers product category, more than twice the number of complaints associated with any other product category. Twelve of the 13 complaints were related to a specific walker brand and model that can be billed under HCPCS code E0147, which has the highest single payment amount of all CBP-covered HCPCS codes included in the walkers product category. Some complainants reported that contract suppliers would not provide the specific walker brand and model prescribed by beneficiaries’ physicians because the CBP single payment amount is lower than the cost of the item. According to Palmetto GBA data, in response to one complaint, it conducted secret shopping calls to two contract suppliers and was told by both that they did not carry the specific walker brand and model and could not obtain it. After Palmetto GBA explained the terms of their contracts, both contract suppliers agreed to provide it. Half of these 12 complaints originated in the Miami competitive bidding area, where there was a decline in utilization for these walkers. According to CMS, the agency continues to monitor national Medicare claims data to identify utilization trends, monitor health outcomes and beneficiary access, address aberrations in services, and target potential fraud and abuse. 
As part of this effort, CMS monitors a range of health outcomes—including deaths, hospitalizations, emergency room visits, physician visits, and admissions to skilled nursing facilities—for beneficiaries likely to use a CBP-covered item or who have used a CBP-covered item, in both competitive bidding areas and their comparator areas. In both 2011 and 2012, CMS’s monitoring of health outcomes from national claims data indicated that CBP-covered beneficiaries continued to have access to necessary and appropriate CBP-covered items and supplies, and that health outcomes in the competitive bidding areas were consistent with national trends. However, as we previously reported, while these outcomes are reassuring, they may not reflect other outcomes that did not require physician, hospital, or emergency room visits, such as whether beneficiaries received the DME item they needed on time, or whether health outcomes were caused by problems accessing CBP-covered DME. CMS data show that the agency monitored beneficiary access by conducting more secret shopping calls in 2012 than it did in 2011—300 versus 32. According to that data, the highest number of secret shopping calls in 2012 involved the oxygen product category (109) and the second highest number involved the walkers product category (58). According to CMS officials, secret shopping calls were prompted by beneficiary and industry concerns expressed to CMS. For example, CMS officials told us that the agency received complaints that contract suppliers were not providing liquid oxygen equipment and specific walker models. According to these officials, when conducting secret shopper calls, CMS provides contract suppliers additional education on CBP and supplier quality standard requirements. CMS then conducts subsequent secret shopper calls to verify that the contract suppliers are adhering to the requirements. 
CMS conducted a pre-implementation survey in 2010 and a post-implementation survey in 2011 to measure beneficiary satisfaction with the CBP round 1 rebid’s first year. According to CMS data, the agency obtained responses from at least 400 beneficiaries in each of the nine competitive bidding areas and nine comparator areas to collect beneficiary satisfaction ratings for six questions related to the beneficiary’s initial interaction with DME suppliers, the training received regarding DME items, the delivery of the DME item, the quality of service provided by the supplier, the customer service provided by the supplier, and the supplier’s overall complaint handling. According to CMS data, results of the 2010 pre-implementation and 2011 post-implementation surveys showed that responses from beneficiaries were similar and generally positive in both the competitive bidding areas and comparator areas. CMS officials told us that CMS conducted a follow-up beneficiary satisfaction survey in June 2013 using the original survey questions and methodology, but as of November 20, 2013, survey results were not yet available. We interviewed representatives from several beneficiary advocacy groups about their members’ experiences with CBP, and whether they were aware of any CBP-related beneficiary access and choice issues that may have occurred among their members. The beneficiary groups represent beneficiaries with specific issues, such as those with diabetes and those with disabilities requiring wheelchairs. In general, these representatives either reported no or few concerns, or provided anecdotal examples of beneficiary access issues, such as difficulty obtaining wheelchair repairs or difficulty locating contract suppliers. They did not indicate that their CBP-covered beneficiary members had been negatively affected by widespread access issues or concerns in the first two years of the CBP round 1 rebid. 
For the round 1 rebid product categories we examined, a small number of contract suppliers accounted for a large portion of Medicare total allowed charges across 2011 and 2012. One contract supplier had a high percentage of the total market share for the standard power wheelchair product category across 2011 and 2012, but was terminated as a contract supplier in 2013. Few contract suppliers left the CBP through contract termination, voluntary withdrawal from Medicare, or a change in ownership. We examined how contract suppliers’ market share developed for six product categories in 2011 and 2012 and found that the trends for each product category were relatively consistent across the nine competitive bidding areas. For each product category, we illustrate typical market share development trends by showing examples from two competitive bidding areas. (See fig. 10 through fig. 21.) For five of the six product categories, we found that, in general, the top 4 suppliers—those with the highest individual Medicare total allowed charges across all quarters of 2011 and 2012—accounted for a large portion of the market in all competitive bidding areas, although the top 4 suppliers for each product category could vary by competitive bidding area. In our examples, the top 4 suppliers’ combined market share in the fourth quarter of 2012 ranged from 50 percent for the enteral product category in the Dallas competitive bidding area to 86 percent for the walkers product category in the Orlando competitive bidding area. Our analysis of Medicare claims data for the CPAP/RAD product category indicates that, in general, the market share among the top 4 contract suppliers increased steadily, the combined market share for the other contract suppliers remained relatively consistent with some small increases, and the non-contract suppliers’ combined market share decreased throughout 2011 and 2012. 
For example, in the Pittsburgh competitive bidding area, by the fourth quarter of 2012, the top 4 contract suppliers combined had about 63 percent of the market, while the other 10 contract suppliers combined had 35 percent of the market. (See fig. 10.) This is fairly similar to the contract supplier market share trend in the Cleveland competitive bidding area, where the top 4 contract suppliers combined had about 73 percent of the market and the other 8 contract suppliers combined had about 26 percent in the fourth quarter of 2012. (See fig. 11.) Our analysis of Medicare claims data for the enteral product category indicates that, in general, the market share of the top 4 contract suppliers and all other contract suppliers combined remained relatively consistent or increased throughout 2011 and 2012. For example, in the Cincinnati competitive bidding area, the top 4 contract suppliers combined had about 70 percent or more of the market share throughout 2011 and 2012. For that same time period, the other 10 contract suppliers combined generally had about 20 percent of the market in that area. (See fig. 12.) In the Dallas competitive bidding area, the top 4 contract suppliers combined had less of the market share—between about 43 and 55 percent in each quarter of 2011 and 2012—while the other 20 contract suppliers had more of the market share each quarter. (See fig. 13.) Our analysis of Medicare claims data for the hospital bed product category indicates that contract suppliers’ percentages of Medicare total allowed charges increased steadily throughout 2011 and 2012 as non-contract suppliers’ percentages of Medicare total allowed charges substantially decreased. In both the Riverside and Orlando competitive bidding areas, the top 4 contract suppliers accounted for more than 80 percent of the market by the fourth quarter of 2012, with the other contract suppliers totaling about 10 percent of Medicare total allowed charges in each of the areas. (See fig. 14 and fig. 15.) 
Our analysis of Medicare claims data for the oxygen product category indicates that the market share of the top 4 contract suppliers and all other contract suppliers combined remained relatively consistent or increased from the first quarter of 2011 to the fourth quarter of 2012. For example, in the Cleveland competitive bidding area, the top 4 suppliers had about 65 percent of the market in the first quarter of 2011 and about 71 percent of the market in the fourth quarter of 2012. (See fig. 16.) In the Kansas City competitive bidding area, the top 4 suppliers had about 71 percent of the market in the first quarter of 2011 and about 83 percent of the market in the fourth quarter of 2012. (See fig. 17.) Our analysis of Medicare claims data for the walkers product category indicates that the market share of the top 4 contract suppliers and all other contract suppliers combined remained relatively consistent throughout 2011 and 2012. For example, in the Pittsburgh competitive bidding area, the top 4 contract suppliers combined had at least 65 percent of the total market share in each quarter of 2011 and 2012. The other 11 contract suppliers combined had between about 20 and 30 percent of the market share each quarter over that time period. (See fig. 18.) The Orlando competitive bidding area showed a similar market share trend, in which the top 4 contract suppliers combined maintained at least 72 percent of the total market share in each quarter of 2011 and 2012 and the other 13 contract suppliers combined consistently had about 20 percent each quarter. (See fig. 19.) Our analysis of Medicare claims data for the standard power wheelchair product category indicates that the market share for the top contract supplier, The Scooter Store’s Alliance Seating & Mobility Division (The Scooter Store), was very high in all quarters of 2011 and 2012 across all competitive bidding areas. 
Specifically, The Scooter Store had the highest individual supplier percent of all CBP-covered Medicare total allowed charges across all quarters of 2011 and 2012 combined for the standard power wheelchair product category in eight of the nine competitive bidding areas, and the second highest in the ninth competitive bidding area. The Scooter Store’s individual percentages of all Medicare total allowed charges in the fourth quarter of 2012 were: Pittsburgh (82 percent), Orlando (81 percent), Miami (75 percent), Riverside (72 percent), Cleveland (62 percent), Dallas (60 percent), Charlotte (48 percent), Kansas City (41 percent), and Cincinnati (37 percent). In the Miami competitive bidding area, Medicare claims data show that The Scooter Store’s highest individual percentage of all Medicare total allowed charges was about 84 percent in the second quarter of 2011. (See fig. 20.) In the Riverside competitive bidding area, The Scooter Store’s highest individual percentage of all Medicare total allowed charges was about 72 percent in the fourth quarter of 2012. (See fig. 21.) In September 2013, CMS issued a termination notice, with an effective date of October 26, 2013, for The Scooter Store’s CBP round 1 rebid contract in all competitive bidding areas. Prior to issuing the termination notice, CMS removed all references to both The Scooter Store and its Alliance Seating & Mobility Division from all CBP round 1 rebid contract supplier lists on its website in March 2013. A CMS official told us that the removal occurred because of compliance issues identified with The Scooter Store’s round 1 rebid contract and that CMS began initiating the contract termination process at that time. According to CMS, it carefully scrutinizes CBP bidders to ensure that only qualified suppliers are selected to participate in the program; however, The Scooter Store had been the subject of allegations of fraud prior to being awarded a contract in both CBP’s round 1 and round 1 rebid. 
Specifically, in 2007, The Scooter Store entered into a civil settlement agreement with the U.S. Government to resolve several lawsuits and agreed to pay $4 million and relinquish its right to receive reimbursement for pending Medicare claims. In one of the lawsuits, the Government alleged that the company violated the civil False Claims Act and defrauded the United States by, among other things, enticing some beneficiaries to obtain power scooters covered by Medicare and Medicaid and then supplying more costly power wheelchairs that beneficiaries did not want, did not need, or could not use. Although it is too soon to determine the full effects, The Scooter Store’s 2013 termination as a contract supplier could potentially result in access issues for beneficiaries residing in the CBP round 1 rebid competitive bidding areas. For example, one round 1 rebid contract supplier we interviewed told us that her company received calls from some of The Scooter Store’s beneficiaries seeking wheelchair repairs. However, this contract supplier and two others told us that some contract suppliers are reluctant or unwilling to repair a wheelchair that they did not originally provide because, if the contract suppliers did the repairs and CMS later determined that The Scooter Store had furnished a wheelchair that did not meet documentation requirements, CMS could recover payments made to the repairing contract suppliers. In the round 1 rebid’s second year, a few contract suppliers—8 percent—had their contracts terminated by CMS or voluntarily withdrew from Medicare, and some had an ownership change. Contract suppliers continued to use subcontractors to provide certain services to beneficiaries in the round 1 rebid competitive bidding areas, but no new agreements were disclosed in 2012. The number of grandfathered suppliers decreased in 2012 to the point that CMS discontinued its monitoring as rental agreements expired. 
By the end of the CBP’s second year, 27 of the original 356 contract suppliers—about 8 percent—had been terminated by CMS or had voluntarily withdrawn from Medicare, according to CMS data. Eleven contract suppliers were terminated—4 in 2012 and 7 in 2011. Nine terminated suppliers were small suppliers as defined for CBP. One terminated supplier was not experienced in one of its competitive bidding areas; all were experienced in their product categories. The 11 terminated contract suppliers had a total of 22 round 1 rebid product category and competitive bidding area combinations. (See table 1.) The oxygen product category and the Miami competitive bidding area were the most affected. Sixteen contract suppliers withdrew voluntarily from Medicare, 7 in 2012 and 9 in 2011. Of these 16 withdrawn suppliers, 13 were small suppliers. Two suppliers that withdrew had no experience in 1 of their product categories; all 16 were experienced in their competitive bidding areas. The 16 suppliers that withdrew had a total of 37 round 1 rebid product category and competitive bidding area combinations. (See table 2.) The oxygen product category and the Miami competitive bidding area were the most affected. During the round 1 rebid’s first two years, 12 of the original 356 round 1 rebid contract suppliers—3 percent—had a change in ownership. (See table 3.) For 11 of the 12 changes, CMS awarded the round 1 rebid contracts to the acquiring entity as the entity assumed the obligations under these contracts. In 2012, the only ownership change involved a contract supplier that purchased another contract supplier but did not assume the purchased supplier’s CBP contracts. In this ownership change, the purchasing contract supplier already had CBP contracts in the same competitive bidding areas for the same product categories, and began serving the purchased contract supplier’s Medicare beneficiaries, including its grandfathered beneficiaries. 
While contract suppliers have continued to use subcontractor suppliers to assist them in furnishing items to CBP-covered beneficiaries, CMS officials told us no contract suppliers disclosed any new subcontracting agreements in 2012 or during the first three months of 2013. As of April 2013, CMS data indicated that 116 distinct contract suppliers have had at least one subcontracting agreement; in total, there were 730 agreements involving 228 distinct subcontractor suppliers. Forty-seven percent (55) of the 116 contract suppliers had one subcontract agreement. The other 61 contract suppliers had multiple subcontracts, including one contract supplier with 50 agreements. Eight of the 116 contract suppliers had subcontract agreements that ended in 2011, 2012, or early 2013. CMS officials also told us that the number of grandfathered suppliers had so diminished that the agency no longer monitored them after the second quarter of 2012. As we previously reported, the number of grandfathered suppliers declined steadily during the rebid’s first year (2011); in December 2011, 22 percent (575 of 2,594) of the grandfathered suppliers were still billing Medicare for CBP-covered beneficiaries they had at the end of December 2010, the year before the CBP began. Comparing the third quarters of 2010, before the rebid began, and 2012, the rebid’s second year, both the number of suppliers and their Medicare allowed charges generally decreased more in the competitive bidding areas than in the comparator areas. (See app. I.) The number of suppliers with Medicare allowed charge amounts of $2,500 or more per quarter decreased an average of 27 percent in the competitive bidding areas, and 5 percent in the comparator areas. (See table 4.) All nine competitive bidding areas and six of the nine comparator areas experienced decreases in those supplier numbers. The Miami competitive bidding area experienced the greatest change, decreasing by 227 suppliers—a 32 percent change. 
The number of large suppliers, which we define as those with quarterly Medicare allowed charges of $100,000 or more, decreased an average of 18 percent in the competitive bidding areas, while there was essentially no change in the average number of large suppliers in the comparator areas. All nine competitive bidding areas and three of the nine comparator areas had decreases in these large suppliers. The Cincinnati competitive bidding area had the greatest percentage decrease in suppliers at this level—32 percent. The total Medicare allowed charges for the same time period also decreased for all nine competitive bidding areas and all nine comparator areas. (See app. II.) The average decrease for the competitive bidding areas was 28 percent, and for the comparator areas, 7 percent. (See table 5.) Three competitive bidding areas shared the highest total charge decrease of 32 percent; for example, the Cincinnati area’s charges decreased about $4.2 million, from $12.9 million in third quarter 2010 to $8.7 million in third quarter 2012. The Orlando competitive bidding area had the lowest percentage decrease—22 percent—a decrease in total charges of about $3.3 million. Among the comparator areas, the highest total charge decrease was in San Diego—15 percent, or about $2 million (from $13.7 million to $11.6 million)—while Virginia Beach had the lowest decrease—0.1 percent, or $17,077 (from $12,603,542 to $12,586,465). The CBP round 1 rebid’s savings for both the Medicare program and the rebid-covered beneficiaries continued in the second year, with CMS reporting total savings of more than $400 million in the rebid’s first two years due to its lower payments, decreased utilization, and lower beneficiary coinsurance. In the rebid’s second year, beneficiary utilization of CBP-covered DME items continued to decrease—more in the rebid’s competitive bidding areas than in the comparator areas. 
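The percentage decreases cited above follow from the standard percent-change computation. As a quick check, the sketch below recomputes them from the report's rounded dollar figures (because the inputs are rounded, the results only approximate the exact percentages reported):

```python
# Recompute the quoted charge decreases from the rounded dollar amounts in the
# text. Results approximate the report's figures (e.g., 32 percent for
# Cincinnati, 15 percent for San Diego, 0.1 percent for Virginia Beach).

def percent_decrease(before: float, after: float) -> float:
    """Percentage decrease from the baseline quarter to the later quarter."""
    return (before - after) / before * 100

cincinnati = percent_decrease(12.9e6, 8.7e6)              # competitive bidding area
san_diego = percent_decrease(13.7e6, 11.6e6)              # comparator area
virginia_beach = percent_decrease(12_603_542, 12_586_465) # comparator area
```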
CMS’s monitoring activities, however, did not indicate beneficiary access issues. As we reported in 2012, we do not assume that all pre-CBP utilization was appropriate, and CBP may be continuing to reduce unnecessary utilization. CMS’s fraud prevention efforts may also be affecting DME utilization. Continued monitoring of CBP experience is important to determine the full effects it may have on Medicare beneficiaries and DME suppliers. It will be important to determine whether the DME utilization trends in the round 1 rebid’s first two years are similar to those in CBP’s other rounds. With the CBP’s 3-year round 1 rebid complete, the CBP’s 2013 round 2 expansion into an additional 100 competitive bidding areas, the 2013 implementation of the national mail-order diabetic testing supplies program, and the 2013 selection of new contract suppliers in the original nine areas for the next 3-year contracts beginning in 2014, significant new data will soon be available to further assess the impact of the program. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix III. HHS also provided technical comments, which we incorporated as appropriate. In its general comments, HHS stated that CMS will continue monitoring the CBP to ensure Medicare beneficiaries are not adversely affected by the program, including continuing to use its real-time claims monitoring system. These monitoring activities are important because the CBP has expanded to include 100 additional competitive bidding areas and a national mail-order program for diabetic testing supplies. HHS also stated that it anticipates the CBP will provide substantial savings for both the Medicare Part B Trust Fund and Medicare beneficiaries. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of Health and Human Services and appropriate congressional committees. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In addition to the contact named above, key contributors to this report were Martin T. Gahart, Assistant Director; Yesook Merrill, Assistant Director; Todd Anderson; Dan Lee; Drew Long; Michelle Paluga; Hemi Tewarson; and Opal Winebrenner. Medicare: Review of the First Year of CMS’s Durable Medical Equipment Competitive Bidding Program’s Round 1 Rebid. GAO-12-693. Washington, D.C.: May 9, 2012. Medicare: The First Year of the Durable Medical Equipment Competitive Bidding Program Round 1 Rebid. GAO-12-733T. Washington, D.C.: May 9, 2012. Medicare: Issues for Manufacturer-level Competitive Bidding for Durable Medical Equipment. GAO-11-337R. Washington, D.C.: May 31, 2011. Medicare: CMS Has Addressed Some Implementation Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program for the Round 1 Rebid. GAO-10-1057T. Washington, D.C.: Sept. 15, 2010. Medicare: CMS Working to Address Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program. GAO-10-27. Washington, D.C.: Nov. 6, 2009. Medicare: Covert Testing Exposes Weaknesses in the Durable Medical Equipment Supplier Screening Process. GAO-08-955. Washington, D.C.: July 3, 2008. Medicare: Competitive Bidding for Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight Is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. 
GAO-07-59. Washington, D.C.: Jan. 31, 2007. Medicare Payment: CMS Methodology Adequate to Estimate National Error Rate. GAO-06-300. Washington, D.C.: March 24, 2006. Medicare Durable Medical Equipment: Class III Devices Do Not Warrant a Distinct Annual Payment Update. GAO-06-62. Washington, D.C.: March 1, 2006. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: Sept. 22, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: Nov. 17, 2004. Medicare: Past Experience Can Guide Future Competitive Bidding for Medical Equipment and Supplies. GAO-04-765. Washington, D.C.: Sept. 7, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004.
To achieve Medicare savings for DME, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required that CMS implement the CBP for certain DME. In 2008, the Medicare Improvements for Patients and Providers Act terminated the first round of supplier contracts and required CMS to repeat CBP round 1—referred to as the round 1 rebid—resulting in the award of contracts to suppliers, with CBP payments that began January 1, 2011. GAO was asked to review issues concerning the rebid's second year of operation—2012. This report reviews the round 1 rebid's effects on (1) Medicare beneficiaries, (2) the market share of contract suppliers, and (3) all suppliers, including both contract and non-contract suppliers (the suppliers not awarded rebid contracts). To examine the effects on Medicare beneficiaries, GAO compared Medicare claims data for 2011 and 2012 with that for 2010, the year before the round 1 rebid. GAO also examined other information about CMS's efforts to monitor the effects of the CBP, and interviewed DME industry representatives and officials from Medicare beneficiary organizations. To examine the effects on both contract and non-contract suppliers, GAO compared Medicare claims data for 2012 with that for 2010 and analyzed other data provided by CMS. The Medicare competitive bidding program (CBP) for durable medical equipment (DME) is administered by the Centers for Medicare & Medicaid Services (CMS) within the Department of Health and Human Services. Under the CBP, only competitively selected contract suppliers can furnish certain DME product categories (such as oxygen supplies and hospital beds) at competitively determined prices to Medicare beneficiaries in designated competitive bidding areas. The CBP's round 1 rebid was in effect for a 3-year period, from 2011 through 2013. It included nine DME product categories in nine geographic areas. 
For CBP monitoring purposes, CMS also selected nine comparator areas that were demographically similar to the rebid areas. GAO's analysis found that in 2012, the second year of the round 1 rebid: The number of beneficiaries furnished DME items included in the CBP generally decreased more in the CBP areas than in the comparator areas. For example, the number of beneficiaries furnished oxygen supplies decreased by about 22 percent in the CBP areas and by about 16 percent in the comparator areas. According to CMS, CBP may have reduced inappropriate usage of DME and these decreases do not necessarily reflect beneficiary access issues. Based on its monitoring tools, which include comparing changes in the health outcomes of beneficiaries in the CBP areas to those in the comparator areas, CMS has concluded that beneficiaries have not been affected adversely by the CBP. In general, a small number of contract suppliers had a large proportion of the market share in the nine competitive bidding areas. The top four contract suppliers generally accounted for a large proportion of the market in all CBP areas, although the top four suppliers for each product category were not the same in every competitive bidding area. CMS has reported that few contract suppliers had contracts terminated by the agency or voluntarily withdrew from Medicare. The total number of DME suppliers and Medicare allowed charges decreased more in the CBP areas than in the comparator areas. For example, the number of suppliers in the CBP areas with Medicare allowed charges of $2,500 or more decreased, on average, 27 percent. In the comparator areas, supplier numbers decreased by 5 percent. The decreases in supplier numbers may reflect other factors, such as CMS's efforts to reduce Medicare DME fraud. The round 1 rebid's first 2 years achieved Medicare cost savings of about $400 million as estimated by CMS, and did not appear to have adversely affected beneficiary access to CBP-covered items. 
However, with CBP's national mail-order diabetic testing supplies program and expansion into an additional 100 bidding areas in July 2013, it will be important for CMS to continue its efforts to monitor the effects of the CBP. In commenting on a draft of this report, HHS cited the results of CMS's monitoring of beneficiaries' access to CBP items as evidence that CBP has not adversely affected beneficiaries.
Since 2000, Congress and OPM have gradually shifted to performance-based pay for senior executives through legislative and regulatory changes. In October 2000, OPM amended its senior executive performance management regulations to require agencies to (1) hold senior executives accountable for their individual and organizational performance by linking performance management with the results-oriented goals of the Government Performance and Results Act of 1993; (2) evaluate senior executive performance using measures that balance organizational results with customer satisfaction, employee perspectives, and any other measures agencies decide are appropriate; and (3) use performance results as a basis for pay, awards, and other personnel decisions. Although the regulations emphasized the use of performance results as the basis for pay and other awards, members of the SES still received the annual across-the-board and locality pay adjustments. In 2002, Congress raised the total annual compensation limit—from Executive Schedule (EX) level I to the total annual compensation payable to the Vice President—for senior executives and other senior professionals in agencies with performance appraisal systems that have been certified by OPM, with OMB concurrence, as making meaningful distinctions based on relative performance as designed and applied. The act instructed OPM and OMB to promulgate regulations regarding certification that, if met by an agency, would allow it to access the higher total compensation cap, which includes bonuses and other forms of compensation. In 2003, Congress changed the basis for how agencies pay their senior executives and the overall SES pay structure. Beginning in January 2004, senior executives no longer received annual across-the-board or locality pay adjustments. 
Agencies are to base pay adjustments for senior executives on individual performance and contributions to the agency’s performance, considering the individual’s accomplishments; unique skills, qualifications, or competencies; significance to the agency’s mission and performance; and current responsibilities. In addition, the SES pay structure changed from six pay levels to a single, open-range pay band with a higher basic pay cap—EX-level III for agencies without certified appraisal systems and EX-level II for agencies with such systems. For calendar year 2008, the pay caps are $158,500 for basic pay (EX-level III), with a senior executive’s total compensation not to exceed $191,300 (EX-level I). If an agency’s senior executive performance appraisal system is certified by OPM and OMB concurs, the caps are increased to $172,200 for basic pay (EX-level II) and $221,100 for total compensation (the total annual compensation payable to the Vice President). To qualify for senior executive pay flexibilities, agencies’ performance appraisal systems are evaluated against nine certification criteria and any additional information that OPM and OMB may require to make determinations regarding certification. OPM’s and OMB’s certification criteria are broad principles that position agencies to use their pay systems strategically to support the development of a stronger performance culture and the attainment of their mission, goals, and objectives. (See app. II for additional information on the certification criteria.) Two levels of performance appraisal system certification are available to agencies—full and provisional. Through a law passed in October 2008, an agency’s certification now lasts for up to 24 months, with the possibility of a 6-month extension by the OPM Director, rather than coverage based on the calendar year. 
Previously, an agency’s certification lasted for 2 calendar years for full certification and 1 calendar year for provisional certification. In addition to SES employees, many agencies use senior employees with scientific, technical, and professional expertise, commonly known as senior-level (SL) and scientific or professional (ST) positions. An agency may apply to OPM and OMB for certification of its SL/ST performance management system, and if its system is certified as making meaningful distinctions in relative performance, an agency may raise the total annual compensation maximum for SL/ST employees to the salary of the Vice President. Beginning in April 2009, the recently passed law allows certified agencies to raise the basic pay cap for SL/ST employees to EX-level II—the same maximum rate of basic pay as SES members, and also exempts SL/ST employees from receiving locality pay. Previously, SL/ST employees under certified appraisal systems had a maximum rate of basic pay equal to EX-level IV plus locality pay up to EX-level III. However, unlike the SES, their individual rate of pay does not necessarily have to be based on individual or agency performance. OPM has a key leadership and oversight role in the design and implementation of agencies’ SES performance-based pay systems by certifying that the agencies’ systems meet the certification criteria before they can receive the pay flexibilities. In our January 2007 report examining the senior executive performance-based pay system, we made a series of recommendations to OPM designed to address issues specific to the performance-based pay system, such as sharing best practices, tracking progress towards goals, and developing a timeline for issuance of certification guidance. We are following up on the status of these recommendations through this report. 
The selected agencies are generally addressing three key areas related to OPM’s and OMB’s certification criteria through their SES performance-based pay systems—factoring organizational performance into senior executive performance appraisal systems, making meaningful distinctions in senior executive performance, and building safeguards into senior executive performance appraisal and pay systems. However, USAID did not provide its PRB members and other reviewing officials with any specific information on organizational performance to help inform their senior executive appraisal recommendations. In our past work on performance management, we identified the alignment of individual performance expectations with organizational goals as a key practice for effective performance management systems. Having a performance management system that creates a “line of sight” showing how unit and individual performance can contribute to overall organizational goals helps individuals understand the connection between their daily activities and the organization’s success. To receive certification of their systems, agencies are to align senior executive performance expectations with the agency’s mission, strategic goals, program and policy objectives, or annual performance plan and budget priorities. While many agencies are doing a good job overall of aligning executive performance plans with agency mission and goals, according to OPM some of the plans do not fully identify the measures used to determine whether the executive is achieving the necessary results, which can affect the executive’s overall performance appraisal. This challenge of explicitly linking senior executive expectations to results-oriented organizational goals is consistent with findings from our past work on performance management. 
To help hold senior executives accountable for organizational results, beginning in 2007 OPM required that, to be certified, agencies demonstrate that at least 60 percent of each senior executive’s performance plan is focused on achieving results and has clear measures associated with those results to show whether the goals have been achieved. The selected agencies have designed their appraisal systems to address OPM’s requirement of aligning individual expectations with organizational goals. For example, in setting expectations for individual performance plans, DOE requires senior executives and supervisors to identify three to five key performance requirements with metrics that the executive must accomplish in order for the agency to achieve its strategic goals. Weighted at 60 percent of the summary rating, the performance requirements are to be specific to the executive’s position and described in terms of specific results with clear, credible measures (e.g., quality, quantity, timeliness, cost-effectiveness) of performance, rather than activities. For each performance requirement, the executive is to identify the applicable strategic goal in the performance plan. To ensure that agencies are implementing their policies for alignment of performance expectations with organizational goals, OPM requires agencies as part of their certification submissions to provide a sample of executive performance plans, the strategic plan or other organizational performance documents for establishing alignment, and a description of the appraisal system outlining the linkage of executive performance with organizational goals. Further, OPM requires agencies to factor organizational performance into senior executive performance appraisals to receive certification of their SES appraisal systems. 
According to OPM and OMB officials overseeing the certification review process, the main sources of organizational performance that agencies use are the performance and accountability reports (PAR); program assessment rating tool (PART) summaries, which capture agencywide as well as program- or office-specific performance; and the President’s Management Agenda (PMA) scorecards, as applicable. However, agencies have the flexibility to determine the format and type of organizational performance information for the performance appraisal process and certification submissions, according to OMB’s lead official for the certification review process. All of the selected agencies have policies in place for factoring organizational performance into senior executive appraisal decisions and have identified common organizational assessments—such as the PMA, PAR, or PART results—for highlighting organizational performance results. As a next step, a few of the agencies, such as NRC and Treasury, have developed customized tools summarizing organizational performance at different levels of the organization, such as the bureau, office, or program levels to help ensure that senior executive appraisal decisions are consistent with organizational performance. For example, NRC provides summary reports capturing office-level performance to rating and reviewing officials to ensure that these officials have the information they need to make consistent assessments between senior executive and organizational performance. At the midpoint and end of the appraisal cycle, NRC’s senior performance officials (SPO)—two top-level executives responsible for assessing organizational performance—conduct assessments for each office that take into account quarterly office performance reports on their operating plans, an interoffice survey on the office’s performance completed by the other directors as identified by NRC, as well as the office director’s self-assessment of the office’s performance. 
To assess bureau-level performance, Treasury uses a departmentwide organizational assessment tool that provides a “snapshot” of each bureau’s performance across various indicators of organizational performance, such as the PAR, PART results, PMA areas, OPM’s Federal Human Capital Survey results, budget data, and information on material weaknesses. PRB members and reviewing officials receive copies of the organizational performance assessments, which serve as a basic framework for reviewing and recommending senior executive ratings, pay, and bonuses to help ensure ratings and pay are consistent with the organization’s performance. According to Treasury’s Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer (CHCO), the indicators of organizational performance are updated throughout the year as organizational performance is always changing and the senior executives need to have a sense of the organization’s performance at all times. Prior to the completion of individual performance ratings, agencies are to communicate organizational performance to senior executives, PRB members, and other reviewing officials—including supervisors who complete the ratings—involved in appraisal decisions to ensure they understand the effect organizational performance can have on rating distributions. Almost all of the selected agencies provided organizational performance assessments and communicated the importance of considering organizational performance in individual appraisals through briefings, training, or document packages for the PRB meetings. One agency, however, did not provide any specific information regarding organizational performance to PRB members and other reviewing officials. 
DOD provided the heads of its components with a departmentwide organizational assessment against its overall priorities for fiscal year 2007 that was to be used in appraising senior executive performance and, as a check across the components, asked for copies of the training given to PRB members and other reviewing officials on factoring organizational performance into senior executive appraisal recommendations. According to the Principal Director to the Deputy Under Secretary of Defense for Civilian Personnel Policy, the components had the flexibility to use the departmentwide assessment and to develop their own organizational assessments. Component organizational assessments were required to be linked to the departmentwide priorities and assessment. Component organizational assessments can provide a level of specificity that enables a clearer connection or “line of sight” between individual executive and organizational performance. Having the components provide the department with their communications of organizational performance and how it was used to inform executive rating decisions provides accountability across the components for the departmental performance management policies, according to this official. DOE provides its PRB members with snapshots of the Consolidated Quarterly Performance Reports relevant to the senior executives that measure how each departmental element performed respective to the goals and targets in its annual performance plan. According to the Director of the Office of Human Capital Management, the Deputy Secretary also verbally briefed PRB members on the importance of considering organizational performance in appraising executive performance. For its most recently completed appraisal cycle, State for the first time provided PRB members an organizational assessment composed of various indicators from the most recent PART, PMA scorecard, and PAR. 
For the previous appraisal cycle, PRB members received various documents, such as senior executives’ performance plans and appraisals and the performance management policy, but did not receive any specific assessments of organizational performance. According to a senior human resources official at State, based on OPM’s and OMB’s feedback for its 2008 certification submission, the agency has committed to providing organizational performance results in its guidance to the PRB members on how to consider organizational performance in making individual senior executive appraisal recommendations, among other things. In contrast, USAID did not provide its PRB members and other reviewing officials with any specific information on organizational performance to help inform their senior executive appraisal recommendations for the fiscal year 2007 appraisal cycle. According to a senior human resources official at USAID, the agency does not provide PRB members and reviewing officials with these organizational performance assessments because they know where to find the relevant information applicable for each senior executive’s performance appraisal given the small size of the agency. Nevertheless, providing and communicating uniform organizational performance assessments can help ensure consistency and clarity in how organizational performance is considered in appraising executive performance among PRB members, rating officials, and other reviewers. According to USAID’s Deputy Director for Human Resources, USAID has developed various indicators of organizational performance—such as individual operating unit reports, the Agency Financial Report, the PMA, PART results, and the Congressional Budget Justification outlining agency performance and other information—which are readily available for use by PRB members and other reviewing officials responsible for appraising senior executive performance. 
Effective performance management systems make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. To receive OPM certification and OMB concurrence, agencies are to design and administer performance appraisal systems that make meaningful distinctions based on relative performance through performance ratings and the resulting performance payouts (e.g., bonuses and pay adjustments). To address the certification criteria of performance and pay differentiation, agencies are to use multiple rating levels—four or five levels, including a level for outstanding performance—and recognize the highest-performing executives with the highest ratings and largest pay adjustments and bonuses, among other things. Five of the selected agencies designed their appraisal systems to help allow for differentiation in assessing and rewarding executive performance by establishing tier structures or prescribed performance payout ranges based on the resulting performance rating. For example, NRC uses three tiers, called position groups, to differentiate its senior executives’ basic pay and the resulting bonus amounts based on ratings received at the end of the appraisal cycle. NRC divides its executives into three groups (A, B, and C) based on the position’s difficulty of assignment and the scope of responsibilities and annually sets basic pay ceilings for each of the groups tied to the EX pay levels. NRC uses the position groups and resulting performance ratings as the basis for its bonus structure to help ensure that executives in the higher position groups with the higher performance ratings receive the larger bonuses, as shown in table 1. In fiscal year 2007, an executive in the highest position group A who received an outstanding rating was to receive a $30,000 bonus, while an executive in the lowest group C with the same rating was to receive a $20,000 bonus. 
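For illustration, the fiscal year 2007 NRC bonus amounts quoted above can be represented as a simple group-and-rating lookup. This is a sketch, not NRC's actual system: only the two amounts the text reports are included, and the dictionary keys and function name are assumptions.

```python
from typing import Optional

# Partial lookup of NRC's fiscal year 2007 bonus structure as described in the
# text. Only the two quoted amounts are included; other group/rating cells are
# left out rather than invented.
FY2007_NRC_BONUS = {
    ("A", "outstanding"): 30_000,  # highest position group, outstanding rating
    ("C", "outstanding"): 20_000,  # lowest position group, outstanding rating
}

def bonus_for(group: str, rating: str) -> Optional[int]:
    """Return the prescribed bonus, or None where the text gives no amount."""
    return FY2007_NRC_BONUS.get((group, rating))
```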
According to a senior human resources official at NRC, the bonus range for executives in group C with excellent ratings was intended to help allow for meaningful distinctions in performance to be made within that group, as well as to give the agency flexibility in the monetary amounts of the bonuses awarded. State uses a six-tier structure to help differentiate executive performance, based on ratings and bonuses, and to allocate pay adjustment amounts for its senior executives; senior executives placed in the highest tier (I) receive a larger percentage pay adjustment than those in a lower tier (V), who received the annual percentage adjustment to the EX pay schedule—2.5 percent in 2008. In 2008, DOD implemented a departmentwide tier structure to help ensure comparability and transparency in SES position and compensation management, with pay ceilings for each of the tiers tied to EX-level II and III pay rates. Specifically, DOD assigned SES positions to three tiers based on the position’s impact on mission, level of complexity, span of control, and influence in joint, national security matters, among other things. According to the Principal Director, DOD is now using the tier structure to differentiate executive performance payouts to recognize that high-level performance in some positions has more impact than comparable performance in other positions. Further, DOD uses a mathematical formula to differentiate the performance payout amounts among its senior executives based on the recommended performance rating, performance score, and performance payout shares, as shown in table 2. In determining the number of performance payout shares to recommend, rating officials are to consider areas such as the executive’s level of responsibility, mission impact, current basic pay, and performance against the relative performance of other executives, if applicable. 
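The report does not give DOD's actual formula, but a share-based pay pool computation of the general kind described works roughly as follows. All names and dollar amounts in this sketch are hypothetical, and real-world refinements such as the annual EX pay adjustment and total pool salaries are omitted:

```python
# Hypothetical sketch of a share-based pay pool payout, in the spirit of the
# DOD approach described in the text. Not DOD's actual formula.

def payout_per_share(pool_budget: float, total_shares: float) -> float:
    """Value of one performance share: the pool's budget spread over all shares."""
    if total_shares <= 0:
        raise ValueError("pay pool must contain at least one share")
    return pool_budget / total_shares

def executive_payout(shares: float, share_value: float) -> float:
    """An executive's payout: recommended shares times the value of one share."""
    return shares * share_value

# Illustrative pool: higher-rated, higher-impact executives get more shares.
pool_budget = 300_000.0                                # assumed payout budget
shares = {"exec_a": 4.0, "exec_b": 3.0, "exec_c": 1.5} # hypothetical executives
share_value = payout_per_share(pool_budget, sum(shares.values()))
payouts = {name: executive_payout(n, share_value) for name, n in shares.items()}
```

One property of this design is that the pool budget is exhausted exactly, while relative payouts track the recommended shares, so differentiation in ratings translates directly into differentiation in dollars.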
The formula for computing the actual amount of the performance payout takes into account various indicators, such as the budget for bonuses and pay increases, annual adjustment to the EX pay rates, and total salaries and number of performance shares for all the senior executives in the pay pool. DOE sets prescribed ranges tied to performance ratings for its senior executives prior to finalizing ratings to help create a greater distinction between bonus amounts for top and middle performers and differentiate pay adjustment caps. Specifically, for fiscal year 2007, DOE required that all executives receiving an outstanding rating receive a bonus of 12 to 20 percent of basic pay, while executives receiving a meets expectations rating were eligible to receive a bonus of 5 to 9 percent at management’s discretion. For pay adjustments, executives were eligible to receive a discretionary increase of up to 5 or 7 percent of basic pay if rated at meets expectations or outstanding, respectively. Executives who received needs improvement or unsatisfactory ratings were not eligible for any bonuses or pay increases. We have reported that using multiple rating levels provides a useful framework for making distinctions in performance by allowing an agency to differentiate among individuals’ performance. As required for certification, all of the selected agencies have four or five rating levels in place for assessing senior executive performance. For the fiscal year 2007 appraisal cycle, senior executives were concentrated at the top two rating levels, as shown in figure 1. At State and USAID, about 69 percent and 60 percent of senior executives, respectively, received the top performance rating. At the other four agencies, the largest percentage of executives received the second highest rating—ranging from about 65 percent at NRC to 45 percent at Treasury. 
Conversely, less than 1 percent of senior executives across the selected agencies received a rating below fully successful (level 3). As a point of comparison, about 47 percent of career SES governmentwide received the top performance rating for fiscal year 2007, according to governmentwide data as reported by OPM. Similar to the selected agencies, less than 1 percent of career senior executives governmentwide received ratings below fully successful for fiscal year 2007. While OPM officials have certified that the selected agencies' systems are making meaningful distinctions, performance ratings at the selected agencies raise questions about the extent to which meaningful distinctions based on relative performance are being made and how OPM applies this criterion, as indicated in figure 1. As part of making meaningful distinctions in performance, OPM has emphasized to agencies through its certification guidance that its regulations prohibit forced distribution of performance ratings and that agencies must avoid policies or practices that would lead to forced distributions or even the appearance of them. A senior OPM official acknowledged that it is difficult for OPM to determine if an agency is using forced distributions through its review of agencies' aggregate appraisal results and policy documents. The official indicated that OPM looks at trends in the data across different components of agencies for statistical improbabilities, such as an office in which the same percentage of SES members receives an outstanding rating each year, a pattern that could be explained by a quota system. OPM has not provided specific guidance to agencies on how to make meaningful distinctions in senior executive performance while avoiding the perception of forced distributions of performance ratings.
OPM has an opportunity to strengthen its communication with agencies and executives regarding the importance of using a range of rating levels when assessing performance while avoiding the use of forced distributions. Communicating this information to agencies will help them begin to transform their cultures into ones in which a fully successful rating is valued and rewarded. Senior-level officials at three of the selected agencies recognized the challenge in using a range of rating levels when appraising senior executive performance. In a memo to all SES members, DOE's Deputy Secretary stated his concern with the negligible difference in bonuses and pay adjustments among executives receiving the top two rating levels and stressed the importance of making meaningful distinctions in the allocation of compensation tied to performance ratings in the upcoming appraisal cycle. According to State's Deputy Assistant Secretary for the Bureau of Human Resources, historically the vast majority of senior executives have received the highest rating of outstanding, including for fiscal year 2007. Since the implementation of performance-based pay, this official said, State has struggled with changing the culture and the general perception among senior executives that any rating less than outstanding is a failure. According to DOD's Principal Director, DOD is communicating the message that the SES performance-based pay system recalibrates performance appraisals as a way to help change the culture and make meaningful distinctions in performance. Under this message, a fully successful or equivalent rating is a high standard as well as a valued, quality rating, and levels above fully successful require extraordinary results. Part of this communication is developing common benchmark descriptors for the performance elements at the 5, 4, and 3 rating levels.
The Principal Director said she hopes that developing common definitions for the performance elements at all three levels will aid the development of a common understanding and, in turn, lead to more meaningful distinctions in ratings. The agency official recognizes that this shift will require a significant cultural change, and that such cultural transformation takes time. The percentage of eligible executives who received bonuses or pay adjustments varied across the selected agencies for fiscal year 2007, as shown in table 3. The percentage of eligible senior executives who received bonuses ranged from about 92 percent at DOD to about 30 percent at USAID, with the average dollar amount of bonuses ranging from about $11,034 at State to about $17,917 at NRC. All eligible executives at State received pay adjustments, while about 88 percent of eligible executives at DOE received adjustments, with the average dollar amount of such adjustments ranging from about $5,414 at NRC to about $6,243 at DOE. As a point of comparison, about 75 percent of career senior executives received bonuses, with an average dollar amount of $14,221, for fiscal year 2007, according to OPM's governmentwide data report. The governmentwide percentage of career senior executives receiving pay adjustments and the average dollar amount of such adjustments in the aggregate are not available from OPM's governmentwide data report for fiscal year 2007. The selected agencies have policies under which only senior executives who receive a rating of fully successful (level 3) or higher are eligible to receive bonuses or pay increases. Also affecting executives' bonus eligibility are the agencies' policies on awarding bonuses to executives who also received Presidential Rank Awards that year, which varied among the selected agencies. NRC, State, and Treasury do not allow executives to receive both awards in the same year, while DOD, DOE, and USAID allow the practice.
According to OPM regulations, agencies are to recognize the highest-performing executives with the highest ratings and largest bonuses and pay adjustments. At five of the selected agencies, the highest-performing executives (rated at level 5) made up the greatest percentage of eligible executives receiving bonuses. At NRC, all eligible executives rated at the top two levels received a bonus. At all the agencies, the executives rated at the highest level received the largest bonuses on average, about $23,333 at NRC compared to about $11,034 at State. State awarded bonuses only to executives receiving outstanding ratings for fiscal year 2007. According to State's senior human resources official, State does not have an official policy prohibiting those receiving ratings of exceeds expectations or fully successful from receiving a bonus. Rather, the agency official stated that State's decision to award bonuses only to executives who received outstanding ratings was due to budget constraints and an effort to keep the SES parallel with the SFS in the allocation of bonuses and pay adjustments. In addition, senior executives at NRC and USAID rated at fully successful (level 3) did not receive bonuses (see fig. 2). In a memo to agencies on the certification process, OPM has stated that it expects that senior executives who receive a fully successful or higher rating and are paid at a level consistent with their current responsibilities will receive a performance-based pay increase. According to a senior OPM official, agencies are not required to give these executives pay increases, but OPM considers fully successful to be a good rating and encourages agencies to recognize and reward executives performing at this level. At the selected agencies, the majority of eligible senior executives rated at fully successful received pay adjustments for fiscal year 2007, as shown in figure 3.
At some of the selected agencies, the highest-performing executives (rated at level 5) did not make up the greatest percentage of executives receiving pay adjustments or receive the largest increases on average. Specifically, at Treasury, about 95 percent of eligible executives rated at level 4 received a pay adjustment, compared with about 91 percent of eligible executives rated at level 5 and about 90 percent rated at level 3. At NRC, all of the eligible executives rated at level 5 and level 3 received pay adjustments, compared with about 92 percent of eligible executives rated at level 4. For all the agencies except Treasury, the executives rated at the highest level received the largest pay adjustments on average, about $7,473 at USAID compared to about $6,133 at NRC. At Treasury, executives rated at levels 5, 4, and 3 on average received about the same pay adjustment amounts, primarily due to pay cap issues. We have reported that the federal government as a whole may face challenges in offering competitive compensation to its senior leaders. In 2003, about 70 percent of senior executives received the same basic pay due to compression, which occurred when their pay reached the statutory cap of EX-level III. In 2004, the SES performance-based pay system and certification process provided an interim solution to this issue of pay compression by creating a single, open-range pay band and allowing agencies to increase the basic pay cap for their senior executives to EX-level II upon certification of their performance appraisal systems by OPM with OMB concurrence. While the pay cap was raised for certified agencies, agencies are to reserve the pay rates above EX-level III for truly outstanding performers only, which in effect slows the growth of senior executives' pay within the pay band.
According to OPM regulations, the rates of basic pay higher than EX-level III but less than or equal to EX-level II are generally reserved for senior executives who have demonstrated the highest levels of individual performance and/or made the greatest contributions to the agency's performance, or newly appointed senior executives who possess superior leadership or other competencies. The basic pay for senior executives at the selected agencies shows that pay compression may be a problem in the future for some agencies. Overall, however, only a small percentage of senior executives at the selected agencies have their basic pay capped out at EX-level II ($172,200) in 2008. Specifically, about half to three-quarters of senior executives at the selected agencies are paid at or above EX-level III ($158,500) in 2008, after performance-based pay adjustments were made for the fiscal year 2007 performance appraisal cycle, as shown in figure 4. For example, at State, about 76 percent of senior executives are paid at or above EX-level III, while about 50 percent of senior executives at DOD and USAID are paid at these rates. Of the senior executives paid at or above $158,500, the percentage of senior executives who are paid at the governmentwide pay cap for 2008 ($172,200) varies across the agencies. Specifically, at State and DOE, about 26 percent of senior executives are paid at the pay cap for 2008, while only 1 percent of senior executives at DOD and none at USAID are paid at this cap. OPM has found that about 13 percent of SES members governmentwide are paid at the pay cap based on the fiscal year 2007 performance appraisal data it received from agencies. According to a senior OPM official, the SES performance-based pay system was never intended to fix the problem of pay compression that occurred prior to 2004 or be the answer to future pay compression issues. OPM recognizes that pay compression is a problem in the SES.
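The cap mechanics behind this compression can be illustrated with a short sketch. The dollar figures are the 2008 rates cited above; the function and its example inputs are hypothetical, and actual agency rules (such as reserving pay above EX-level III for the highest performers) are not modeled.

```python
# Minimal sketch of how a basic-pay ceiling limits a performance-based pay
# adjustment. The 2008 dollar figures come from the report; the function
# and its example inputs are hypothetical.

EX_LEVEL_II = 172_200   # 2008 SES basic pay cap with a certified system
EX_LEVEL_III = 158_500  # 2008 cap without certification

def capped_pay(current_pay, proposed_increase, certified=True):
    """Return (new_pay, amount_lost_to_cap) after applying the ceiling."""
    cap = EX_LEVEL_II if certified else EX_LEVEL_III
    uncapped = current_pay + proposed_increase
    new_pay = min(uncapped, cap)
    return new_pay, uncapped - new_pay

# An executive at $170,000 granted a $5,100 (3 percent) increase hits the
# certified-system cap and loses part of the increase:
print(capped_pay(170_000, 5_100))  # (172200, 2900)
```

This is the dynamic the report describes: once executives cluster near the cap, part of any earned increase is forfeited, which narrows the room for making pay-based distinctions among top performers.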
While the majority of senior executives at the selected agencies have yet to reach the governmentwide basic pay cap, officials from two of the selected agencies recognized the challenge of making distinctions in executive performance given potential pay compression issues. Specifically, Treasury’s Deputy Assistant Secretary for Human Resources and CHCO said an agency does not have enough room in the governmentwide pay band to fully recognize the outstanding performers through the appraisal system since the best performers are already near the top of the pay range and their performance payouts in the form of basic pay increases are limited. DOD’s Principal Director said when pay increases are not possible given salary cap issues, bonuses are a tool for components to reward their executives’ performance for achieving results. NRC’s three position groups and the associated pay ceilings are intended, in part, to help reserve pay above EX-III for those executives who demonstrate the highest levels of performance, including the greatest contribution to organizational performance as determined through the appraisal system, according to a senior human resources official at NRC. A senior executive would not receive a pay increase if the executive had already reached the pay ceiling for the applicable position group. While there is little room for pay increases within the pay bands for each position group, the agency official indicated that NRC tries to give pay adjustments and generous bonus amounts when possible to acknowledge that their senior executives are high-performing individuals. To identify possible areas and options for improvement to its performance appraisal and pay system including the tier structure, NRC convened an executive working group of PRB members and senior executives from different areas and position groups. 
As part of its June 2008 findings, the working group recommended retaining the three position groups and existing pay ceilings with slight revisions to the positions that fall within each group. NRC management accepted this recommendation, according to the agency official. We have reported that agencies need to have modern, effective, credible, and validated performance management systems in place with adequate safeguards to ensure fairness and prevent politicization and abuse. All of the selected agencies have safeguards, including higher-level reviews of performance appraisal recommendations, PRBs, and transparency in communicating the aggregate results, although agencies varied in how they implemented such safeguards. Higher-level reviews. By law, as part of their SES appraisal systems, all agencies must provide their senior executives with an opportunity to view their performance appraisals and to request a review of the recommended performance ratings by higher-level officials, before the ratings become final. The higher-level reviewer cannot change the initial rating given by the supervisor, but may recommend a different rating in writing to the PRB that is shared with the senior executive and the supervisor. For example, according to State's policy, an executive may request a higher-level review of the initial rating in writing prior to the PRB convening, at which time the initial summary rating, the executive's request, and the higher-level reviewer's written findings and recommendations are considered. The PRB is to provide a written recommendation on the executive's summary rating to State's Director General of the Foreign Service and Director of Human Resources, who makes the final appraisal decisions. Performance review boards. All agencies must establish one or more PRBs to help ensure that performance appraisals reflect both individual and organizational performance and that rating, bonus, and pay adjustment recommendations are made consistently.
The PRB is to review senior executives’ initial summary performance ratings and other relevant documents and make written recommendations on the performance of the senior executives to the agency head or appointing authority. When appraising a career appointee’s performance or recommending a career appointee for a bonus, more than one-half of the PRB’s members must be SES career appointees. The selected agencies varied in their PRB structures and who provided the final approval of the appraisal decisions. On the one hand, given its small number of senior executives, USAID has one PRB that is responsible for making recommendations to the Administrator for his/her final approval on all rated career executives’ annual summary ratings, bonuses, performance-based pay adjustments, and Presidential Rank Award nominations. On the other hand, DOD has multiple PRBs within and across its components and agencies with separate authorizing officials who give the final approval of rating and performance payout recommendations. As another level of review after the PRB, DOE convenes a Senior Review Board—comprised mainly of political appointees—to review and approve the PRB recommendations for ratings, pay adjustments, and bonuses, and look for consistency in recommendations across the senior executives in headquarters, the field, and the various organizations within DOE. The Director of the Office of Human Capital Management said the Deputy Secretary, who serves as the chair, ultimately makes the final decisions on senior executives’ ratings, pay adjustments, and bonuses. Transparency in communicating aggregate appraisal results. Agencies should communicate the overall aggregate results of the performance appraisal decisions—ratings, bonuses, and pay adjustment distributions—to senior executives while protecting individual confidentiality, and as a result, provide a clear picture of how the executive’s performance compares with that of other executives in the agency. 
Further, as part of its certification decisions, OPM requires agencies to brief their SES members on the results of the completed appraisal process to make sure that the dynamics of the general distribution of ratings and accompanying rewards are fully understood. All the selected agencies communicated the aggregate appraisal results to senior executives, although their methods of communication and the types of information provided varied. Treasury and DOD posted the aggregate rating, bonus, and pay adjustment distributions for senior executives on their Web sites with comparison of data across previous fiscal years. NRC sent an e-mail to all senior executives providing the percentage of executives at each rating level and the percentage who received bonuses and pay adjustments, as well as the average dollar amounts. According to a senior human resources official at NRC, the agency periodically holds agencywide “all hands” SES meetings where the results of the appraisal cycle, among other topics, are communicated to executives. The Deputy Secretary of DOE provides a memo to all senior executives summarizing the percentage of executives at the top two rating levels and the average bonus and pay adjustment amounts, as well as OPM’s governmentwide results as a point of comparison. USAID communicated the aggregate SES appraisal results to SES members throughout the appraisal cycle. In a February 2008 notice, USAID communicated to all SES members the pay adjustment distributions in ranges by rating level for the fiscal year 2007 appraisal cycle. In a September 2008 e-mail to all SES members and rating officials at the end of the appraisal cycle, USAID communicated the aggregate performance rating distributions for the past two appraisal cycles for fiscal years 2006 and 2007. 
While the selected agencies all shared aggregate appraisal results with their senior executives, the results of the OPM SES survey show that the communication of overall performance appraisal results is not widely practiced throughout the government. Specifically, 65 percent of respondents said that they were not given a summary of their agency's SES performance ratings, bonuses, and pay adjustments. At the June 2008 forum with agency executive resources staff where the survey results were shared, OPM officials emphasized the importance of communicating aggregate appraisal results to all senior executives. According to a senior OPM official, agencies need to determine how best to communicate aggregate appraisal results in a way that supports their different cultures and practices. The official said OPM plans to continually monitor how well the agencies are communicating aggregate appraisal results through the certification review process. To ensure that agencies' senior executive appraisal systems are designed and implemented to address the certification criteria, OPM and OMB, as applicable, provide continuing oversight by issuing guidance to agencies on revisions to the certification process, using tools and other initiatives to help assess how agencies are implementing their SES performance-based pay systems, providing training and forums, and interacting with agencies on the review of their certification submissions. While generally satisfied with OPM's and OMB's oversight, officials at the selected agencies said OPM could strengthen its communication with agencies and executives on how it uses the SES performance appraisal data and the correlation between ratings and performance pay in determining whether agencies are making meaningful distinctions based on relative performance. In addition, senior-level officials at the selected agencies identified a need for increased efficiency in the certification submission process.
Providing agencies with clear and timely guidance is one way for OPM and OMB to effectively communicate upcoming revisions to the certification process. OMB does not issue its own guidance to agencies, but reviews OPM's guidance to agencies, according to OMB's lead official in the certification review process. Officials at five of the selected agencies said that, in the past, OPM has revised its guidance midway through the appraisal cycle, which did not allow agencies sufficient time to change their systems in order to receive certification for that calendar year. Recognizing that it was late in issuing the guidance in 2006, OPM has since issued guidance in the fall via memos to agency heads. In the future, OPM plans to continue issuing any changes to the guidance in the fall, according to a senior OPM official, since this is when agencies are finishing up the performance appraisal cycle and starting to set expectations for the next cycle. This should also provide agencies with adequate time to revise their appraisal systems to reflect any new requirements before the certification submission deadline at the end of June. In light of changes to the law in October 2008, in the near future OPM plans to issue regulations and revised certification guidance to agencies reflecting the modifications to SL/ST basic pay rates for certified appraisal systems and changing the certification cycle from calendar-year-based coverage to coverage of up to 24 months. OPM and OMB use tools and other initiatives—such as the SES Performance Appraisal Assessment Tool (SES-PAAT), the correlation coefficient, and the governmentwide survey to all SES members on performance-based pay—to help assess how agencies are implementing their performance-based pay systems and addressing the certification criteria.
Overall, selected agency officials were in favor of OPM and OMB using these tools and initiatives, although officials from three of the selected agencies expressed concern about how OPM calculated the correlation coefficient and the effect it had on the resulting score. In 2007, OPM developed the SES-PAAT to help streamline the certification process, improve the efficiency of its oversight process, and offer a more transparent and organized way for agencies, OPM, and OMB to examine SES appraisal systems, among other things. OPM first required agencies with fully certified SES appraisal systems to use this tool when requesting full certification for 2009 and 2010. Based on a set of questions that relate to the certification criteria, the SES-PAAT helps clarify the certification criteria and quantifies aspects of the certification package that agencies had previously supported through narrative form. In making an agency’s certification decision, OPM and OMB consider the SES-PAAT score and the quality of the provided supporting documentation, such as the sample of individual performance plans. OPM is hopeful that the SES-PAAT will improve the efficiency of its oversight process and the feedback provided to the agencies, but officials from our selected agencies that completed the SES-PAAT have mixed views on using this tool. OPM removed one office from the review process for SES-PAAT submissions with the intention of spending less time overseeing the continuation of full certifications and more time focusing on provisional certifications or those agencies that were in danger of dropping from full to provisional certification, according to a senior OPM official. 
Officials from the two selected agencies that completed the SES-PAAT for the first time said the certification submission process was more efficient with less documentation submitted overall; however, the process was still labor intensive and time consuming, specifically with regard to the sample of performance plans required. Even though the SES-PAAT requires agencies to submit a smaller sample of performance plans, a senior human resources official said her agency needed to do a complete review of its SES performance plans to ensure that all the plans were adequate for certification approval. To help assess in part how agencies are meeting the pay differentiation certification criterion, OPM is using a metric based on a correlation coefficient that summarizes the strength of the relationship between SES members' ratings and their performance-based pay adjustments and bonuses as part of the Human Capital Assessment and Accountability Framework's systems, standards, and metrics. Given that at least 60 percent of executives' performance ratings are to be based on organizational results, a senior OPM official said calculating the relationship between executive ratings and performance pay provides an indication of how well an agency is recognizing its executives based on organizational results achieved. Officials from three of the selected agencies expressed concern about how OPM calculated the correlation coefficient and the effect it had on the resulting score. Specifically, OPM decided to include in its calculations those senior executives who received Presidential Rank Awards but, because of their agencies' policies, were not eligible for and did not receive bonuses. As a result, the coefficients for those agencies may show a weaker connection between SES ratings and performance pay because highly rated executives did not receive bonuses.
A senior OPM official said OPM recognizes that the decision to include all executives, regardless of their bonus eligibility, in the correlation coefficient may have a negative effect on an agency's coefficient, especially in the case of smaller agencies with few SES members, but it is a policy decision that OPM has made to ensure that the coefficients are calculated consistently across the government. For small agencies, OPM said that a correlation coefficient may not be appropriate for determining how the agency is addressing the pay differentiation criterion; rather, OPM will review the mean, median, and mode of the agencies' total compensation, including pay adjustments and bonuses, as applicable, to determine whether higher-rated executives were rewarded appropriately. In January 2008, OPM conducted a governmentwide survey of all SES members to evaluate the performance-based pay system. While OPM found considerable variability in the executives' responses across the different agencies, according to OPM the overall results show that the vast majority of executives believe pay should be based on performance and that areas for improvement exist, for example, in communicating aggregate appraisal results to senior executives. According to selected agency officials, the SES survey results were very helpful and useful to their agencies. We previously recommended that OPM develop a strategy to allow it, other executive agencies, and Congress to monitor the progress of implementation of the senior executive performance-based pay system. The SES survey could be a vehicle for regularly monitoring progress in the future. OPM has not committed to administering the survey on an ongoing basis, in part due to the concern of over-surveying agency officials, but plans to revisit the idea of administering the survey again in the next several years.
According to a senior OPM official, the Federal Human Capital Survey is administered every 2 years and provides OPM with the opportunity to monitor senior executives' satisfaction with the appraisal process among those who respond, including whether they consider their appraisals to be a fair reflection of their performance. When OPM does decide to administer the SES survey again, according to the OPM official, it plans to target the relevant issues of the day while retaining some of the original questions in order to track trends over time. To help facilitate its communications and interactions with agency officials and the executive resources community, OPM periodically provides training and holds forums for agency officials to discuss different aspects of the SES performance-based pay system and the certification process, among other topics. Selected agency officials found the forums to be useful and helpful in understanding the certification process and requirements while allowing agencies to share lessons learned from and experiences with the certification process. OPM also finds these forums helpful in gathering agency feedback, which it considers in future revisions to the certification process and other human capital initiatives, according to a senior OPM official. For example, in December 2007 OPM conducted four training workshops for all interested agency executive resources officials on how to complete the SES-PAAT and plans to hold sessions in December 2008 and January 2009. OPM also holds forums five times a year for executive resources staff from all agencies. At these forums, OPM and agency officials have the opportunity to discuss common concerns, obtain status updates on various OPM initiatives, and learn about future plans for the certification process and other human capital areas.
In addition, the CHCO Council, chaired by the OPM Director, works with agencies to develop and share leading practices in implementing human capital initiatives. For example, the CHCO Council periodically holds training academy sessions that are open to agency officials other than CHCOs to highlight and showcase human capital practices related to senior executive pay and certification issues. Specifically, over the last 2 years, the CHCO Council has held several training academy sessions related to SES performance management and pay systems, the SES-PAAT, and lessons learned from the governmentwide SES survey results. In our past work we recommended that OPM work with the CHCO Council to develop a formal mechanism for sharing leading practices for implementing human capital initiatives, such as the SES certification process. OPM has addressed this recommendation by inviting all levels of agency officials to attend CHCO Council training academy sessions when relevant topics, such as SES performance management and the SES survey results, were featured. Moving forward, the CHCO subcommittee on performance management plans to partner with OMB's Performance Improvement Council to make improvements on SES certification and other human capital efforts. OPM provides oversight to the certification process by communicating and working directly with agencies to help them improve their systems. According to OMB's lead official in the certification review process, OPM takes the lead on the certification review process and OMB has a concurrence role, focusing most of its review on specific aspects of agencies' certification submissions, including aligning executive performance expectations with organizational and program goals; ensuring executives' goals are sufficiently results-oriented and challenging to drive improved performance; measuring organizational performance; and linking organizational results to the performance rating distribution.
Overall, the selected agencies have positive working relationships with OPM on executive resources issues. For example, officials at five of the selected agencies found that working with OPM on the individual performance plans prior to submitting the certification package was helpful and that OPM provided useful feedback at this step in the certification process. Through its executive resources forums, OPM has also communicated directly with agencies on SES performance-based pay and the certification process, including sharing key results from the SES survey with agencies. However, OPM could strengthen its communication with agencies and executives on how it uses the SES performance appraisal data and correlation coefficient in determining whether agencies are making meaningful distinctions based on relative performance as measured through the performance and pay differentiation certification criteria. Further communication from OPM is important in order for agencies to have a better understanding of how they are being held accountable for these certification criteria and make the necessary improvements to their systems to maintain certification. Officials at four of the selected agencies said they are unclear about how OPM uses the SES appraisal data to assess whether agencies are meeting these criteria and making meaningful distinctions in performance overall. In addition, officials at four of the selected agencies said that the communication they have received from OPM individually and through broader forums, such as OPM’s executive resources forums and certification guidance, has not provided them with a clear sense of how OPM is using the correlation coefficients to determine how agencies are addressing the pay differentiation criterion. 
For the coefficients based on fiscal year 2006 appraisal data, OPM provided each agency with its coefficients and technical information explaining the concept of a correlation coefficient, but did not communicate to agencies how the scores were used for certification decisions. In addition, OPM gave this information only to PMA-scored agencies. OPM provided all agencies that have 10 or more senior executives with their correlation coefficients based on the fiscal year 2007 appraisal data, along with some contextual information, but this information did not address the concerns expressed by officials at the selected agencies regarding how the correlation coefficient is being used in certification decisions. With respect to agencies’ working relationships with OMB, officials at five of the selected agencies had little to no direct contact with OMB through past reviews of their certification submissions and did not have a clear understanding of OMB’s role in the certification review process. However, while their interaction with OMB and understanding of its role was limited, the selected agency officials were satisfied overall with how they received OMB’s feedback through OPM. OMB’s feedback on agencies’ systems is most commonly communicated to agencies via the letter that it sends to OPM stating its concurrence or nonconcurrence with OPM’s certification recommendation, according to OMB’s lead official in the certification review process. For the 2008 certification review process, OMB is working with agencies to identify areas of improvement for their appraisal systems and asking agencies to commit to working with OPM and OMB to address these areas. This communication and feedback to agencies is taking place prior to receiving OPM’s and OMB’s certification decisions, according to the OMB official. OPM is considering phasing out the distinction between full and provisional certification once all agencies have received full certification.
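The pay differentiation measure referred to above is a correlation coefficient between executives’ ratings and their performance pay. As an illustration only (OPM’s actual formula and underlying data are not described in this report), the following Python sketch computes a Pearson correlation coefficient for a hypothetical agency:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: ratings on a 5-level scale and bonus amounts in dollars
ratings = [5, 5, 4, 4, 3, 5, 4, 3]
bonuses = [12000, 11000, 8000, 7500, 0, 12500, 8200, 0]

r = pearson_r(ratings, bonuses)
```

A coefficient near 1.0 would indicate that higher-rated executives consistently received larger payouts, while a value near zero would suggest ratings and pay are unrelated; without knowing how OPM interprets values in between, agencies are left unsure how the score affects certification.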
Provisionally certified agencies receive the same pay flexibilities—access to higher basic pay and total compensation—as those with fully certified systems. Currently, agencies’ systems are required to meet the same criteria for both types of certification, with the only distinction being fairly subtle differences in the degree to which the agencies meet the criteria, according to OPM. When certification began in 2004, an agency needed to meet only four of the nine criteria and demonstrate that its system in design would meet the remaining certification criteria to receive provisional certification. We reported that it would be important for OPM to continue to monitor the certification process, especially for those agencies’ systems with provisional certification, to help ensure that provisional certifications do not become the norm. After that initial year, agencies’ systems were required to meet all nine certification criteria to receive provisional certification, which according to a senior OPM official was done in part because agencies would have had the opportunity to produce performance data showing that meaningful distinctions were made by the second year. OPM expects that all applicant SES appraisal systems will meet full certification criteria in the future. As of October 28, 2008, 76 percent of the agency certification submissions reviewed by OPM and OMB had received full certification for calendar year 2008. According to a senior OPM official, OPM is currently revisiting the certification regulations to determine what revisions should be made. Regarding potential refinements to the certification process, officials at the selected agencies identified a need for increased efficiency in the certification submission process given the large quantity of documents required and the time it currently takes to gather the certification package.
For example, officials at five of the selected agencies are in favor of moving to an electronic submission process, if it reduces the amount of paper documentation that is required and streamlines the process of compiling the necessary documentation for agencies. Selected agency officials acknowledged that the certification process is moving in that direction with the SES-PAAT, which is an electronic tool. For agencies completing the SES-PAAT, OPM requested five copies of the completed tool and supporting documentation, such as the sample of performance plans and organizational performance assessments. A senior human resources official who is not using the SES-PAAT said her agency submitted to OPM seven copies of its certification package for 2007. While OPM has informally encouraged agencies through its executive resources forums or e-mail communications to submit their certification submissions electronically, according to a senior OPM official, OPM has not directly discussed electronic certification submissions in its guidance and submission instructions to the agencies. In addition, officials at three of the selected agencies said that OPM could lessen agencies’ reporting burden by requiring them to submit documentation to OPM and OMB only on those areas of the system where substantive changes have occurred since the previous certification submission. While OPM and OMB should receive completed certification submissions as part of their oversight responsibilities, moving to electronic documentation may help agencies more easily gather and submit the certification documents that change minimally from year to year. OPM and OMB officials both said they support electronic certification submissions and would not rule out a more electronic certification process in the future, starting with the SES-PAAT.
Overall, selected agency officials supported lengthening the certification coverage beyond 2 years once agencies have proved their systems are addressing the certification criteria. Currently, full certification lasts for 24 months. Specifically, the process of gathering the necessary documentation for certification submissions would be less burdensome to agencies if they had more time between recertification deadlines, according to a senior human resources official. The agency officials acknowledged that, prior to receiving an extended certification coverage period, their systems would have to be operating at the fully certified level. The lead OMB official overseeing the certification review process said OMB could potentially be supportive of lengthening the certification coverage, once agencies have their systems up and running and all agencies have full certification. Recognizing that extending coverage would require a statutory change, OPM is reviewing the recently passed law that, among other things, changes the certification coverage to up to 24 months from calendar-year-based coverage, and OPM is not taking a position on extending certification coverage at this time. OPM and OMB have taken important steps in overseeing the design and implementation of agencies’ senior executive performance-based pay systems. The selected agencies’ experiences with implementing their SES systems can help inform other agencies’ efforts to hold executives accountable for organizational results and link pay to performance. USAID has an opportunity to strengthen the link between organizational and individual performance by providing rating and reviewing officials with specific information on organizational performance.
For senior executives to lead the way in transforming their organizations to meet the complex challenges facing the nation, it will be important for them to have confidence that the SES performance-based pay system is operating as intended and that they will be rewarded according to their performance. In this regard, making meaningful distinctions among executives when assessing performance is important to the overall credibility of the SES performance-based pay system. Less than a third of senior executives governmentwide strongly agreed or agreed that bonuses or pay distinctions were meaningfully different among senior executives, according to OPM’s SES survey. OPM has an opportunity to help ensure that agencies are making meaningful distinctions in executive performance by strengthening its communication with agencies and executives on the importance of using a range of rating levels when assessing performance, while avoiding forced distributions. Additionally, communicating how OPM uses data and other tools in making certification decisions will be important so that agencies can make continuous improvements to their systems to support the development of a stronger performance culture and the attainment of their missions, goals, and objectives. Moving forward, it will also be important for OPM and OMB to identify ways to improve the certification process and make it more streamlined while ensuring that agencies have the guidance, tools, and training they need to implement effective performance appraisal and pay systems for their senior executives. To help ensure consistency and clarity in how organizational performance is considered in appraising executive performance, we recommend that the Administrator of USAID provide uniform organizational performance assessments to PRB members and other reviewing officials to help inform their appraisal recommendations for senior executives at the end of the performance appraisal cycle. 
To help improve agencies’ understanding of certain aspects of the certification decisions, we recommend that the Acting Director of OPM take action in two areas to strengthen OPM’s communication with agencies and executives: (1) the importance of making meaningful distinctions in performance while avoiding the use of forced distributions, and the message that a fully successful rating is valued and rewarded; and (2) how OPM uses the SES performance appraisal data and the correlation between ratings and performance pay in determining whether agencies are making meaningful distinctions based on relative performance as measured through the pay and performance differentiation certification criteria. In addition, to help improve the efficiency of the certification submission process for agencies, we recommend that the Acting Director of OPM and Director of OMB explore opportunities for streamlining the certification process, such as electronic submissions or lengthening the full certification coverage beyond 2 years for agencies that have received full certification. We provided a copy of the draft report to the Secretaries of Defense, Energy, State, and the Treasury; the Commissioners of NRC; the Administrator of USAID; the Acting Director of OPM; and the Director of OMB for their review and comment. DOE had no comments on the draft report. We received written comments from DOD and OPM, which are included in appendixes III and IV. NRC, OMB, State, Treasury, and USAID provided clarifying and technical comments, which we incorporated as appropriate. With respect to making meaningful distinctions in performance, Treasury officials provided broad comments on the challenge agencies face in using the full range of rating levels given that executives are high achievers and had to exhibit exceptional performance to enter the SES. The officials suggested that meaningful distinctions cannot be defined simply by the distribution of performance ratings.
Our findings and recommendations about making meaningful distinctions acknowledged challenges, but also the need for additional communication from OPM in this area. Regarding our recommendations, USAID, OPM, and OMB expressed general agreement. The Acting Director stated that OPM looks forward to working with agencies and OMB to find ways to further improve communications with the agencies concerning the certification process. OMB generally agreed with our assessment and recommendation regarding the possibilities of streamlining the certification process to improve efficiency and potentially extending full certification coverage beyond 2 years. OMB stated that it agrees with OPM that careful review of the newly passed law and its effect will be necessary before considering such an extension. Regarding our discussion of pay compression, OPM stated that it is not comfortable with the identification of tiers as a means to address SES pay compression. The Acting Director commented that using tiers results in salaries clustering around points in the salary range rather than only at the top of the range. While we recognize OPM’s concern about agencies’ use of tiers, we are not recommending the use of tiers as a way for agencies to address future problems with pay compression and have revised the language in the report to clarify this point. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees; the Secretaries of Defense, Energy, State, and the Treasury; the Commissioners of NRC; the Administrator of USAID; the Acting Director of OPM; the Director of OMB; and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions regarding this report, please contact me at (202) 512-6806 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines selected agencies’ policies and procedures for their career Senior Executive Service (SES) performance appraisal and pay systems in (1) factoring organizational performance into senior executive performance appraisal decisions, (2) making meaningful distinctions in assessing and rewarding senior executive performance, and (3) building safeguards into senior executive performance appraisal and pay systems. In addition, this report examines how the Office of Personnel Management (OPM) and the Office of Management and Budget (OMB) are providing oversight in the certification of the senior executive performance-based pay system through their statutory roles. We selected the U.S. Departments of Defense, Energy, State, and the Treasury; the U.S. Nuclear Regulatory Commission (NRC); and the United States Agency for International Development based on variations in agency mission, organizational structure, size of their career SES workforces to reflect agencies with a large, average, and small number of executives; and results of their SES performance appraisal systems in terms of the percentage of SES rated at the highest rating levels and the percentage that received performance awards or bonuses from fiscal years 2004 to 2006, according to OPM’s governmentwide data reports. We focused on career members of the SES because they represent the majority of all SES appointments governmentwide. 
To address our first and third objectives, we analyzed selected agencies’ documents on their SES performance appraisal and pay systems including performance management policies, directives, guidance, and other related information; performance planning and appraisal templates; briefings, memos, and other materials used to communicate aggregate appraisal results to senior executives; and the submission documents to OPM and OMB for certification in 2007. We also interviewed cognizant senior-level officials in the human capital offices at the selected agencies regarding their agencies’ SES systems and senior-level OPM and OMB officials on how organizational performance and safeguards are integrated into the certification process. To address our second objective, we analyzed aggregate SES basic pay, performance rating, bonus, and pay adjustment data as provided by the agencies for fiscal year 2007. We defined our universe of analysis as career senior executives who received ratings. In calculating the percentage of eligible senior executives who received bonuses (cash awards) or pay adjustments (increases to basic pay) and average (or mean) amounts, we excluded executives who received a rating less than “fully successful” (level 3), as applicable, from the eligible population since those executives are not eligible to receive bonuses, according to the selected agencies’ policies, or pay increases, according to OPM regulation. We also excluded senior executives at NRC, Treasury, and State who received Presidential Rank Awards from our calculations of percentages of eligible SES members receiving bonuses and average amounts because those individuals were not considered for bonuses that year, according to the agencies’ policies. 
In order to have consistency in our analysis across the selected agencies, we included senior executives who were rated but left their positions (because of retirement, attrition, or assignment to a lower grade) prior to performance payouts being made. The agencies’ policies and practices varied in whether or not senior executives who retired were eligible for performance payouts. In analyzing the basic pay rates, we used the rates of basic pay for rated, career senior executives after pay adjustments were made for the fiscal year 2007 appraisal cycle. We compared these basic pay amounts to the governmentwide SES basic pay range and the Executive Schedule (EX) pay levels, specifically EX-level II (the SES pay cap) and EX-level III (within the SES pay band), for calendar year 2008. EX pay rates are set at the beginning of each calendar year, and because performance payouts from the completed appraisal cycle are usually paid out in January, agencies are able to use the next calendar year’s SES pay rates for their pay ranges. In calculating the percentage of senior executives paid at or above EX-level III, we included those SES paid at EX-level II in our calculations. For this objective, we also analyzed relevant agency policies and guidance regarding performance ratings, bonuses, and pay adjustments, and interviewed selected senior-level agency officials from the human capital offices. For the governmentwide perspective, we reviewed OPM’s Report on Senior Executive Pay for Performance for Fiscal Year 2007 to identify relevant governmentwide performance data as a comparison point for the selected agencies. We analyzed OPM memos and guidance on the certification process and criteria, and interviewed senior-level OPM and OMB officials who oversee the certification review process on how the SES performance data are used as part of the certification process.
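The eligibility screening and percentage calculations described above can be sketched as follows. The records, rating scale, and dollar amounts below are hypothetical, not actual agency data, and the field layout is our own illustration:

```python
# Hypothetical career SES records for one agency and appraisal cycle
execs = [
    {"rating": 5, "bonus": 12000, "rank_award": False},
    {"rating": 5, "bonus": 0,     "rank_award": True},   # Rank Award recipient: excluded
    {"rating": 4, "bonus": 8000,  "rank_award": False},
    {"rating": 4, "bonus": 0,     "rank_award": False},
    {"rating": 3, "bonus": 5000,  "rank_award": False},
    {"rating": 2, "bonus": 0,     "rank_award": False},  # below fully successful: ineligible
]

# Eligible pool: rated fully successful (level 3) or above, no Rank Award that year
eligible = [e for e in execs if e["rating"] >= 3 and not e["rank_award"]]
recipients = [e for e in eligible if e["bonus"] > 0]

pct_receiving = 100 * len(recipients) / len(eligible)
avg_bonus = sum(e["bonus"] for e in recipients) / len(recipients)
```

The same pattern applies to pay adjustments, with the eligibility rule drawn from OPM regulation rather than from agency bonus policy.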
We checked the agency data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness. We also reviewed related agency documentation, interviewed agency officials knowledgeable about the data, and brought to the attention of these officials any concerns or discrepancies we found with the data for correction or updating. The agency officials confirmed the correctness of the data or in some cases provided corrections to the data, which we used in our analysis. On the basis of these procedures, we believe the data are sufficiently reliable for use in the analyses presented in this report. For our fourth objective, we analyzed OPM’s and OMB’s guidance, memos, and other documents regarding the certification process and criteria; interviewed senior-level OPM and OMB officials involved in the certification process regarding their oversight of the certification process; and interviewed senior-level agency officials from the selected agencies’ human capital offices to obtain their perspectives on OPM and OMB oversight. As part of this objective, we are reporting on the status of OPM’s progress toward addressing recommendations from our past work on the SES performance-based pay system and certification process. We conducted our work from October 2007 through November 2008 in accordance with generally accepted government auditing standards. In November 2003, Congress authorized a new performance-based pay system for members of the Senior Executive Service. Agencies are to base pay adjustments for senior executives on individual performance and contributions to the agency’s performance by considering the individual’s accomplishments and such things as unique skills, qualifications, or competencies of the individual and the individual’s significance to the agency’s mission and performance, as well as the individual’s current responsibilities. 
If an agency’s senior executive performance appraisal system is certified by the Office of Personnel Management (OPM) and the Office of Management and Budget (OMB) concurs, an agency can raise the pay cap for its senior executives to $172,200 for basic pay (Level II of the Executive Schedule) and $221,100 for total compensation (the total annual compensation payable to the Vice President). To qualify for senior executive pay flexibilities, agencies’ performance appraisal systems are evaluated against nine certification criteria and any additional information that OPM and OMB may require to make determinations regarding certification. As shown in table 4, the certification criteria jointly developed by OPM and OMB are broad principles that position agencies to use their pay systems strategically to support the development of a stronger performance culture and the attainment of their mission, goals, and objectives. In addition to the individual named above, Belva Martin, Assistant Director; Amber Edwards; Janice Latimer; Donna Miller; Meredith Moore; Mary Robison; Sabrina Streagle; and Greg Wilmoth made major contributions.
Agencies are allowed to raise pay caps for their Senior Executive Service (SES) members if the Office of Personnel Management (OPM) certifies and the Office of Management and Budget (OMB) concurs that their appraisal systems meet applicable criteria. As requested, this report examines selected agencies' policies and procedures for (1) factoring organizational performance into SES appraisal decisions, (2) making meaningful distinctions in SES performance, and (3) building safeguards into SES systems. Also, this report examines OPM and OMB oversight in certifying the pay systems through their statutory roles. GAO selected six agencies based on variations in mission, organizational structure, and number of career SES. GAO analyzed the agencies' policies and fiscal year 2007 aggregate SES appraisal data and OPM guidance. All of the selected agencies--the U.S. Departments of Defense, Energy, State, and the Treasury; U.S. Nuclear Regulatory Commission; and USAID--have policies in place that require senior executives' performance expectations to be aligned with organizational results and organizational performance to be factored into appraisal decisions. While almost all of the agencies provided specific information on organizational performance and communicated the importance of considering it, USAID did not provide its performance review board (PRB) members and other reviewing officials with any specific information on organizational performance to help inform their executive appraisal recommendations. All of the selected agencies have multiple rating levels in place for assessing senior executive performance. For the fiscal year 2007 appraisal cycle, senior executives were concentrated at the top two rating levels, which raises questions about the extent to which meaningful distinctions based on relative performance are being made and how OPM applies this criterion.
OPM has an opportunity to strengthen its communication with agencies and executives on the importance of using a range of rating levels when assessing performance while avoiding the use of forced distributions. All of the selected agencies have safeguards, including higher level reviews of performance appraisal recommendations, PRBs, and transparency in communicating the aggregate results, although agencies varied in how they implemented such safeguards. While generally satisfied with OPM's and OMB's oversight, officials at the selected agencies said OPM could strengthen its communication with agencies and executives on how it uses the SES performance appraisal data and correlation between ratings and performance pay in determining whether agencies are making meaningful distinctions based on relative performance. Further communication from OPM is important in order for agencies to have a better understanding of how they are being held accountable for these certification criteria and make the necessary improvements to their systems to maintain certification. Further, senior-level officials at the selected agencies suggested options--such as moving to an electronic submission process and lengthening the certification coverage beyond 2 years once their systems are operating at the fully certified level--to increase the efficiency of the process. Moving forward, it will be important for OPM and OMB to identify ways to improve the certification process and make it more streamlined while ensuring that agencies have the guidance, tools, and training they need to implement effective performance appraisal and pay systems for their senior executives.
Since the early 1990s, the unprecedented growth in computer interconnectivity, most notably growth in use of the Internet, has revolutionized the way our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous in terms of facilitating communications, business processes, and access to information. However, without proper safeguards, this widespread interconnectivity poses enormous risks to our computer systems and, more importantly, to the critical operations and infrastructures they support. While attacks to date have not caused widespread or devastating disruptions, the potential for more catastrophic damage is significant. Official estimates show that over 100 countries already have or are developing computer attack capabilities. Hostile nations or terrorists could use cyber-based tools and techniques to disrupt military operations, communications networks, and other information systems or networks. The National Security Agency has determined that potential adversaries are developing a body of knowledge about U.S. systems and about methods to attack these systems. According to Defense officials, these methods, which include sophisticated computer viruses and automated attack routines, allow adversaries to launch untraceable attacks from anywhere in the world. According to a leading security software designer, viruses in particular are becoming more disruptive for computer users. In 1993 only about 10 percent of known viruses were considered destructive, harming files and hard drives. But now about 35 percent are regarded as harmful. Information sharing and coordination among organizations are central to producing comprehensive and practical approaches and solutions to these threats. First, having information on threats and on actual incidents experienced by others can help an organization better understand the risks it faces and determine what preventative measures should be implemented. 
Second, more urgent, real-time warnings can help an organization take immediate steps to mitigate an imminent attack. Lastly, information sharing and coordination are important after an attack has occurred to facilitate criminal investigations, which may cross jurisdictional boundaries. Such after-the-fact coordination could also be useful in recovering from a devastating attack, should such an attack ever occur. The recent episode of the ILOVEYOU computer virus in May 2000, which affected governments, corporations, media outlets, and other institutions worldwide, highlighted the need for greater information sharing and coordination. Because information sharing mechanisms were not able to provide timely enough warnings against the impending attack, many entities were caught off guard and forced to take their networks off-line for hours. Getting the word out within some federal agencies themselves also proved difficult. At the Department of Defense, for example, the lack of teleconferencing capability slowed the response effort because Defense components had to be called individually. The National Aeronautics and Space Administration (NASA) had difficulty communicating warnings when e-mail services disappeared, and while backup communication mechanisms are in place, NASA officials told us that they are rarely tested. We also found that the few federal components that either discovered or were alerted to the virus early did not effectively warn others. For example, officials at the Department of the Treasury told us that the U.S. Customs Service received an Air Force Computer Emergency Response Team (AFCERT) advisory early in the morning of May 4, but that Customs did not share this information with other Treasury bureaus. 
The federal government recognized several years ago that addressing computer-based risks to our nation’s critical infrastructures required coordination and cooperation across federal agencies and among public- and private-sector entities and other nations. In May 1998, following a report by the President’s Commission on Critical Infrastructure Protection that described the potential devastating implications of poor information security from a national perspective, the government issued Presidential Decision Directive (PDD) 63. Among other things, this directive tasked federal agencies with developing critical infrastructure protection plans and establishing related links with private industry sectors. It also required that certain executive branch agencies assess the cyber vulnerabilities of the nation’s critical infrastructures—information and communications; energy; banking and finance; transportation; water supply; emergency services; law enforcement; and public health, as well as those authorities responsible for continuity of federal, state, and local governments. A variety of activities have been undertaken in response to PDD 63, including development and review of individual agency critical infrastructure protection plans, identification and evaluation of information security standards and best practices, and efforts to build communication links. In January 2000 the White House released its National Plan for Information Systems Protection as a first major element of a more comprehensive effort to protect the nation’s information systems and critical assets from future attacks. The plan focuses largely on federal efforts being undertaken to protect the nation’s critical cyber-based infrastructure. Subsequent versions are to address protecting other elements of the nation’s infrastructure, including those pertaining to the physical infrastructure and specific roles and responsibilities of state and local governments and the private sector.
Moreover, a number of government and private sector organizations have already been established to facilitate information sharing and coordination. These range from groups that disseminate information on immediate threats and vulnerabilities, to those that seek to facilitate public-private sector information sharing on threats pertaining to individual infrastructure sectors, and those that promote coordination on an international scale. At the federal level, for example, the National Infrastructure Protection Center (NIPC), located at the Federal Bureau of Investigation (FBI), is to serve as a focal point in the federal government for gathering information on threats as well as facilitating and coordinating the federal government’s response to incidents impacting key infrastructures. It is also charged with issuing attack warnings to private sector and government entities as well as alerts to increases in threat conditions. The Federal Computer Incident Response Capability (FedCIRC) is a collaborative partnership of computer security and law enforcement professionals established to handle computer security incidents and to provide both proactive and reactive security services for the federal government. In addition, the National Institute of Standards and Technology (NIST) is working to facilitate information sharing in the security community by building a database containing detailed information on computer attacks and the Critical Infrastructure Assurance Office (CIAO) is working to coordinate private sector participation in information gathering in the area of cyber assurance. The Administration is also undertaking efforts to facilitate information sharing with other nations. 
Examples of other organizations focusing on information sharing and coordination include the following:
- Carnegie Mellon University’s CERT Coordination Center, which is charged with establishing a capability to quickly and effectively coordinate communication among experts in order to limit damage, respond to incidents, and build awareness of security issues across the Internet community.
- The System Administration, Networking, and Security (SANS) Institute, which is a cooperative research and education organization through which more than 96,000 system administrators, security professionals, and network administrators share the lessons they are learning and find solutions for the challenges they face.
- The National Coordinating Center for Telecommunications, which is a joint industry/government organization that focuses on facilitating information sharing between the telecommunications industry and government.
- The Financial Services Information Sharing and Analysis Center, which is a similar organization that exclusively serves the banking, securities, and insurance industries.
- Agora, which is a forum composed of more than 300 people from approximately 100 companies and 45 government agencies, including Microsoft, Blue Shield, the FBI, U.S. Secret Service, U.S. Customs Service agents, and the Royal Canadian Mounted Police, as well as local police, county prosecutors, and computer professionals from the Pacific Northwest. Members voluntarily share information on common computer security problems, best practices to counter them, protecting electronic infrastructures, and educational opportunities.
- The Forum of Incident Response and Security Teams (FIRST), which provides a closed forum for incident response and security teams from 19 countries to share experiences, exchange information related to incidents, and promote preventative activities.
- The International Organization on Computer Evidence, which provides an international forum for law enforcement agencies to exchange information concerning computer crime investigation and related forensic issues.
Developing the information sharing and coordination capabilities needed to effectively deal with computer threats and actual incidents is complex and challenging but essential. Data on possible threats—ranging from viruses, to hoaxes, to random threats, to news events, to computer intrusions—must be continually collected and analyzed from a wide spectrum of globally distributed sources. Moreover, once an imminent threat is identified, appropriate warnings and response actions must be effectively coordinated among government agencies, the private sector, and, when appropriate, other nations. It is important that this function be carried out as effectively, efficiently, and quickly as possible in order to ensure continuity of operations as well as minimize disruptions. At the same time, it is not possible to build an overall, comprehensive picture of activity on the global information infrastructure. Networks themselves are too big, they are growing too quickly, and they are continually being reconfigured and reengineered. As a result, it is essential that strong partnerships be developed among a wide range of stakeholders in order to ensure that the right data are at the right place at the right time. Creating partnerships for information sharing and coordination is a formidable task. Trust needs to be established among a broad range of parties with varying interests and expectations, procedures for gathering and sharing information need to be developed, and technical issues need to be addressed. Moreover, if the federal government itself is going to be a credible player in response coordination, it needs to have its own systems and assets well protected.
This means overcoming significant and pervasive security weaknesses at each of the major federal agencies and instituting governmentwide controls and mechanisms needed to provide effective oversight, guidance, and leadership. Perhaps most importantly, this activity needs to be guided by a comprehensive strategy to ensure that it is effective, to avoid unnecessary duplication of effort, and to maintain continuity. I would like to discuss each of these challenges in more detail, as successfully addressing them is essential to getting the most from information sharing mechanisms currently operating as well as establishing new ones. A key element of the success of information sharing partnerships is developing trusted relationships among the broad range of stakeholders involved with critical infrastructure protection. (See figure 1 for examples of these stakeholders.) Mechanisms that are jointly designed, built, and staffed by the involved parties are the most likely to obtain critical buy-in and acceptance by industry and others. Each partner must ensure that the sharing activity is equitable and that it provides value exceeding the cost of information sharing. However, this can be difficult in the face of varying interests, concerns, and expectations. The private sector, for example, is motivated by business concerns and profits, whereas the government is driven by national and economic security concerns. These disparate interests can lead to profoundly different views and perceptions about threats, vulnerabilities, and risks, and they can affect the level of risk each party is willing to accept and the costs each is willing to bear. Moreover, as we testified before this Subcommittee in June, concerns have been raised that industry could potentially face antitrust violations for sharing information with other industry partners, subject its information to Freedom of Information Act (FOIA) disclosures, or face potential liability concerns for information shared in good faith.
Further, there is a concern that an inadvertent release of confidential business material, such as trade secrets or proprietary information, could damage reputations, lower consumer confidence, hurt competitiveness, and decrease market shares of firms. Some of these concerns are addressed by this Subcommittee’s proposed Cyber Security Information Act of 2000 (H.R. 4246). Specifically, the bill would protect information being provided by the private sector from disclosure by federal entities under FOIA or disclosure to or by any third party. It would prohibit the use of information by any federal and state organization or any third party in any civil actions. And it would enable the President to establish and terminate working groups composed of federal employees for the purposes of engaging outside organizations in discussions to address and share information about cyber security. By removing these concerns about sharing information on critical infrastructure threats, H.R. 4246 can facilitate private-public partnerships and help spark the dialogue needed to identify threats and vulnerabilities and to develop response strategies. For several reasons, the private sector may also have reservations about sharing information with law enforcement agencies. For example, law enforcement entities have strict rules regarding evidence in order to preserve its integrity for prosecuting cases. Yet, complying with law enforcement procedures can be costly because it requires training, implementing proper auditing and control mechanisms, and following proper procedures. Additionally, a business may not wish to report an incident if it believes that its image might be tarnished. For national security reasons, the government itself may be reluctant to share classified information that could be of value to the private sector in deterring or thwarting electronic intrusions and information attacks. 
Moreover, declassifying and sanitizing such data takes time, which could affect time-critical operations. Nevertheless, until the government provides detailed information on specific threats and vulnerabilities, the private sector will not be able to build a business case to justify information sharing and will likely remain reluctant to share its own information. A significant amount of work still needs to be done just in terms of ensuring that the right type of information is being collected and that there are effective and secure mechanisms for collecting, analyzing, and sharing it. This requires agreeing, in advance, on the types of data to be collected and reported as well as on the level of detail. Again, this can be difficult given varying interests and expectations. The private sector, for example, may want specific threat or vulnerability information so that immediate actions can be taken to avert an intrusion. Law enforcement agencies may want specific information on perpetrators and particular aspects of the attack, as well as the intent of the attack and the consequences of or damages due to the attack. At the same time, many computer security professionals may want the technical details that enable a user to compromise a computer system in order to determine how to detect such actions. After determining what types of information to collect and report, guidelines and procedures need to be established to effectively collect and disseminate data and contact others during an incident. Among other things, this involves identifying the best mechanisms for disseminating advisories and urgent notices, such as e-mail, fax, voice messages, pagers, or cell phones; designating points-of-contact; identifying the specific responsibilities of information-sharing partners; and deciding whether and how information should be shared with outside organizations. 
Working through these and other issues has already proven to be a formidable task for some information-sharing organizations. According to the CERT Coordination Center, for example, it has taken years for incident response and security teams to develop comprehensive policies and procedures for their own internal operations because there is little or no experience on which to draw. Moreover, the incident response team community as a whole lacks policies and procedures to support operations among teams. According to the Center, progress typically comes to a halt when teams become overwhelmed by the number of issues that need to be addressed before they can reach agreement on basic factors such as terminology, definitions, and priorities. Significant resources, knowledge, skills, and abilities clearly need to be brought together to develop mechanisms that can quickly and accurately collect, correlate, and analyze information and coordinate response efforts. But presently, there is a shortage of such expertise. At the federal level, for example, we have observed a number of instances where agency staff did not even have the skills needed to carry out their own computer security responsibilities or to oversee contractor activities. Additionally, according to the CERT Coordination Center, there are not enough suitably trained staff in the incident response community to implement an effective and reliable global incident response infrastructure. The President’s National Plan for Information Systems Protection recognizes this dilemma and proposes a program to develop a cadre of highly skilled computer science and information security personnel. As this program is implemented, it will be important for the federal government to ensure that capabilities are developed for information sharing and response mechanisms in addition to individual agency computer security programs.
At the federal level, there is also a pressing need for better computer network intrusion detection monitoring systems to detect unauthorized and possible criminal activity both within and across government agencies. Under the President’s National Plan for Information Systems Protection, the federal government is working to design and implement highly automated security and intrusion detection capabilities for federal systems. Such systems are to provide (1) intrusion detection monitors on key nodes of agency systems, (2) access and activity rules for authorized users and a scanning program to identify anomalous or suspicious activity, (3) enterprise-wide management programs that can identify what systems are on the network, determine what they are doing, enforce access and activity rules, and potentially apply security upgrades, and (4) techniques to analyze operating system code and other software to determine if malicious code, such as logic bombs, has been installed. As we testified in February, available tools and methods for analyzing and correlating network traffic are still evolving and cannot yet be relied on to serve as an effective “burglar alarm,” as envisioned by the plan. While holding promise for the future, such tools and methods raise many questions regarding technical feasibility, cost-effectiveness, and the appropriate extent of centralized federal oversight. Accordingly, these efforts will merit close congressional oversight as they are implemented. If our government is going to play a key role in overcoming these challenges and spurring effective information sharing and coordination, it must be a model for information security and critical infrastructure protection, which means having its own systems and assets adequately protected. Unfortunately, we have a long way to go before we can point to our government as a model for others to emulate.
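The second capability listed above (access and activity rules for authorized users, with a scanning program to flag anomalous activity) can be illustrated with a minimal sketch. The user names, actions, and log format below are invented for illustration only; a real intrusion detection system would correlate far richer data.

```python
# Minimal sketch of rule-based activity scanning: flag log entries whose
# user is unknown or whose action falls outside that user's access rules.
# User names, actions, and the log format here are hypothetical.

ACCESS_RULES = {
    "alice": {"read", "write"},  # actions alice is authorized to perform
    "bob": {"read"},
}

def scan(log_entries):
    """Return the (user, action) entries that violate the access rules."""
    alerts = []
    for user, action in log_entries:
        allowed = ACCESS_RULES.get(user, set())  # unknown users get no rights
        if action not in allowed:
            alerts.append((user, action))
    return alerts

log = [("alice", "write"), ("bob", "delete"), ("mallory", "read")]
print(scan(log))  # flags bob's unauthorized delete and the unknown user mallory
```

Even a simple filter like this presumes agreement on what gets logged and what counts as anomalous, which is precisely the kind of advance agreement the testimony describes as difficult.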
As noted in previous testimonies and reports, virtually every major federal agency has poor computer security. Federal agencies are at risk of having their key systems and information assets compromised or damaged both by computer hackers and by unauthorized activity by insiders. Recent audits conducted by GAO and agency inspectors general show that 22 of the largest federal agencies have significant computer security weaknesses, ranging from poor controls over access to sensitive systems and data, to poor controls over software development and changes, to nonexistent or weak continuity of service plans. While a number of factors have contributed to weak federal information security, such as insufficient understanding of risks, technical staff shortages, and a lack of system and security architectures, the fundamental underlying problem is poor security program management. Agencies have not established the basic management framework needed to effectively protect their systems. Based on our 1998 study of organizations with superior security programs, such a framework involves managing information security risks through a cycle of risk management activities that include (1) assessing risk and determining protection needs, (2) selecting and implementing cost-effective policies and controls to meet these needs, (3) promoting awareness of policies and controls and of the risks that prompted their adoption, and (4) implementing a program of routine tests and examinations for evaluating the effectiveness of policies and related controls. Additionally, a strong central focal point can help ensure that the major elements of the risk management cycle are carried out and can serve as a communications link among organizational units. While individual agencies bear primary responsibility for the information security associated with their own operations and assets, there are several areas where governmentwide criteria and requirements also need to be strengthened.
Specifically, there is a need for routine, periodic independent audits of agency security programs to provide a basis for measuring agency performance and information for strengthened oversight. There is also a need for more prescriptive guidance regarding the level of protection that is appropriate for agency systems. Additionally, as mentioned earlier, gaps in technical expertise should be addressed. A comprehensive, cohesive strategy is needed to ensure that our information security and critical infrastructure protection efforts are effective and that we build on efforts already underway. However, developing and implementing such a strategy will require strong federal leadership. Such leadership will be needed to press individual federal agencies to institute the basic management framework needed to make the federal government a model for critical infrastructure protection and to foster the governmentwide mechanisms needed to facilitate oversight and guidance. In addition, leadership will be needed to ensure that the other challenges discussed today are met. The National Plan for Information Systems Protection is a move towards developing such a framework. However, it does not address a broad range of concerns that go beyond federal efforts to protect the nation’s critical cyber-based infrastructures. In particular, the plan does not address the international aspects of critical infrastructure protection or the specific roles industry and state and local governments will play. The Administration is working toward issuing a new version of the plan this fall that addresses these issues. However, there is no guarantee that this version will be completed by then or that it will be implemented in a timely manner. Additionally, a sound long-term strategy to protect U.S.
critical infrastructures depends not only on implementation of our national plan, but on appropriately coordinating our plans with those of other nations, establishing and maintaining a dialogue on issues of mutual importance, and cooperating with other nations and infrastructure owners. An important element of such a plan will be defining and clarifying the roles and responsibilities of organizations—especially federal entities—serving as central repositories of information or as coordination focal points. As discussed earlier, there are numerous organizations currently collecting, analyzing, and disseminating data or guidance on computer security vulnerabilities and incidents, including NIST, the NIPC, FedCIRC, the Critical Infrastructure Assurance Office, the federal CIO Council, and various units within the Department of Defense. The varying types of information and analysis that these organizations provide can be useful. However, especially in emergency situations, it is important that federal agencies and others clearly understand the roles of these organizations, which ones they should contact if they want to report a computer-based attack, and which ones they can rely on for information and assistance. Clarifying organizational responsibilities can also ensure a common understanding of how the activities of these many organizations interrelate, who should be held accountable for their success or failure, and whether they will effectively and efficiently support national goals. Moreover, the need for such clear delineation of responsibilities will be even more important as international cooperative relationships in this area mature. If such roles and responsibilities are not clearly defined and coordinated under a comprehensive plan, there is a risk that these efforts will be unfocused, inefficient, and ineffective. In conclusion, a number of positive actions have already been taken to provide a coordinated response to computer security threats.
In particular, the federal government is in the process of establishing mechanisms for gathering information on threats, facilitating and coordinating response efforts, sharing information with the private sector, and working to build collaborative partnerships. Other stakeholders are also working to facilitate public-private information sharing on threats in individual sectors and to promote international coordination. Nevertheless, there are formidable challenges that need to be overcome to strengthen ongoing efforts and to work toward building a more comprehensive and effective information-sharing and coordination infrastructure. In particular, trust needs to be established among a broad range of stakeholders, questions on the mechanics of information sharing and coordination need to be resolved, roles and responsibilities need to be clarified, and technical expertise needs to be developed. Addressing these challenges will require concerted efforts by senior executives—both public and private—as well as technical specialists, law enforcement and national security officials, and providers of network services and other key infrastructure services, among others. Moreover, it will require stronger leadership by the federal government to develop a comprehensive strategy for critical infrastructure protection, work through concerns and barriers to sharing information, and institute the basic management framework needed to make the federal government a model of critical infrastructure protection. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. We performed our review from July 10 through July 24, 2000, in accordance with generally accepted government auditing standards. For information about this testimony, please contact Jack L. Brock, Jr., at (202) 512-6240. 
Jean Boltz, Cristina Chaplain, Mike Gilmore, Danielle Hollomon, Paul Nicholas, and Alicia Sommers made key contributions to this testimony. (512012)
Pursuant to a congressional request, GAO discussed the challenges of developing effective information sharing and coordination strategies needed to deal with computer security threats. GAO noted that: (1) developing the information sharing and coordination capabilities needed to effectively deal with computer threats and actual incidents is complex and challenging but essential; (2) data on possible threats--ranging from viruses, to hoaxes, to random threats, to news events, and computer intrusions--must be continually collected and analyzed from a wide spectrum of globally distributed sources; (3) once an imminent threat is identified, appropriate warnings and response actions must be effectively coordinated among government agencies, the private sector, and, when appropriate, other nations; (4) it is important that this function be carried out as effectively, efficiently, and quickly as possible in order to ensure continuity of operations as well as minimize disruptions; (5) at the same time, it is not possible to build an overall, comprehensive picture of activity on the global information infrastructure; (6) networks themselves are too big, they are growing too quickly, and they are continually being reconfigured and reengineered; (7) as a result, it is essential that strong partnerships be developed between a wide range of stakeholders in order to ensure that the right data are at the right place at the right time; (8) creating partnerships for information sharing and coordination is a formidable task; (9) trust needs to be established among a broad range of parties with varying interests and expectations, procedures for gathering and sharing information need to be developed, and technical issues need to be addressed; (10) if the federal government itself is going to be a credible player in response coordination, it needs to have its own systems and assets well protected; (11) this means overcoming significant and pervasive security weaknesses at each of the major federal
agencies and instituting governmentwide controls and mechanisms needed to provide effective oversight, guidance, and leadership; and (12) perhaps most importantly, this activity needs to be guided by a comprehensive strategy to ensure that it is effective, to avoid unnecessary duplication of effort, and to maintain continuity.
VA pays basic compensation benefits to veterans with disabilities resulting from injuries or diseases incurred or aggravated while on active military duty; such diseases and injuries are called service-connected disabilities. VA rates the severity of all service-connected disabilities by using its Schedule for Rating Disabilities. The schedule lists types of disabilities and assigns each disability a percentage rating, which is intended to represent the average earnings impairment the veteran would experience in civilian occupations because of the disability. All veterans awarded service-connected disabilities are assigned single or combined (in the case of multiple disabilities) ratings ranging from 0 to 100 percent, in increments of 10 percent, based on the rating schedule; such a rating is known as a schedular rating. Disability compensation can be increased if VA determines that the veteran is unemployable (not able to engage in substantially gainful employment) because of the service-connected disability. Under VA’s unemployability regulations, the agency can assign a total disability rating of 100 percent to veterans who cannot perform substantial gainful employment because of service-connected disabilities, even though their schedular rating is less than 100 percent. To qualify for unemployability benefits, a veteran must have a single service-connected disability of 60 percent or more, or multiple disabilities with a combined rating of 70 percent or more, with at least one of the disabilities rated 40 percent or more. VA can waive the minimum ratings requirement and grant unemployability benefits to a veteran with a lower rating; this is known as an extra-schedular rating. Staff at VA’s regional offices make virtually all eligibility decisions for disability compensation benefits, including individual unemployability (IU) benefits. The 57 VA regional offices use nonmedical rating specialists to evaluate veterans’ eligibility for these benefits.
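The numeric thresholds above lend themselves to a small illustration. The sketch below combines multiple ratings multiplicatively, in the spirit of VA's combined ratings table (38 C.F.R. 4.25), and then applies the stated IU thresholds. It is a deliberate simplification: the actual table works stepwise on whole numbers and has its own rounding conventions, so treat this as an approximation, not VA's method.

```python
# Simplified sketch of combined disability ratings and the IU threshold
# test. VA's actual combined ratings table (38 C.F.R. 4.25) works stepwise
# on whole numbers; this multiplicative version only approximates it.

def combined_rating(ratings):
    """Combine disability percentages; result rounded to the nearest 10."""
    efficiency = 100.0
    for r in sorted(ratings, reverse=True):
        efficiency *= (100 - r) / 100.0  # each disability reduces remaining capacity
    raw = 100 - efficiency
    return int(raw / 10 + 0.5) * 10  # round to the nearest 10 percent

def iu_threshold_met(ratings):
    """Stated IU criteria: a single disability rated 60% or more, or a
    combined rating of 70% or more with at least one disability at 40%."""
    if len(ratings) == 1:
        return ratings[0] >= 60
    return combined_rating(ratings) >= 70 and max(ratings) >= 40

print(combined_rating([60, 40]))       # 60, plus 40% of the remaining 40 -> 76, rounds to 80
print(iu_threshold_met([30, 30, 30]))  # combined 70, but no single 40% rating -> False
```

Note how three 30 percent ratings combine to 70 percent yet fail the IU test, which is exactly the "at least one rated 40 percent or more" condition in the text; VA's extra-schedular waiver exists for such cases.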
Upon receipt of an application for compensation benefits, the rating specialist typically refers the veteran to a VA medical center or clinic for an exam. Based on the medical examination and other available information, the rater must first determine which of the veteran’s conditions are or are not service-connected. For service-connected conditions, the rater compares the diagnosis with the rating schedule to assign a disability rating. Along with medical records, raters may also obtain other records to evaluate an IU claim. VA may require veterans to furnish an employment history for the 5-year period preceding the date on which the veteran claims to have become too disabled to work and for the entire time after that date. VA guidance also requires that raters request basic employment information from each employer during the 12-month period prior to the date the veteran last worked. In addition, if the veteran has received services from VA’s VR&E program or Social Security disability benefits, the rater may also request and review related information from these organizations. Once VA grants unemployability benefits, a veteran may continue to receive the benefits while working if VA determines that the work is only marginal employment rather than substantially gainful employment. Marginal employment exists when a veteran’s annual earned income does not exceed the annual poverty threshold for one person as determined by the U.S. Census Bureau—$9,827 for 2004. Furthermore, if veterans are unable to maintain employment for 12 continuous months because of their service-connected disabilities, they may retain their IU benefits, regardless of the amount earned. After more than a decade of research showing that federal disability programs were in urgent need of attention and transformation, GAO placed modernizing federal disability programs on its high-risk list in January 2003.
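The marginal-employment test described above reduces to a single threshold comparison. The sketch below uses the 2004 one-person poverty threshold cited in the text as a default; the threshold changes yearly, so the figure is only a snapshot, and the function name is invented for illustration.

```python
# Sketch of the marginal-employment test: earned income at or below the
# one-person poverty threshold is marginal, so IU benefits continue.
# The 2004 threshold ($9,827) is the figure cited in the text.

POVERTY_THRESHOLD_2004 = 9827

def is_marginal_employment(annual_earned_income, threshold=POVERTY_THRESHOLD_2004):
    """Return True if earnings do not exceed the poverty threshold."""
    return annual_earned_income <= threshold

print(is_marginal_employment(9000))   # True: below the 2004 threshold
print(is_marginal_employment(15000))  # False: in the substantially gainful range
```

The 12-continuous-months rule in the text is a separate exception: a veteran who cannot sustain employment that long retains IU benefits regardless of this earnings test.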
Specifically, our research showed that the disability programs administered by VA and the Social Security Administration (SSA) lagged behind the scientific advances and economic and social changes that have redefined the relationship between impairments and work. For example, advances in medicine and technology have reduced the severity of some medical conditions and have allowed individuals to live with greater independence and function in work settings. Moreover, the nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment. Yet VA’s and SSA’s disability programs remain mired in concepts from the past—particularly the concept that impairment equates to an inability to work—and as such, we found that these programs are poorly positioned to provide meaningful and timely support for Americans with disabilities. In contrast, we found that a growing number of U.S. private insurance companies had modernized their programs to enable people with disabilities to return to work. In general, private insurer disability plans can provide short- or long-term disability insurance coverage, or both, to replace income lost by employees because of injuries and illnesses. Employers may choose to sponsor private disability insurance plans for employees either by self-insuring or by purchasing a plan through a private disability insurer. The three private disability insurers we reviewed recognized the potential for reducing disability costs through an increased focus on returning people with disabilities to productive activity. To accomplish this comprehensive shift in orientation, these insurers have begun developing and implementing strategies for helping people with disabilities return to work as soon as possible, when appropriate. 
The three private insurers we studied incorporate return-to-work considerations early in the assessment process to assist claimants in their recovery and in returning to work as soon as possible. With the initial reporting of a disability claim, these insurers immediately set up the expectation that claimants with the potential to do so will return to work. Identifying and providing services intended to enhance the claimants’ capacity to work are central to their process of deciding eligibility for benefits. Further, the insurers continue to periodically monitor work potential and provide return-to-work assistance to claimants as needed throughout the duration of the claim. Their ongoing assessment process is closely linked to a definition of disability that shifts over time from less to more restrictive—that is, from an inability to perform one’s own occupation to an inability to perform any occupation. After a claim is received, the private insurers’ assessment process begins with determining whether the claimant meets the initial definition of disability. In general, for the three private sector insurers we studied, claimants are considered disabled when, because of injury or sickness, they are limited in performing the essential duties of their own occupation and they earn less than 60 to 80 percent of their predisability earnings, depending upon the particular insurer. As part of determining whether the claimant meets this definition, the insurers compare the claimant’s capabilities and limitations with the demands of his or her own occupation and identify and pursue possible opportunities for accommodation— including alternative jobs or job modifications—that would allow a quick and safe return to work. A claimant may receive benefits under this definition of disability for up to 2 years. 
As part of the process of assessing eligibility according to the “own occupation” definition, insurers directly contact the claimant, the treating physician, and the employer to collect medical and vocational information and initiate return-to-work efforts, as needed. Insurers’ contacts with the claimant’s treating physician are aimed at ensuring that the claimant has an appropriate treatment plan focused, in many cases, on timely recovery and return to work. Similarly, insurers use early contact with employers to encourage them to provide workplace accommodations for claimants with the capacity to work. If the insurers find the claimant initially unable to return to his or her own occupation, they provide cash benefits and continue to assess the claimant to determine if he or she has any work potential. For those with work potential, the insurers focus on return to work before the end of the 2-year period, when, for all the private insurers we studied, the definition of disability becomes more restrictive. After 2 years, the definition shifts from an inability to perform one’s own occupation to an inability to perform any occupation for which the claimant is qualified by education, training, or experience. Claimants initially found eligible for benefits may be found ineligible under the more restrictive definition. The private insurers’ shift from a less to a more restrictive disability definition after 2 years reflects the changing nature of disability and allows a transitional period for insurers to provide financial and other assistance, as needed, to help claimants with work potential return to the workforce. During this 2-year period, the insurer attempts to determine the best strategy for managing the claim. Such strategies can include, for example, helping plan medical care or providing vocational services to help claimants acquire new skills, adapt to assistive devices to increase functioning, or find new positions. 
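The shifting definition described above can be summarized as a two-phase test. The sketch below is illustrative only: the 0.80 earnings threshold is one point in the 60-to-80 percent range the insurers used, the parameter names are invented, and real claim decisions involve far more judgment than a boolean check.

```python
# Illustrative two-phase disability definition used by the insurers studied:
# for the first 24 months, an "own occupation" limitation plus an earnings
# test; afterward, the stricter "any occupation" standard. Thresholds vary
# by insurer; the 0.80 default is one point in the 60-80% range cited.

def meets_definition(months_on_claim, limited_own_occupation,
                     limited_any_occupation, current_earnings,
                     predisability_earnings, earnings_threshold=0.80):
    """Return True if the claimant meets the definition applicable to the
    current phase of the claim."""
    if months_on_claim <= 24:
        earnings_test = current_earnings < earnings_threshold * predisability_earnings
        return limited_own_occupation and earnings_test
    return limited_any_occupation

# Eligible early in the claim, but not once the definition tightens:
print(meets_definition(6, True, False, 0, 50000))   # True
print(meets_definition(30, True, False, 0, 50000))  # False
```

The second call shows why insurers concentrate rehabilitation effort in the 2-year window: a claimant limited only in his or her own occupation loses eligibility when the "any occupation" standard takes effect.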
For those requiring vocational intervention to return to work, the insurers develop an individualized return-to-work plan, as needed. Basing the continuing receipt of benefits upon a more restrictive definition after 2 years provides the insurer with leverage to encourage the claimant to participate in a rehabilitation and return-to-work program. Indeed, the insurers told us they find that claimants tend to increase their efforts to return to work as they near the end of the 2-year period. If the insurer initially determines that the claimant has no work potential, it regularly monitors the claimant’s condition for changes that could increase the potential to work and reassesses after 2 years the claimant’s eligibility under the more restrictive definition of disability. The insurer continues to look for opportunities to assist claimants who qualify under this definition of disability in returning to work. Such opportunities may occur, for example, when changes in medical technology—such as new treatments for cancer or AIDS—may enable claimants to work, or when claimants are motivated to work. The private insurers that we reviewed told us that throughout the duration of the claim, they tailor the assessment of work potential and development of a return-to-work plan to the specific situation of each individual claimant. To do this, disability insurers use a wide variety of tools and methods when needed. Some of these tools, as shown in tables 1 and 2, are used to help ensure that medical and vocational information is complete and as objective as possible. For example, insurers consult medical staff and other resources to evaluate whether the treating physician’s diagnosis and the expected duration of the disability are in line with the claimant’s reported symptoms and test results. 
Insurers may also use an independent medical examination or a test of basic skills, interests, and aptitudes to clarify the medical or vocational limitations and capabilities of a claimant. In addition, insurers identify transferable skills to compare the claimant’s capabilities and limitations with the demands of the claimant’s own occupation. This method is also used to help identify other suitable occupations and the specific skills needed for these new occupations when the claimant’s limitations prevent him or her from returning to a prior occupation. Included in these tools and methods are services to help the claimant return to work, such as job placement, job modification, and retraining. To facilitate return to work, the private insurers we studied use employment incentives both for claimants to participate in vocational activities and receive appropriate medical treatment, and for employers to accommodate claimants. The insurers require claimants who could benefit from vocational rehabilitation to participate in an individualized return-to-work program. They also provide financial incentives to promote claimants’ efforts to become rehabilitated and return to work. To better ensure that medical needs are met, the insurers we studied require that claimants receive appropriate medical treatment and assist them in obtaining this treatment. In addition, they provide financial incentives to employers to encourage them to provide work opportunities for claimants. The three private insurers we reviewed require claimants who could benefit from vocational rehabilitation to participate in a customized rehabilitation program or risk loss of benefits. As part of this program, a return-to-work plan for each claimant can include, for example, adaptive equipment, modifications to the work site, or other accommodations. 
These private insurers mandate the participation of claimants who they believe could benefit from rehabilitation because, in their experience, voluntary approaches have not produced sufficient claimant participation in these plans. The insurers told us that they encourage rehabilitation and return to work by allowing claimants who work to supplement their disability benefit payments with earned income. During the first 12 or 24 months of receiving benefits, depending upon the particular insurer, claimants who are able to work can do so to supplement their benefit payments and thereby receive total income of up to 100 percent of predisability earnings. After this period, if the claimant is still working, the insurers decrease the benefit amount so that the total income a claimant is allowed to retain is less than 100 percent of predisability income. When a private insurer, however, determines that a claimant is able, but unwilling, to work, the insurer may reduce or terminate the claimant’s benefits. To encourage claimants to work to the extent they can, even if only part-time, two of the insurers told us they may reduce a claimant’s benefit by the amount the claimant would have earned if he or she had worked to maximum capacity. The other insurer may reduce a claimant’s monthly benefit by the amount that the claimant could have earned if he or she had not refused a reasonable job offer—that is, a job consistent with the claimant’s background, education, and training. Claimants’ benefits may also be terminated if claimants refuse to accept a reasonable accommodation that would enable them to work. Since medical improvement or recovery can also enhance claimants’ ability to work, the private insurers we studied not only require, but also help, claimants to obtain appropriate medical treatment. To maximize medical improvement, these private insurers require that the claimant’s physician be qualified to treat the particular impairment. 
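The earnings-offset rules described above follow a simple pattern: during an initial window, work earnings may supplement the benefit up to 100 percent of predisability income; after the window, total retained income must fall below that level. The sketch below is purely illustrative, not any insurer’s actual formula: the 12- or 24-month window and the 100-percent cap come from the text, while the 90-percent post-window cap, the function name, and the dollar figures are assumptions made for the example.

```python
def retained_income(predisability, benefit, earnings, months_on_claim,
                    offset_window=24, post_window_cap=0.90):
    """Total monthly income a working claimant may retain (illustrative).

    During the offset window (12 or 24 months, per the text), earnings
    may supplement the benefit up to 100 percent of predisability
    earnings; afterward the benefit is reduced so total income stays
    below 100 percent (the 90-percent figure here is an assumption).
    """
    if months_on_claim <= offset_window:
        cap = predisability                    # full predisability income
    else:
        cap = post_window_cap * predisability  # "less than 100 percent"
    return min(benefit + earnings, cap)

# A claimant with $4,000 predisability earnings, a $2,400 monthly
# benefit, and $2,000 in part-time earnings keeps the full $4,000 in
# month 6, but only the capped amount in month 30.
print(retained_income(4000, 2400, 2000, months_on_claim=6))
print(retained_income(4000, 2400, 2000, months_on_claim=30))
```

Under these assumed numbers, the benefit is offset only to the extent that total income would exceed the applicable cap, which mirrors the insurers’ stated goal of rewarding partial work without letting total income exceed predisability earnings.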
Additionally, two insurers require that treatment be provided in conformance with medical standards for treatment type and frequency. Moreover, the insurers’ medical staff work with the treating physician as needed to ensure that the claimant has an appropriate treatment plan. The insurers told us they may also provide funding for those who cannot otherwise afford treatment. The three private sector insurers we studied may also provide financial incentives to employers to encourage them to provide work opportunities for claimants. By offering lower insurance premiums to employers and paying for accommodations, these private insurers encourage employers to become partners in returning disabled workers to productive employment. For example, to encourage employers to adopt a disability policy with return-to-work incentives, the three insurers offer employers a discounted insurance premium. If their disability caseload declines to the level expected for those companies that assist claimants in returning to work, the employers may continue to pay the discounted premium amount. These insurers also fund accommodations, as needed, for disabled workers at the employer’s work site. The private disability insurers we studied have developed techniques for using the right staff to assess eligibility for benefits and return those who can to work. Officials of the three private insurers told us that they have access to individuals with a range of skills and expertise, including medical experts and vocational rehabilitation experts. They also told us that they apply this expertise as appropriate to cost effectively assess and enhance claimants’ capacity to work. The three private disability insurers that we studied have access to multidisciplinary staff with a wide variety of skills and experience who can assess claimants’ eligibility for benefits and provide needed return-to-work services to enhance the work capacity of claimants with severe impairments. 
The private insurers’ core staff generally includes claims managers, medical experts, vocational rehabilitation experts, and team supervisors. The insurers explained that they set hiring standards to ensure that the multidisciplinary staff is highly qualified. Such qualifications are particularly important because assessments of benefit eligibility and work capacity can involve a significant amount of professional judgment when, for example, a disability cannot be objectively verified on the basis of medical tests or procedures or clinical examinations alone. Table 3 describes the responsibilities of this core staff of experts employed by private disability insurers, as well as its general qualifications and training. The three disability insurers we reviewed use various strategies for organizing their staff to focus on return to work, with teams organized to manage claims associated either with a specific impairment type or with a specific employer (that is, the group disability insurance policyholder). One insurer organizes its staff by the claimant’s impairment type—for example, cardiac/respiratory, orthopedic, or general medical—to develop in-depth staff expertise in the medical treatments and accommodations targeted at overcoming the work limitations associated with a particular impairment. The other two insurers organize their staff by the claimant’s employer because they believe that this enables them to better assess a claimant’s job-specific work limitations and pursue workplace accommodations, including alternative job arrangements, to eliminate these limitations. Regardless of the overall type of staff organization, each of the three insurers facilitates the interaction of its core staff—claims managers, medical experts, and vocational rehabilitation experts—by pulling these experts together into small, multidisciplinary teams responsible for managing claims. 
Additionally, one insurer engenders team interaction by physically colocating core team members in a single working area. To provide a wide array of needed experts, the three disability insurers expand their core staff through agreements or contracts with subsidiaries or other companies. These experts—deployed both at the insurer’s work site and in the field—provide specialized services to support the eligibility assessment process and to help return claimants to work. For instance, these insurers contract with medical experts beyond their core employee staff—such as physicians, psychologists, psychiatrists, nurses, and physical therapists—to help test and evaluate the claimant’s medical condition and level of functioning. In addition, the insurers contract with vocational rehabilitation counselors and service providers for various vocational services, such as training, employment services, and vocational testing. The private insurers we examined told us that they strive to apply the appropriate type and intensity of staff resources to cost-effectively return to work claimants with work capacity. The insurers described various techniques that they use to route claims to the appropriate claims management staff, which include separating (or triaging) different types of claims and directing them to staff with the appropriate expertise. According to one insurer, the critical factor in increasing return-to-work rates and, at the same time, reducing overall disability costs is proper triaging of claims. In general, the private insurers separate claims by those who are likely to return to work and those who are not expected to return to work. The insurers told us that they assign the type and level of staff necessary to manage claims of people who are likely to return to work on the basis of the particular needs and complexity of the specific case (see table 4). 
As shown in table 4, claimants expected to need medical assistance, such as those requiring more than a year for medical stabilization, are likely to receive an intensive medical claims management strategy. A medical strategy involves, for example, ensuring that the claimant receives appropriate medical treatment. Claimants who need less than a year to stabilize medically are managed much less intensively. For these claims, a claims manager primarily monitors the claimant’s medical condition to assess whether it is stable enough to begin vocational rehabilitation, if appropriate. Alternatively, a claimant with a more stable, albeit serious, medical condition who is expected to need vocational rehabilitation, job accommodations, or both to return to work might warrant an intensive vocational strategy. The private disability insurers generally apply their most resource-intensive, and therefore most expensive, multidisciplinary team approach to these claimants. Working closely with the employer and the attending physician, the team actively pursues return-to-work opportunities for claimants with work potential. Finally, claimants who are likely not to return to work (or “stable and mature” claims) are generally managed using a minimum level of resources, with a single claims manager responsible for regularly reviewing a claimant’s medical condition and level of functioning. The managers of these claims carry much larger caseloads than managers of claims that receive an intensive vocational strategy. For example, one insurer’s average claims manager’s caseload for these stable and mature claims is about 2,200 claims, compared with an average caseload of 80 claims in the same company for claims managed more actively. 
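The triage logic summarized above and in table 4 can be sketched as a short routing function. This is a hypothetical illustration of the described decision pattern, not any insurer’s actual rules: the category names, the 12-month threshold, and the decision order are assumptions drawn from the text.

```python
def triage(expected_return_to_work: bool, months_to_stabilize: int,
           needs_vocational_help: bool) -> str:
    """Route a disability claim to a management strategy (illustrative)."""
    if not expected_return_to_work:
        # "Stable and mature" claims: minimal resources, periodic review.
        return "minimal monitoring"
    if months_to_stabilize > 12:
        # Long medical stabilization warrants intensive medical management.
        return "intensive medical"
    if needs_vocational_help:
        # Stable condition but needs rehabilitation or accommodations:
        # the resource-intensive multidisciplinary team approach.
        return "intensive vocational"
    # Otherwise, monitor the condition until vocational work can begin.
    return "medical monitoring"

print(triage(False, 0, False))   # stable and mature claim
print(triage(True, 18, True))    # long stabilization period
print(triage(True, 6, True))     # stable, needs vocational services
```

The point of the sketch is the ordering: claims with no return-to-work potential are filtered out first and assigned the cheapest strategy, which is what allows the large caseload differences (2,200 versus 80 claims per manager) the insurer described.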
Unlike disability compensation programs in the private sector, VA has not drawn on vocational experts for IU assessments to examine the claimant’s work potential and identify the services and accommodations needed to help those who could work to realize their full potential. In our 1987 report, we found that VA had not routinely obtained all vocational information needed to determine a veteran’s ability to engage in substantially gainful employment before it granted IU benefits. Without understanding how key vocational factors, such as the veteran’s education, training, earnings, and prior work history, affect the veteran’s work capacity, VA cannot adequately assess the veteran’s ability to work. VA officials told us that the agency has vocational specialists who are specially trained to perform this difficult analysis. Skilled vocational staff can determine veterans’ vocational history, their ability to perform past or other work, and their need for retraining. Because it neither collected sufficient information nor included the expertise of vocational specialists in the assessment, VA did not have an adequate basis for awarding or denying a veteran’s claim for unemployability benefits. Preliminary findings from our ongoing work indicate that VA still does not have procedures in place to fully assess veterans’ work potential. In addition, the IU decision-making process lacks sufficient incentives to encourage return to work. In considering whether to grant IU benefits, VA does not have procedures to include vocational specialists from its VR&E services to help evaluate a veteran’s work potential. By not using these specialists, VA also misses an opportunity to have the specialist develop a return-to-work plan, in collaboration with the veteran, and identify and provide needed accommodations or services for those who can work. 
Instead, VA’s IU assessment focuses on veterans’ inabilities and on providing cash benefits to those labeled “unemployable,” rather than on opportunities to help them return to work. Return-to-work practices used in the U.S. private sector reflect the understanding that people with disabilities can and do return to work. The continuing deployment of our military forces to armed conflict has focused national attention on ensuring that those who incur disabilities while serving in the military are provided the services needed to help them reach their full work potential. Approaches from the private sector demonstrate the importance of using the appropriate medical and vocational expertise to assess the claimant’s condition and provide appropriate medical treatment, vocational services, and work incentives. Applying these approaches to VA’s IU assessment process would raise a number of important policy issues. For example, to what extent should VA require veterans seeking IU benefits to accept vocational assistance or appropriate medical treatment? Such policy questions will be answered through the national policymaking process involving the Congress, VA, veterans’ organizations, and other key stakeholders. Nevertheless, we believe that including vocational expertise in the IU decision-making process could provide VA with a more adequate basis to make decisions and thereby better ensure program integrity. Moreover, incorporating return-to-work practices could help VA modernize its disability program to enable veterans to realize their full productive potential without jeopardizing the availability of benefits for people who cannot work. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the committee may have. For future contacts regarding this testimony, please call Cynthia Bascetta at (202) 512-7215. 
Carol Dawn Petersen, Julie DeVault, and Joseph Natalicchio also made key contributions to this testimony. 21st Century Challenges: Reexamining the Base of the Federal Government, GAO-05-325SP (Washington, D.C.: February 2005). High-Risk Series: An Update, GAO-05-207 (Washington, D.C.: January 2005). High-Risk Series: An Update, GAO-03-119 (Washington, D.C.: January 2003). SSA and VA Disability Programs: Re-Examination of Disability Criteria Needed to Help Ensure Program Integrity, GAO-02-597 (Washington, D.C.: Aug. 9, 2002). SSA Disability: Other Programs May Provide Lessons for Improving Return-to-Work Efforts, GAO-01-153 (Washington, D.C.: Jan. 12, 2001). SSA Disability: Return-to-Work Strategies May Improve Federal Programs, GAO/HEHS-96-133 (Washington, D.C.: July 11, 1996). Veterans’ Benefits: Improving the Integrity of VA’s Unemployability Compensation Program, GAO/HRD-87-62 (Washington, D.C.: Sept. 21, 1987). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs (VA) provides disability compensation to veterans disabled by injuries or diseases that were incurred or aggravated while on active military duty. Under Individual Unemployability (IU) benefit regulations, a veteran can receive increased compensation at the total disability compensation rate if VA determines that the veteran is unemployable because of service-connected disabilities. GAO has reported that numerous technological and medical advances, combined with changes in society and the nature of work, have increased the potential for people with disabilities to work. Yet VA has seen substantial growth of IU benefit awards to veterans over the last five years. In 2001 GAO reported that a growing number of private insurance companies in the United States have focused their programs on developing and implementing strategies to enable people with disabilities to return to work. Our testimony will describe how U.S. private insurers facilitate return to work in three key areas: (1) the eligibility assessment process, (2) work incentives, and (3) staffing practices. It will also compare these practices with those of VA's IU eligibility assessment process. The disability programs of the three private insurers we reported on in 2001 included three common return-to-work practices in their disability assessment process. Incorporate return-to-work considerations from the beginning of the assessment process: Private insurers integrated return-to-work considerations early and throughout the eligibility assessment process. Their assessment process both evaluated a person's potential to work and assisted those with work potential to return to the labor force. Provide incentives for claimants and employers to encourage and facilitate return to work: These incentives included requirements for obtaining appropriate medical treatment and participating in a return-to-work program, if such a program would benefit the individual. 
In addition, they provided financial incentives to employers to encourage them to provide work opportunities for claimants. Strive to use appropriate staff to achieve accurate disability decisions and successful return-to-work outcomes: Private insurers have access to staff with a wide range of expertise not only in making eligibility decisions, but also in providing return-to-work assistance. The three private disability insurers told us that they selected the appropriate type and intensity of staff resources to assess and return individuals with work capacity to employment cost-effectively. In comparison, VA's Individual Unemployability decision-making practices lag behind those used in the private sector. As we have reported in the past, a key weakness in VA's decision-making process is that the agency has not routinely included a vocational specialist in the evaluation to fully evaluate the applicant's ability to work. Preliminary findings from our ongoing work indicate that VA still does not have procedures in place to fully assess veterans' work potential. In addition, the IU decision-making process lacks sufficient incentives to encourage return to work. In considering whether to grant IU benefits, VA does not have procedures to include vocational specialists from its Vocational Rehabilitation and Education (VR&E) services to help evaluate a veteran's work potential. By not using these specialists, VA also misses an opportunity to have the specialist develop a return-to-work plan, in collaboration with the veteran, and identify and provide needed accommodations or services for those who can work. Instead, VA's IU assessment is focused on the veterans' inabilities and providing cash benefits to those labeled as "unemployable," rather than providing opportunities to help them return to work. 
Incorporating return-to-work practices could help VA modernize its disability program to enable veterans to realize their full productive potential without jeopardizing the availability of benefits for people who cannot work.
For more than 30 years, federal law has provided comprehensive health coverage for low-income children through Medicaid. The children eligible for such care have made up a significant and growing portion of the nation’s population, as eligibility for Medicaid benefits has expanded to cover increasing numbers of previously uninsured children. In 1998, Medicaid covered more than one-third of young children ages 0 through 5, and more than one-fourth of children under age 21 (see figure 1). The 21 million children covered by Medicaid that year composed slightly more than half of the 41 million people in the program while the $32 billion spent for their care was about 23 percent of the $142 billion spent on the program by the federal government and states. An increasing number of children are also becoming eligible for EPSDT services, as federal policy designed to cover the growing number of uninsured children allows states to provide Medicaid services through the federally supported State Children’s Health Insurance Program (SCHIP). To implement SCHIP, states have the option of expanding their Medicaid programs, developing separate SCHIP programs, or doing some combination of both. If a state elects Medicaid expansion, it must offer the same comprehensive benefit package, including EPSDT services, to SCHIP beneficiaries as it does to Medicaid beneficiaries. In 2000, more than 1 million children were enrolled in SCHIP Medicaid expansion programs and were therefore also eligible for EPSDT services. Although many coverage, eligibility, and administrative decisions are left to individual states, the federal government sets certain requirements for state Medicaid programs. Coverage of screening and necessary treatment for children is one of these requirements. EPSDT components are designed to target health conditions and problems for which growing children are at risk, including iron deficiency, obesity, lead poisoning, and dental disease. 
They are also intended to detect and correct conditions that can hinder a child’s learning and development, such as vision and hearing problems. For many children, especially those with special needs because of disabilities or chronic conditions, EPSDT is an important help in identifying the need for essential medical and supportive services, and in making these services available. The federally required EPSDT components that constitute an EPSDT “screen” include a comprehensive health and developmental history, a comprehensive unclothed physical exam, appropriate immunizations, laboratory tests (including a blood lead-level assessment), and health education. Other required EPSDT services include vision services, including diagnosis, treatment, and eyeglasses; dental services, including relief of pain and infections and restoration; hearing services, including diagnosis, treatment, and hearing aids; and services for other conditions discovered through screenings, regardless of whether these services are typically covered by the state’s Medicaid plan for other beneficiaries. While state Medicaid programs must cover EPSDT, they have some flexibility in determining the frequency and timing of screens. States develop, in consultation with recognized medical and dental organizations, their own “periodicity schedules,” which contain age-specific timetables that identify when physical examinations and certain laboratory tests and immunizations should occur. These tables vary somewhat from state to state. For example, the number of recommended EPSDT screens ranged from 15 to 29 across the five states we visited (see table 1). States have increasingly turned to managed care as a way to deliver Medicaid services, including EPSDT. From 1991 to 1999, the proportion of all Medicaid beneficiaries enrolled in managed care—either capitated or in primary care case management models—rose from about 10 percent to about 56 percent. 
Only two states do not have at least some Medicaid beneficiaries in managed care plans. Managed care, with its emphasis on preventive and primary care, is philosophically an ideal model for delivering EPSDT-type services. Under a capitated managed care model, states contract with managed care plans, such as health maintenance organizations, and pay a fixed monthly fee per Medicaid enrollee (a capitated fee) to provide most medical services. This model, with its fixed prospective payment for a package of services, creates an incentive for plans to provide preventive and primary care to reduce the chance that beneficiaries will require more expensive treatment services in the future. However, capitated managed care can also create a financial incentive to underserve or deny beneficiaries access to needed care. Moreover, Medicaid beneficiaries required to enroll in managed care may find it difficult to seek alternative care if their plan providers fail to meet their needs. Because of the potential to underserve, states must build in safeguards and accountability measures, such as grievance and appeals processes, to ensure that beneficiaries receive appropriate care. The Congress has given states greater flexibility in moving Medicaid beneficiaries into mandatory managed care plans. Before the Balanced Budget Act (BBA) of 1997, a state could require Medicaid beneficiaries to enroll in managed care only if it first obtained approval from HCFA to waive certain statutory provisions, such as the freedom to choose providers. Under HCFA waivers, states have implemented a variety of mandatory managed care programs, ranging from programs serving limited populations in just a few counties to state-wide programs covering all Medicaid beneficiaries, including children with special needs. 
The BBA gave states new flexibility in implementing mandatory Medicaid managed care programs, allowing them to implement programs through an amendment to their state Medicaid plan without first obtaining a HCFA waiver. The Omnibus Budget Reconciliation Act of 1989 (OBRA 89) made significant changes to improve the provision of EPSDT services to children in Medicaid. It required that the Secretary of HHS set state-specific annual goals for children’s participation in EPSDT; mandated state-established periodicity schedules for screening in dental, vision, and hearing services; required blood lead assessments appropriate for age and risk factors; and imposed new reporting requirements. To fulfill the state-specific goal requirement, in 1990 HCFA set a participation goal of 80 percent by 1995 for every state. To measure progress towards participation goals and in accordance with the OBRA 89 requirement that states report certain EPSDT statistics, HCFA required, starting in 1990, that states submit annual EPSDT reports (known as the form 416). The EPSDT report captures, by age group, the number of children who (1) received EPSDT health screens; (2) were referred for corrective treatment; (3) received dental treatment or preventive services; and (4) were enrolled in managed care plans. Since fiscal year 1999, states also are required to report the number of blood tests provided to screen children for lead poisoning. Lawsuits have been filed in many states alleging shortcomings in the provision of EPSDT services. According to information from the National Health Law Program, at least 28 states have been sued by beneficiaries or advocates since 1995 for failing to provide required access to EPSDT services. These lawsuits range from single-issue suits—such as coverage of selected services including mental health services in Maine—to alleged programwide failures and deficiencies in Texas, Tennessee, and Washington, D.C. 
In several instances, the outcomes, including court orders and settlements agreed to by both parties to remedy known concerns, illustrate the difficulties states have encountered in providing services and also suggest strategies to remedy established EPSDT deficiencies. Despite statutory reporting requirements, reliable national data are not available on the extent to which children in Medicaid are receiving EPSDT services. However, a number of studies of limited scope indicate that many children in Medicaid are not receiving EPSDT services. These studies also show that several factors are at work in limiting the successful delivery of EPSDT services. Some factors are program-related, such as a lack of providers or systems to ensure access to covered services. Others are related to beneficiaries themselves, such as the beneficiaries’ lack of awareness about the importance of preventive health care and about services covered, or their difficulty in maintaining continuity of care with one provider. HCFA’s efforts to assemble reliable information about EPSDT participation in each state have so far been unsuccessful. State-reported data, upon which HCFA depends, are often not timely or accurate. For example, states were required to submit their fiscal year 1999 reports by April 1, 2000. As of January 2001, 15 states had not submitted their 1999 reports and another 15 states’ reports had been returned by HCFA because they were deficient. HCFA and state officials acknowledge long-standing difficulties that states face in their efforts to collect complete and reliable data, which are used as the basis for the EPSDT reports. These difficulties continue despite HCFA’s attempts to improve the reliability of state EPSDT reports by revising the report format and guidance. One reason for the continued difficulty involves collecting data on EPSDT services provided under managed care. 
Under the more traditional fee-for-service approach, data on service delivery are often relatively easy to collect as part of the payment process because states pay providers for each service for which they bill the state. Under capitated managed care, however, states pay the managed care plan a prospective monthly per-enrollee fee that is not tied to the individual services provided. As a result, data on service utilization (often referred to as “encounter data”) are not necessarily captured. Instead, states have to rely on managed care plans to collect and report these data separately. Managed care plans, particularly those that also pay their participating providers on a capitated basis, often have difficulty collecting and reporting complete and accurate data. States face continuing challenges in determining how to minimize the administrative burden on managed care plans and providers while still collecting information at the level needed to administer the program. For example, to facilitate the collection of EPSDT data, California uses a special EPSDT form for providers to use in documenting the components of EPSDT services provided. California’s managed care contracts also call for managed care plans to collect the EPSDT forms from their providers and submit detailed encounter data to the state. However, the state has had difficulty enforcing these requirements across the several layers of contractors involved in its managed care delivery system. For example, in the Los Angeles area, the state contracts with two large managed care organizations that subcontract with multiple commercial and nonprofit health plans, such as Blue Cross, that further subcontract with a network of providers. Most of these contracts are on a capitated basis. State officials said that some of the health plans had difficulty collecting the required encounter data and that one plan had never submitted the required data. 
Also, they said that health plans’ capitated providers had little incentive to fill out and submit the EPSDT form because their payments are not linked to it. The state’s Medicaid agency has not imposed sanctions against noncompliant plans or providers, restrained in part by its reluctance to lose providers, given the shortage of those willing to serve children in Medicaid. Although problems are more extensive with managed care data than with fee-for-service data, most of the states we visited had some difficulty obtaining complete and accurate data from fee-for-service providers as well. Florida illustrates the kinds of difficulties that can be encountered. Providers in Florida are required to use a specific EPSDT code and a claim form to document the components of EPSDT services they provide. However, according to state officials, providers often choose to use other codes instead. For example, providers may submit a claim under a comprehensive office-visit code for a new patient that pays a higher rate than an EPSDT screen, or they may submit claims under other comprehensive office-visit codes that require less documentation. Compounding these difficulties are limitations in the claims processing systems used by states for fee-for-service programs or by managed care plans. In Florida, for example, if a child receives laboratory work from one provider and the remaining components of a screening from another provider, some managed care plans’ data systems do not combine the services to correctly reflect that a full screening for the child has been provided. Similarly, some states have problems tracking referrals and follow-up treatment services. This tracking difficulty may explain why, in HCFA’s 1998 compilation of state reports, seven states reported that no children had been referred for corrective treatments.
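The claims-combining gap described above (laboratory work and the remaining screening components billed by different providers) can be sketched as a simple aggregation over a child’s claims. The component names below are hypothetical illustrations, not actual Medicaid claim codes or a state’s actual screening definition.

```python
# Hypothetical sketch: determining whether a child received a full EPSDT
# screen by combining claim components across providers. A data system that
# checks each provider's claims in isolation would miss this child's full
# screen; aggregating first avoids that undercount.

REQUIRED_COMPONENTS = {"history", "physical_exam", "lab_work", "vision", "hearing"}

def full_screen_received(claims):
    """claims: list of (provider_id, component) tuples for one child."""
    components_seen = {component for _provider, component in claims}
    return REQUIRED_COMPONENTS <= components_seen  # subset test

claims = [
    ("provider_a", "lab_work"),        # lab work from one provider
    ("provider_b", "history"),         # remaining components from another
    ("provider_b", "physical_exam"),
    ("provider_b", "vision"),
    ("provider_b", "hearing"),
]
print(full_screen_received(claims))    # True: components combine across providers
```

The point of the sketch is only the design choice: completeness must be judged on the union of a child’s claims, not per provider.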
While HCFA’s data cannot present a reliable and comprehensive picture of the extent to which children in Medicaid receive EPSDT services, other studies indicate that many of these children are not receiving such services. These other studies have been narrow in scope, allowing analysts to overcome the kinds of problems that so far have thwarted attempts to gather comprehensive data. They have focused on specific EPSDT services or reviews of a sample of patients’ medical records. For example, in recent years we have conducted reviews of screening rates for lead poisoning and dental care, basing our analysis primarily on data from national health surveys. Both studies found low screening rates for these specific services among low-income populations served by Medicaid. For lead poisoning, about 19 percent of children in Medicaid aged 1 through 5 were screened—a serious concern, because these children are almost five times more likely than others to have a harmful blood lead level. The screening rate for potential dental problems was similar, with about 21 percent of low-income children aged 2 to 5 having had a dental visit in the previous year. Older children fared somewhat better, with 36 percent of low-income children aged 6 to 18 having had a dental visit within the previous year. Studies by others have shown similar results. A 1997 study by HHS’ Office of Inspector General, which examined a sample of 338 children’s medical records from 12 health plans in 10 states, estimated that only 28 percent of children enrolled in Medicaid managed care received all prescribed EPSDT screens and that 60 percent received no screens at all. In several states, organizations responsible for external quality review of the Medicaid program have conducted sample medical record reviews of children enrolled in fee-for-service programs as well as those in managed care, and they have found similar results. 
For example, a study by Minnesota’s external quality review organization found that nearly half of the children in managed care plans whose files were reviewed had not visited a clinic in the previous year, and only 6 percent of those due for an EPSDT screen had received a comprehensive screen. A study in Washington State found that for the sampled files of children in managed care, 32 percent of infants (birth to 15 months) and 20 percent of children age 3 to 6 years received screenings for all aspects of EPSDT. The screening rates for children in fee-for-service care were also low—7 percent for infants and 24 percent for children age 3 to 6 years. Studies such as those cited above have collectively identified a number of reasons why many children in Medicaid are not receiving EPSDT services. Some of these reasons involve program-related matters, such as limited provider participation in Medicaid. For example, low provider participation in Medicaid has been noted as a particular problem in dental and mental health. Our earlier study found that a shortage of dentists willing to treat Medicaid patients was the major factor contributing to the low use of dental services. Similarly, a study by the Economic and Social Research Institute for the Kaiser Commission on Medicaid and the Uninsured found shortages of mental health and substance abuse professionals willing to treat Medicaid patients. Other program-related factors include inadequate methods for ensuring access to services. Our study of lead screening found problems with providers’ missing opportunities to perform follow-up tests when children returned for other care. Lawsuits brought in a number of states have also highlighted such problems as inadequate systems for informing beneficiaries about the availability of EPSDT services and poor coordination by managed care plans and state agencies. 
Several advocacy groups we interviewed echoed concerns that states and managed care plans do not adequately inform beneficiaries about the broad scope of EPSDT services or about beneficiary appeal rights. These groups also questioned the adequacy of the provider networks for serving children in Medicaid. In addition to these program-related factors, some beneficiary-related factors have also been found to limit screening services. For example, many Medicaid beneficiaries change eligibility status over short periods of time, and they may move frequently, making it more difficult to maintain continuity in their medical care. Researchers have also found that parents whose children are eligible to receive services under Medicaid tend to be less aware of the importance of preventive care than the general population. Those who try to obtain preventive care face other barriers. In our reports on oral health and screening for lead poisoning, we noted several other contributing factors, such as difficulty in getting time off from work, finding child care, arranging transportation to the provider, and overcoming language differences. These factors may contribute to a higher rate of broken appointments—a major concern among providers, particularly dentists. An American Dental Association survey reported that about one-third of Medicaid patients failed to keep appointments. A 1999 study conducted for the Florida Medicaid agency found that the top three reasons given by survey respondents for missing pediatric appointments were not having a ride to the appointment, the child no longer being sick, and forgetting an appointment. The five states we visited have implemented a variety of initiatives intended to improve the provision of EPSDT services to children in Medicaid, including those in managed care. 
The state and health plan efforts we identified fall into three general categories: (1) improving data; (2) better ensuring that plans deliver services; and (3) improving beneficiary outreach and education. Although in most cases states and health plans could not provide information on their specific impact, these initiatives represented efforts that state and plan officials cited as helping to better ensure that children receive EPSDT services. The five states we visited have taken a number of steps to improve the quality of the data they collect—especially from managed care programs—to monitor the utilization of services and to compile EPSDT reports to HCFA. These steps have not yet solved the states’ data and reporting problems; however, by moving toward more timely and reliable encounter data, states can better assess progress toward participation goals, identify specific plans or providers experiencing problems, and target corrective measures. As table 2 shows, these steps involve four main types of actions: requiring plans to submit detailed encounter data, validating those data, linking data with other sources, and reporting summary data in print or on the Internet. For example, to encourage health plans to report complete and accurate data, and to publicize comparative data, New York publishes summary statistics on individual plans on its health department Web site. These states’ experiences demonstrate that gathering complete and reliable encounter data is a long-term effort. Wisconsin, for example, worked collaboratively with capitated managed care plans for 4 years to formulate a uniform encounter data set and reporting system that all plans are required to use. Wisconsin’s system did not become functional until May 2000 and has not yet produced its first report to HCFA. New York has required managed care plans to submit encounter data for the past 6 years, but state Medicaid officials said the first few years of data were unreliable.
The data became more reliable around the fourth year, after state officials worked with health plans to improve their data collection and verification efforts. States have also implemented a number of initiatives to help ensure that managed care plans and health providers deliver screening and treatment services to children enrolled in Medicaid. The broad package of benefits offered under EPSDT can result in confusion and potential under-service if health plans and providers are not clearly informed of their responsibilities to provide EPSDT services. In California, for example, officials said some health plans were not performing screens according to the state’s managed care periodicity schedule. Plan providers were confused, they said, because the state’s Medicaid fee-for-service periodicity schedule called for fewer screens than its managed care periodicity schedule (15 compared to 27) and physicians often served both fee-for-service and managed care patients. In addition, a recent HCFA-sponsored study of Medicaid managed care contracts in more than three dozen states found that states often fail to spell out the full range of EPSDT services that plans are responsible for providing. The study concluded, among other things, that while states routinely expect managed care plans to provide the full range of EPSDT service obligations, they do not always explain in contracts what this means and may not require contractors to educate beneficiaries about the benefit package offered under EPSDT. To better ensure EPSDT service delivery, the states we visited have taken action in several areas (see table 3). Some of these actions have involved states’ laying out expectations for managed care plans or providers through extensive specification of responsibilities in contracts or provider education.
Other actions have involved the monitoring of health plans, the use of incentives and sanctions for provision of services, and requirements for plans to coordinate care with public health departments. States have also increased reimbursement rates for EPSDT services. For example, in 1995, to encourage fee-for-service providers to screen more children, Florida more than doubled its reimbursement rate for a comprehensive EPSDT screen. The examples in table 3 represent a few of the promising actions these states and health plans have implemented. The third area in which states have taken action is in educating and encouraging parents to better ensure that their children receive EPSDT services. Beneficiary outreach and education is typically a responsibility shared between the states and the health plans. At certain times in the process, the states may have primary responsibility for informing beneficiaries about covered services, such as when new beneficiaries are enrolled. Once a beneficiary is enrolled in a health plan, the state may require the plan to take measures to inform parents and families about covered services and how to access them. Officials from states and plans we visited reported a number of initiatives to better inform beneficiaries about EPSDT services (see table 4). These generally fell into four categories: designing clear and informative member handbooks, creating helpful and easy-to-understand materials to supplement member handbooks, developing programs to reach special populations such as children with disabilities, and conducting community outreach activities. For example, to encourage Medicaid beneficiaries, including those in managed care, to take advantage of preventive care, Florida mails reminder letters to families when their children are due for EPSDT screens. 
In addition to these efforts in the five states we visited, children’s advocates also informed us that several states have implemented initiatives as part of settlement agreements arising from EPSDT-related lawsuits. Settlement documents and court orders from selected EPSDT lawsuits contain information on a number of state initiatives to improve delivery of EPSDT services. For example, Pennsylvania established a series of 18 performance standards and health outcome measures and incorporated them into managed care contracts. Standards and interim targets were established for the percentage of children to receive immunizations and EPSDT screens, and measures were established for treatment and prevention of asthma, anemia, and lead poisoning. Appendix II contains further information on the basis for selected lawsuits and actions taken by states in response. HCFA, now called CMS, is currently reevaluating how best to carry out its role in helping to ensure that children receive access to EPSDT services. In recent years, HCFA’s efforts have focused largely on trying to improve the guidance to states about reporting the extent to which children are being screened. Attempts to improve reporting have been time-consuming, and progress has been slow. Because HCFA’s focus has been mainly on improving the format and specificity of the state EPSDT reports, it has placed little emphasis on the extent to which states are improving the underlying data or meeting HCFA’s EPSDT participation goals. At the regional office level, where much of the responsibility for working with states resides, a few offices have begun to help states identify problems and promote state progress in increasing children’s use of services. However, because most regional offices have focused their resources on priorities other than EPSDT, these efforts have not been widespread. 
In January 2001, HCFA’s central office proposed to regional offices and other stakeholders that the agency work more closely with states to improve both reporting and children’s use of services, but a specific plan for how to do so has not yet been developed. Recognizing that progress in providing services is difficult to assess without good data as a starting point, HCFA has centered its monitoring efforts largely on revising the guidance and format in order to improve state EPSDT reports. These revisions were largely aimed at capturing more reliable and more consistent EPSDT information while minimizing the burden on states in completing the reports. For example, in 1999 HCFA changed the EPSDT report to, among other revisions, require new information on dental services and blood lead tests, and to add more precise definitions of certain required data elements. It also allowed states to use their own periodicity schedules to determine their participation and screening rates. While these revisions have changed the reporting requirements, they have done little to address the continuing difficulties states face in their efforts to gather reliable and complete data. As our review of the five states showed, these problems require determined efforts at the state level, and because of the complexities associated with collecting managed care encounter data, such efforts take considerable time to accomplish. In the meantime, these EPSDT reports do not provide an accurate or complete picture of most state EPSDT programs, nor do they allow for reasonable national estimates of EPSDT screening and participation rates or for meaningful comparisons between states. Although HCFA’s efforts to improve data collection are important, by themselves they do not represent a strategy for helping states meet EPSDT goals. 
In part because HCFA acknowledges the limitations of the state EPSDT reports, the agency has done little to address how well states are doing in meeting the goal of providing EPSDT services to 80 percent of children enrolled in Medicaid. The existing reports show that most states are considerably below this goal. However, even if issues regarding data and reporting are adequately addressed, improved EPSDT reports, taken alone, will not provide HCFA with sufficient program detail to perform other oversight duties, such as helping states identify and correct specific problems or share information on lessons learned from other states and model state practices. A few HCFA regional offices have conducted reviews of state EPSDT programs. HCFA regional officials reported to us that eight such studies have been completed since 1995. Four included EPSDT as one element of a broader review of a state’s Medicaid managed care program; four focused exclusively on EPSDT. While these EPSDT and managed care assessments varied widely in their methodology and coverage of EPSDT issues, they have helped illuminate policy and process concerns and innovative practices of states. They have also identified needed actions to improve children’s access to EPSDT care. For example: In Oklahoma, an EPSDT-focused study conducted jointly by HCFA’s Dallas Regional Office and state Medicaid officials found several ways to increase screening and improve the quality of data submitted. The team found that providers relied on a review of a child’s medical chart to determine whether an EPSDT screen was due—a step they generally took only when an office visit occurred. As a result, children not visiting for other reasons were often not screened. The study recommended that the state establish a system to notify providers when children were due for screens. 
The study team also found that Medicaid provider knowledge of EPSDT services varied widely, and that many providers did not know about a monetary bonus the state offered to those providers who increased, to 60 percent or more, the proportion of eligible children who had EPSDT screens. To increase provider awareness, the study team recommended that the state annually include a discussion of EPSDT at provider education meetings. In California, an EPSDT-focused study conducted by HCFA’s San Francisco Regional Office with the cooperation of state Medicaid officials found that families of children in Medicaid were not being effectively informed about the availability of services or how to gain access to them. State officials who responded to the report’s findings acknowledged the need for a more cohesive effort to provide information about EPSDT services, and they indicated that the state would work to ensure that systems are in place to provide adequate information to families of children in Medicaid. The same HCFA study also singled out commendable practices including state efforts to coordinate care between Medicaid managed care plans and community health providers such as county mental health centers. In Michigan, a review of the state’s Medicaid managed care program conducted by HCFA’s Chicago Regional Office and others included an assessment of certain EPSDT policies and processes. These included EPSDT-covered services; processes and responsibilities for outreach, informing, and providing transportation services to beneficiaries; provider access and coordination; data reporting; and the achievement of screening goals. The review contained observations such as problems the state was having in collecting reliable data for the state EPSDT reports and differences in the usefulness of health plan member handbooks for describing how beneficiaries can obtain transportation services covered under EPSDT. 
Stated goals of the review were to gather information that would be useful in improving access and quality in the managed care program and to identify areas of innovation and best practices that could be shared with other states. While these assessments have helped those state programs that were reviewed and have identified best practices that might be applicable to other states, HCFA has reviewed only eight states since 1995 and has not established a mechanism for sharing lessons learned or innovative practices already in place among states. Since there is no HCFA requirement to periodically focus on and promote EPSDT at the state level, the decision to do so resides with the management of each HCFA region. Most regions have not devoted resources to actively monitor or promote EPSDT. Some regional office staff cited other priority efforts, such as SCHIP, as diverting their resources. We found that regions typically have one staff person designated as EPSDT Coordinator, but with multiple responsibilities other than EPSDT. HCFA has recently begun to reevaluate the adequacy of its role in EPSDT. In a January 2001 letter to the agency’s regional offices, HCFA’s Director of the Center for Medicaid and State Operations introduced a proposal to broaden the agency’s role in promoting state EPSDT activities. In the letter, the Director sought input on a proposal designed to assure children’s access to services under the Medicaid program and to assist states in addressing problems in the collection and reporting of state EPSDT data. HCFA officials told us that the goal of the letter was to obtain stakeholder comments on what HCFA’s focus and direction should be.
As of April 2001, HCFA regional staff had reviewed and commented on the letter, as had representatives from the American Academy of Pediatrics, officials from HHS’s Health Resources and Services Administration, and the Maternal and Child Health Technical Advisory Group (an advisory group made up of 6 to 10 state Medicaid directors). HCFA officials informed us that stakeholder reaction to the proposed initiative had generally been positive. The current chair of the Maternal and Child Health Technical Advisory Group told us that the general tone of the letter represents a collaborative, partnership approach that would provide for needed technical assistance while affording the flexibility needed for states to address conditions and impediments unique to each state. It is too early to determine whether this initiative will move forward, what form it will take, or what might result from it. The agency has not yet established a plan or devoted resources to develop and implement this proposal. HCFA officials said that they were continuing to solicit comments and input from stakeholders to develop a plan and that decisions about resources and implementation would depend on guidance and direction on agency priorities. More than a decade ago, the Congress passed legislative changes to help ensure that millions of low-income children under Medicaid have access to important health screening and treatment services. In the years since then, the Congress has placed even more emphasis on providing a health care safety net by expanding coverage to more and more children who do not have health insurance. This safety net, however, cannot be considered fully in place unless there are assurances that the covered health care services are actually provided. Unfortunately, reported data are unreliable and incomplete. They are inadequate for gauging Medicaid’s success in providing screening, diagnostic, and treatment services to enrolled children. 
Particularly for children served by managed care plans—a growing segment of the population—current information does not allow a thorough assessment of progress. However, the available information indicates that many children are still not receiving health screening services. Recognizing this concern, some states are taking a more active role in identifying ways to reach the at-risk population served by Medicaid. HCFA, now called CMS, has recently indicated increased emphasis on EPSDT services and can build on these state efforts in several ways while still giving states the flexibility to administer the program. One way is to continue the important task of working with states to improve the reporting of information on service delivery. Many providers, plans, and states will need to improve their reporting over the long term so that there will be a more accurate picture of how well they are doing in providing these services, especially in a capitated managed care environment. In the short term, CMS can take action to obtain a better understanding of the many different state policies and practices so it can work collaboratively with states to improve data and reporting, monitor the provision of services, and better inform and reach beneficiaries. In its position of setting federal policy and assessing a broad array of state activities intended to help reach at-risk Medicaid children, CMS can help build on successful efforts by sharing successes among states and working with the many different agencies and parties to ensure a coordinated approach to this care. By signaling a broadening of its interest in state EPSDT efforts, the agency has taken a positive first step. An important next step is for CMS to develop a more specific plan and time frames for working with states to assess their efforts and results in providing services to children in Medicaid.
To strengthen the federal role in ensuring the delivery of EPSDT services and to bring greater visibility to ways that states can better serve children in Medicaid, we recommend that the Administrator of CMS: work with states to develop criteria and time frames for consistently assessing and improving EPSDT reporting and the provision of services, including requiring that states develop improvement plans as appropriate for achieving the EPSDT goal of providing health services to children in Medicaid; and develop a mechanism for sharing information among states on successful state, plan, and provider practices for reaching children in Medicaid. We obtained comments on a draft of this report from CMS and the five states we visited. CMS commented that, as noted in the draft report, the problem is complex and not subject to an easy resolution (CMS’s comments are included in app. III). CMS agreed that more could be done to work with states to help ensure children’s access to services and compliance with federal requirements and stated that the agency’s regional offices are already starting to work with some states where problems exist. CMS partially agreed with our recommendation that it work with states to develop criteria and time frames for assessing and improving EPSDT reporting and the provision of services, including developing state-specific improvement plans for achieving EPSDT goals. While acknowledging the importance of working with states to improve the provision of services, CMS indicated that it was not certain that improvement plans for all states were necessary as part of this effort. Because of the unreliability of EPSDT reports, we believe that a more consistent assessment across all states is necessary to provide greater insight into states’ progress in achieving EPSDT goals. Depending on the assessment outcomes, improvement plans may not be needed for every state. We have clarified our recommendation accordingly. 
CMS agreed with our recommendation that the agency do more to foster information sharing and cooperation among states to improve EPSDT. The agency indicated that, as a first step, it is planning several activities with states, foundations, and others to promote the value of EPSDT services. The agency also provided technical comments that we incorporated where appropriate. California and Connecticut reviewed our findings concerning their state programs and said they had no comments. Florida, New York, and Wisconsin provided technical comments, which we incorporated where appropriate. New York also commented that the draft did not acknowledge that compliance rates with screening requirements are uniformly low, even for children not in Medicaid, and stated that EPSDT expectations may not be realistic. While some available reports, such as our past work on lead and dental screening, do show low screening rates in the aggregate, these reports also show wide variations among states. Because available data are insufficient to gauge states’ progress in providing EPSDT services, assessing whether the agency’s 80 percent screening goal is realistic is difficult. We anticipate that once state EPSDT data are more reliable, CMS will be in a better position to reevaluate whether the annual screening goals that it set more than a decade ago are realistic and achievable. New York also commented that the shortfalls in the provision of recommended levels of preventive health services identified in the report apply to all children, not just those served by Medicaid. Rather than perform a comparative analysis of the provision of services for children in Medicaid versus others, this report focused on the provision of EPSDT services to children in Medicaid, which our past work, as well as the work of others, has shown to be an at-risk population. New York’s comments are included in appendix IV. 
As arranged with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Secretary of Health and Human Services; the Administrator of CMS; appropriate congressional committees; and other interested parties. If you or your staff have any questions about this report, please contact me at (202) 512-7118. Other contacts and major contributors are included in appendix V. To obtain information about efforts states were taking to improve EPSDT services, particularly within managed care, we visited five states. These states—California, Connecticut, Florida, New York, and Wisconsin—were selected to represent different regions of the country and because they had relatively high numbers of children in managed care or a reputation for having an innovative EPSDT program or both. These states differed greatly in the size of their Medicaid populations and the number of participating health plans. Table 5 contains background information on the states we visited. Lawsuits have been filed in at least 28 states alleging the states had failed to adequately provide EPSDT services. The seven cases summarized in table 6 were suggested by the National Health Law Program’s Director of Legal Affairs and other EPSDT advocates as examples of states that have adopted innovative or promising EPSDT practices as a result of lawsuits. The following information reflects our review of relevant court documents in each of these cases and, in some instances, follow-up contacts with state officials to obtain further information about the state’s efforts. Other major contributors to this report were Matthew Byer, Bruce Greenstein, Sophia Ku, Behn Miller, and Stan Stenersen.
The Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) Program calls for states to provide children and adolescents under age 21 with access to comprehensive, periodic evaluations of health, development, and nutritional status, as well as vision, hearing, and dental services. There is concern that state Medicaid programs are not doing an adequate job of screening children for medical conditions or providing treatment for the children who need it. There is also concern about how these services are faring under managed care plans. This report examines (1) the extent to which children in Medicaid are receiving EPSDT services, (2) efforts that selected states are taking to improve delivery of EPSDT services, particularly within managed care, and (3) federal government efforts to ensure that state Medicaid programs provide covered EPSDT services. GAO found that the extent to which children in Medicaid are receiving EPSDT services is not fully known, but the available evidence indicates that many are not receiving these services. A Department of Health and Human Services Office of Inspector General study found that less than one-half of the enrolled children in its sample received any EPSDT screens. GAO found that states are taking actions to improve delivery of EPSDT services, particularly within managed care. These actions include linking several state databases, publishing statistics that compare performance, contracting with local health departments to coordinate care for children, and mailing reminder letters to parents. Federal efforts to ensure that children are receiving services have largely focused on changing the state reports so that reliable information can be collected about the extent of EPSDT screening.
Flooding is the most widespread natural hazard in the country, affecting virtually every state; from February 1978 through August 2008, there were 90 significant flood events. Since its inception in 1968, NFIP has sought to have local communities adopt floodplain management ordinances and has offered flood insurance to their residents in an effort to reduce the need for government assistance after a flood event. Premium subsidies were seen as a way to achieve the program's objectives by ensuring that owners of existing properties in flood zones could afford flood insurance. The authority for subsidized rates was therefore included in the National Flood Insurance Act of 1968 as an incentive for communities to join the program by adopting and enforcing floodplain management ordinances that would reduce future flood losses, with the intent that the subsidies would be only part of an interim solution pending long-term adjustments in land use. The first $35,000 of coverage on any subsidized policy for a one-to-four family residential property, and the first $100,000 on any other residential property, receives the NFIP subsidy; amounts of insurance in excess of $35,000 and $100,000, respectively, are charged full-risk rates. On average, the premium for a subsidized policy in a high-risk flood zone is higher than the premium on a full-risk policy in the same zone because properties with full-risk rates have either been built to newer flood-resistant building codes or have been mitigated to reduce flood risks and thus are generally less flood prone than properties that are eligible for subsidized rates. For example, the average annual subsidized premium in 2007 for properties located in the highest-risk zones was about $880, while the average annual premium for properties in the same zones paying full-risk rates was about $379.
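The tiered subsidy structure described above can be sketched as a simple calculation. This is only an illustration of the tiering, not FEMA's actual rating method; the per-$100 rates used below are hypothetical placeholders, while the $35,000 and $100,000 tier limits come from the text.

```python
# Sketch of NFIP's tiered subsidy (illustrative only, not FEMA's rating
# algorithm): the first $35,000 of coverage on a one-to-four family
# residential policy (first $100,000 for other residential) is charged
# the subsidized rate; coverage above the limit pays the full-risk rate.

SUBSIDY_LIMIT_1_4_FAMILY = 35_000   # tier limit from the text
SUBSIDY_LIMIT_OTHER_RES = 100_000   # tier limit from the text

def annual_premium(coverage, subsidized_rate, full_risk_rate,
                   subsidy_limit=SUBSIDY_LIMIT_1_4_FAMILY):
    """Premium with the subsidized rate on the first tier and the
    full-risk rate on any coverage above it.

    Rates are expressed per $100 of coverage; both rates here are
    hypothetical, chosen only to show the split.
    """
    subsidized_portion = min(coverage, subsidy_limit)
    excess_portion = max(coverage - subsidy_limit, 0)
    return (subsidized_portion * subsidized_rate
            + excess_portion * full_risk_rate) / 100

# Hypothetical rates: $0.50 per $100 subsidized, $0.25 per $100 full-risk,
# on a $150,000 one-to-four family policy.
premium = annual_premium(150_000, subsidized_rate=0.50, full_risk_rate=0.25)
# premium → 462.50 ($175 on the first $35,000, $287.50 on the remainder)
```

The same function with `subsidy_limit=SUBSIDY_LIMIT_OTHER_RES` models the $100,000 tier for other residential properties.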
The program has three components: (1) the provision of flood insurance, as mentioned above; (2) the requirement that participating communities adopt and enforce floodplain management regulations; and (3) the identification and mapping of floodplains. Community participation in NFIP is voluntary, but communities must join NFIP and adopt FEMA-approved building standards and floodplain management strategies in order for their residents to purchase flood insurance. Participating communities can receive discounts on flood insurance if they establish floodplain management programs that go beyond the minimum requirements of NFIP. FEMA can suspend communities that do not comply with the program, and communities can withdraw from the program. Currently, more than 20,000 communities participate in NFIP. FIRMs, which show the level of flood risk in various areas and assign a flood zone designation to each area based on its risk level, are used to set premium rates, among other things. Risk levels range from high to low. Structures used to secure loans from a federally regulated lending institution that are deemed high-risk or high-risk coastal are required to have flood insurance; for structures deemed to have a moderate to low risk of flooding, the purchase of flood insurance is voluntary. FIRMs are also used to determine whether a structure is eligible for rate subsidies. Structures built after a community's FIRM was published must be built to NFIP building standards and pay full-risk rates. Communities also use the maps to establish minimum building standards designed to reduce the impact of flooding, and lenders use them to identify which property owners are required to purchase flood insurance. Once communities join NFIP and are mapped, structures that were built before the FIRM—pre-FIRM structures—become eligible for subsidized rates.
Pre-FIRM structures generally are at a high risk of flooding because they are located below the area's base flood elevation (BFE), which is the computed elevation to which floodwater is anticipated to rise during a flood that is estimated to have a 1 percent chance of occurring annually. To lessen their flood risk, pre-FIRM structures can be mitigated. FEMA recognizes the following steps for mitigating residential pre-FIRM structures: (1) elevating structures to or above their BFE, (2) relocating structures to higher ground, or (3) demolishing structures. Mitigation of pre-FIRM properties is voluntary unless a property is substantially damaged or the owner undertakes substantial improvement; in these cases, the structure must be repaired or renovated to meet the same standards as new construction. Unmitigated existing pre-FIRM properties are eligible for subsidized rates for the life of the properties. As owners sell their subsidized properties, the new owners also become eligible for the subsidized rates, and subsidies apply even if the owners discontinue their insurance coverage and do not purchase insurance again until years later. Mitigation activities have always been part of NFIP, but it was not until passage of the Robert T. Stafford Disaster Relief and Emergency Assistance Act in 1988 that FEMA received the authority to fund mitigation projects for all types of disasters, including flooding. Later, the National Flood Insurance Reform Act of 1994 gave FEMA the authority to carry out a flood-only mitigation assistance program to help policyholders reduce the risk of flood damage to individual properties. The Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized two additional grant programs specifically for properties that experienced repetitive flooding and mandated increased premiums if property owners refused to mitigate. Each program has different requirements, purposes, and appropriations.
FEMA uses a cost-benefit analysis to determine the cost-effectiveness of proposed mitigation projects and to rank the projects in order of priority. Policyholders can also buy Increased Cost of Compliance (ICC) Coverage—a component of the standard flood insurance policy—which provides up to $30,000 above the insured policy amount for mitigating flood-damaged properties that meet certain criteria. Table 1 summarizes the five mitigation programs and ICC. NFIP’s inventory of properties receiving subsidized premium rates has grown over the past 20 years, hindering the program’s ability to pay claims without borrowing from the Treasury. While the percentage of policies receiving subsidies has dropped since 1978 to 23 percent of all policies as of December 2007, the number of subsidized properties has continued to increase. In addition, despite earlier expectations that the number of subsidized properties would decrease over time, for several reasons the number of policies with subsidized rates is at its highest point since 1980. Further, because of current low NFIP participation rates, there appears to be room for substantial growth in the number of NFIP policies, many of which are likely to receive subsidized premium rates. The properties receiving subsidized rates have been a financial burden on the program because of their relatively high loss experience and subsidized rates that do not reflect the actual risk of flooding. Subsidized properties also account for the majority of repetitive loss properties—properties that have experienced multiple flood losses—which make up around 1 percent of the total policies but 30 percent of the claims dollars paid. While the percentage of residential subsidized properties has dropped over time, the number of subsidized properties has fluctuated since NFIP began but has grown fairly consistently over the last 20 years (see fig. 1). 
Specifically, the percentage of residential subsidized policies has dropped since the early years of the program from 77 percent in 1978 to 23 percent of all policies as of December 2007. But the number of policies with subsidized rates is at its highest point since 1978, despite earlier expectations that the number of subsidized properties would decrease substantially. According to FEMA, in the early years of the program it used subsidies to encourage participation in the program, and because of the high number of pre-FIRM structures, the number of policies with subsidized rates reached a high of about 1.09 million in 1980. Subsequently, between 1980 and 1985, aggressive annual rate increases for subsidized policies corresponded with a reduction in the number of subsidized policies, which fell to a low of about 705,000 in 1985. However, the number of policies with subsidized rates has increased nearly every year since 1986, reaching a high of almost 1.13 million in 2007. A number of factors help explain this increase. Specifically, according to FEMA, there has been an increase in the number of mortgages with mandatory purchase requirements for flood insurance—that is, mortgages on structures that are located in SFHAs. The Flood Disaster Protection Act of 1973 made flood insurance mandatory for mortgages from federally regulated lenders on buildings located in SFHAs. These lenders are required to check the current FIRM to determine whether the structure is in the SFHA at the time a mortgage is made. FEMA officials also told us that since the 1973 act, the increase in the number of mortgages subject to the flood insurance requirement, coupled with greater enforcement of this requirement by financial regulators in recent years, had resulted in an increased number of flood insurance policies, including policies with subsidized rates. 
According to FEMA, many of these mortgages were on buildings that were constructed before the most recent FIRMs were in place, making the policies eligible for subsidized rates. Additionally, the populations of coastal communities have grown steadily over the last 28 years. These communities have relatively high concentrations of properties in SFHAs that are required to have flood insurance, including properties that qualified for subsidized premiums. Moreover, FEMA said that the longer-than-expected life of structures eligible for subsidies has made decreasing the subsidized property inventory more difficult. Some in Congress, at the time NFIP was created, assumed that buildings would be torn down as they aged and that any new structures, which would have to meet stricter building codes, would be ineligible for subsidized rates. However, according to FEMA, existing structures have been demolished at a much lower rate than expected, and reductions in the overall subsidized property inventory have not occurred. Moreover, some older structures have been renovated and thus may retain their subsidies. And because subsidized premiums are tied to the property and not the policyholder, properties have retained their subsidies even as ownership has changed. Other factors have also contributed to the increase in the number of subsidized properties. For example, FEMA told us that SFHA boundaries have been modified through its map modernization program, resulting in more properties in SFHAs, many of which are eligible for subsidized rates. Moreover, FEMA told us that many homeowners purchased flood insurance after seeing the devastation caused by the hurricanes of 2005. FEMA officials commented that many homeowners had believed there was little to no chance that their homes would be flooded, but that after the 2005 hurricanes, these homeowners had a better understanding of their actual flood risk.
FEMA noted that a community's policy inventory often increases sharply after it experiences a flood. Another possible reason for the increase is that disaster assistance for repair or replacement of buildings, manufactured (mobile) homes, or personal property in SFHAs can trigger a requirement to purchase flood insurance. In addition, according to FEMA, the recent increase in its marketing efforts through its FloodSmart campaign has contributed to the increase in policies. This program was designed to educate and inform partners, stakeholders, property owners, and renters about insuring their homes and businesses against flood damage. In 2004, the year in which FloodSmart was implemented, NFIP had 1.05 million policies with subsidized rates. By 2007, this number had increased 8 percent to almost 1.13 million. However, for the reasons discussed earlier, proving a causal relationship is difficult. According to FEMA officials, most populated floodplains participate in NFIP, but communities are still joining. For example, from 1978 to 2007, the number of communities participating in NFIP steadily increased from 15,999 to 20,474. Additionally, FEMA expects as many as 300 new communities to join NFIP in fiscal year 2008, and by the end of the first quarter, 141 communities had already joined. As of December 31, 2007, NFIP included almost 5.3 million active flood insurance policies on residential properties, nearly 23 percent (1.19 million) of which were charged subsidized premiums. Figure 2 details the number of total residential NFIP policies in each state, as well as the number of those policies that received subsidized premium rates. Approximately 70 percent (3.69 million) of the total policies were concentrated in five states: California, Florida, Louisiana, New Jersey, and Texas. Furthermore, 57 percent (673,964) of the almost 1.2 million residential policies with subsidized premiums were located in those same five states.
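The FloodSmart-era growth figure above can be checked directly from the counts given in the text; the result, about 7.6 percent, rounds to the reported 8 percent.

```python
# Growth in subsidized policies from 2004 (the year FloodSmart began)
# to 2007, using the counts cited in the text.
policies_2004 = 1_050_000   # "1.05 million policies with subsidized rates"
policies_2007 = 1_130_000   # "almost 1.13 million"

growth_pct = (policies_2007 - policies_2004) / policies_2004 * 100
# growth_pct → about 7.6, consistent with the "8 percent" reported
```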
Because of the high number of policies, these states have historically accounted for the majority of claims losses paid out as well as premium dollars received by the program. According to FEMA data, these states accounted for 59 percent of claims losses from 1978 to 2004 and 67 percent of premium dollars. Taking the 2005 hurricanes into account, the figures for 1978 to 2007 changed to 70 percent of claims losses and 66 percent of premium dollars. Low market penetration for NFIP flood insurance policies, particularly in some areas, leaves room for growth in the number of flood insurance policies as FEMA continues to encourage participation in NFIP through FloodSmart. According to a 2006 RAND study commissioned by FEMA, there were approximately 3.6 million single-family homes in SFHAs nationwide, half of which had no flood insurance. The study also found that while about a third of NFIP's policies were for homes outside of SFHAs, NFIP's market penetration rate for such properties was only about 1 percent. Another indicator of the potential for growth is that, according to FEMA data, approximately 2,000 communities do not participate in NFIP, and of the 20,400 that do participate, approximately 3,500 had no NFIP policies and 1,700 others each had only one policy. FEMA is aware of the low market penetration rates and has been making efforts to increase the number of flood insurance policies, largely through its FloodSmart campaign. To aid in this effort, FEMA recently purchased more detailed market penetration data, which could allow it to target areas with particularly low participation in NFIP. While these data are not yet finalized, initial calculations suggest that the actual market penetration rate for SFHA structures could be even lower than the RAND study estimated.
For example, some areas of the Midwest and Northeast appear to have considerably lower policy volumes than other areas of the country, based on their flood declarations, cumulative flood claims payments, and population. (See app. II for a more detailed analysis of market penetration.) Similarly, the RAND study found that the Midwest and Northeast had much lower market penetration than other regions of the United States. While it is uncertain what percentage of any new policies might be eligible to receive subsidized rates, FEMA officials said that any increase would largely depend on the location of future program participants. Because older structures are more likely to be pre-FIRM, areas of the country with older structures, such as the Midwest, are more likely to have a higher percentage of potentially subsidized properties. The lower market penetration in the Midwest, combined with the flood risk awareness resulting from the recent Midwest floods as well as the FloodSmart campaign, could increase participation in NFIP, resulting in a higher proportion of subsidized policies than the current 23 percent. On the other hand, FEMA said that areas of the country with newer structures, such as the Gulf Coast, are likely to have a lower percentage of subsidized policies. Most recent policy growth has been in these regions, so if this trend continues, future additional policies could include a lower proportion of subsidized rates. The large number of subsidized properties has contributed to NFIP's historical operating losses through their relatively high loss experience and rates that do not reflect the actual risk of flooding. Therefore, despite the increase in policies with full-risk rates relative to policies with subsidized rates, policies with subsidized rates have continued to be a drain on the program's overall financial condition.
For example, while there have been fewer policies with subsidized rates than policies with full-risk rates in every year since 1982, subsidized properties have accounted for more claims payments than properties paying full-risk premium rates in all but 5 of those years. As previously mentioned, subsidized premiums average about 35 to 40 percent of the premium that would fully reflect the associated risk of loss. As a result, NFIP has not collected enough in premiums to cover the claims that FEMA estimates will be made on these properties in an average year. From 1986 to 2004, policies receiving subsidized rates resulted in a $962 million operating deficit. This deficit occurred despite the fact that in 1986, among other things, FEMA finished a series of aggressive rate increases on subsidized properties intended to ensure that the premiums collected better reflected expected losses. However, in 2005, Hurricanes Katrina, Rita, and Wilma resulted in claims losses that far exceeded those in previous years, and as a result of these Gulf Coast hurricanes, FEMA had to borrow $17.5 billion to pay NFIP claims. Moreover, in 2008, FEMA had to borrow additional funds from the Treasury to make the interest payment on its outstanding Treasury debt. Prior to 2005, policies with subsidized rates accounted for 58 percent of claims dollars paid, but because of the extraordinary nature of the 2005 hurricanes, in particular the many losses on properties located in moderate- to low-risk areas, properties with both subsidized and full-risk rated policies experienced significant losses. Of the total losses from the 2005 hurricanes, 29 percent were from claims paid on subsidized properties, while 71 percent were from full-risk policies. However, the operating deficit for subsidized policies increased substantially, to $6.3 billion. Properties with repetitive losses, the majority of which receive subsidized premium rates, have also contributed to NFIP's operating deficit.
As previously reported, these properties account for about 1 percent of all policies but are estimated to account for up to 30 percent of all NFIP losses. As of March 2008, there were 126,351 repetitive loss properties, just over 60 percent of which had subsidized rates. Although not all repetitive loss properties are part of the subsidized property inventory, given that a high proportion of these properties receive subsidized rates, their propensity for flood losses contributes to the financial risks faced by NFIP. While Congress has made efforts to target these properties, the number of subsidized properties that are also repetitive loss properties has continued to grow, making them an ongoing challenge to the financial stability of the program. Because of the financial condition of NFIP and mounting losses, the negative financial impact that subsidized premium rates have on the program continues to be an area warranting ongoing attention, as we pointed out when placing NFIP on the high-risk list in 2006. As Congress continues to evaluate the appropriate role of the federal government in insuring natural catastrophes in light of recent events in the Gulf Coast region, evaluating whether to maintain the current system of NFIP subsidies or make changes has been an ongoing part of the debate, as evidenced by various bills that have been introduced in Congress. However, balancing the public policy goals of charging premium rates that fully reflect actual risks, encouraging broad participation in natural catastrophe insurance programs by maintaining affordable rates, and limiting taxpayer costs before and after a disaster will be an ongoing challenge. While the current system of subsidies and voluntary mitigation might promote broad program participation, it does create some exposure for taxpayers and allows rates that do not reflect actual risks. 
We discuss three broad public policy options for addressing the impact of subsidized properties on the financial solvency of NFIP: increase mitigation efforts, eliminate or reduce the use of subsidies, and target the use of subsidies based on the financial need of the property owner. Each of the options has both advantages and disadvantages in terms of how it affects the program's public policy goals. Subsidizing premiums can encourage participation in NFIP, especially among those who might not be able to afford premium rates that fully reflect the actual risk of flooding. Some proponents believe that charging actuarial risk rates could result in some property owners not buying any flood insurance and NFIP receiving less in total premiums than it would if it allowed subsidized rates. These proponents also assert that continuing the subsidies is preferable to charging full-risk rates because, while subsidized rates do not cover the actual risk of loss, they at least offset a portion of the cost of providing postdisaster assistance to property owners who might otherwise have no flood insurance and pay no premiums. One disadvantage of the current approach is that those who receive subsidies are not paying premium rates that reflect the full risk of loss from flooding. As noted previously, not charging full-risk rates contributes to FEMA's challenges in maintaining the financial stability of NFIP. In addition, charging less than full-risk rates can send incorrect signals to property owners about the risks associated with living in certain areas and reduce incentives to undertake mitigation efforts, because subsidized rates may distort a property owner's view of the financial benefits of mitigation. Further, policies with subsidized rates could result in higher financial losses for NFIP than policies with full-risk rates.
Another disadvantage of the current approach is that although FEMA has stated that it is generally cost-beneficial to mitigate properties, depending on, among other factors, the properties' flooding history and expected future losses, it faces several limitations in attempting to reduce the number of properties receiving subsidized premium rates, including those properties that have the greatest negative financial impact on NFIP. To begin with, mitigation is generally voluntary except when there has been substantial damage to the insured structure, and participating communities interested in NFIP mitigation funding are required to compete for available funding through one of the available mitigation programs. In addition, even when funds are made available to a community and property owners are interested in mitigating their properties, the property owners may still have to pay a portion of the mitigation expenses, a fact that could discourage mitigation among those unable or unwilling to contribute to the cost. For example, local officials and real estate agents in Sonoma County, California, told us that ICC was the primary financial tool used by flooded homeowners to elevate their homes, but because ICC limits mitigation assistance to $30,000 and the cost of elevating a house in Sonoma County is typically more than twice that, some residents could not cover the additional cost and therefore could not take advantage of ICC funds. In addition, although FEMA has provided communities with information on which properties have had the most severe repetitive flood losses, current mitigation efforts in participating communities are not necessarily targeted at properties receiving subsidized premium rates that have flooded repeatedly.
States and local communities determine their priorities, and some communities, therefore, may focus their mitigation efforts on activities that benefit more than one property, such as regrading the land to control the flow of water and building retaining ponds. Finally, although mitigation is mandatory when a property has been substantially damaged or renovated, mitigation may not always occur. If the cost of repairing a pre-FIRM structure to its condition before the damage occurred is equal to or greater than 50 percent of that structure's market value before the damage, NFIP requires that the structure be mitigated. However, participating communities, not FEMA, are responsible for enforcing compliance with NFIP regulations and building codes, although FEMA can suspend a community that is not in compliance with NFIP. According to some local stakeholders, not all communities enforce, or are able to enforce, compliance. For example, local officials in Harris County, Texas, identified one pre-FIRM property owner in the county who has repeatedly refused the county's offers to buy his property. According to the county tax office, that property had a market value of $153,330 in 2007. According to NFIP data, that policyholder had collected over $975,000 in 15 flood claim payments from 1979 through 2006 for structural damage, ranging from over $3,000 to $185,000 per payment. In spite of these limitations, existing mitigation efforts have reduced the risk of loss for a number of properties. However, the number of properties mitigated is small compared with the total number of properties receiving subsidized rates. As shown in table 2, nearly 30,000 properties have been mitigated with FEMA funds since fiscal year 1997, but the number of policies with subsidized rates still increased during that same period, from 1.03 million in 1997 to almost 1.13 million in 2007.
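The substantial-damage test described above reduces to a simple threshold check. The sketch below is illustrative, not FEMA's implementation; the 50 percent threshold and the Harris County market value come from the text, while the repair-cost figure in the example is hypothetical.

```python
# Substantial-damage test from the text: if the cost of repairing a
# pre-FIRM structure to its pre-damage condition is equal to or greater
# than 50 percent of the structure's pre-damage market value, NFIP
# requires that the structure be mitigated (i.e., brought up to the
# standards that apply to new construction).
SUBSTANTIAL_DAMAGE_THRESHOLD = 0.50

def mitigation_required(repair_cost: float, pre_damage_value: float) -> bool:
    """Return True when repairs meet NFIP's substantial-damage threshold."""
    return repair_cost >= SUBSTANTIAL_DAMAGE_THRESHOLD * pre_damage_value

# Using the $153,330 Harris County market value cited above with a
# hypothetical $80,000 repair bill: the threshold is $76,665, so
# mitigation would be required.
required = mitigation_required(80_000, 153_330)
```

Note that the comparison is inclusive: repairs exactly at 50 percent of value trigger the requirement.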
FEMA officials have acknowledged that mitigating properties can be difficult, at least in part because of the cost, time, and resources required. According to FEMA, the current average cost to mitigate a residential property ranges from $143,000 for elevating a property to $176,000 for acquiring one. After passage of the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004, FEMA officials made mitigating repetitive loss properties a priority, especially those with severe repetitive losses. FEMA has identified approximately 7,000 properties as having experienced severe repetitive losses. Over 1,400 of these severe repetitive loss properties have received cumulative claims payments ranging from $200,000 to several million dollars per property. Although each property must be subject to an individual cost-benefit determination that reflects its unique characteristics and expected future losses, because these aggregate payments were above the average mitigation costs, mitigation may be cost-effective for many of them if similar losses were expected in the future. However, FEMA officials told us that they did not anticipate being able to totally eliminate severe repetitive loss properties, given the current funding level for the Severe Repetitive Loss Pilot Program of $160 million for fiscal years 2006 through 2008 and uncertainty over ongoing appropriations for the program. Reducing the financial impact of subsidized properties on NFIP would generally involve reducing the number of properties receiving subsidized premium rates, reducing the losses associated with these properties, reducing the amount of the subsidy, or some combination of these approaches.
Whether maintaining the current program or making changes to NFIP subsidies, Congress will be faced with balancing often-competing public policy goals, which include charging premium rates that more fully reflect actual flood risks and help better ensure NFIP solvency, encouraging broad participation in natural catastrophe insurance programs by offering affordable rates, and limiting taxpayer costs before and after a disaster. We discuss three broad options that could help address NFIP's financial situation: (1) increase mitigation efforts, (2) eliminate or reduce the use of subsidies, and (3) target the use of subsidies based on the financial need of property owners. Each of the three options has both advantages and disadvantages in terms of its effect on these public policy goals, which we highlight in table 4. We also note that the options are not mutually exclusive and may be used in conjunction with one another, and that how an option is implemented can affect its advantages and disadvantages. One option to address the financial impact of subsidized premium rates on NFIP would be to substantially expand flood mitigation efforts, including targeting those properties that have been most costly to the program. This option would substantially expand the requirements of the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004, which mandated mitigation for insured properties that have received four or more flood claims payments totaling more than $20,000 or two claims payments whose total exceeds the value of the property and created the Severe Repetitive Loss Pilot Program to help carry out such mitigation. This option would apply less restrictive criteria, which could increase the number of subsidized properties for which mitigation is required.
Mitigation could be required for all insured properties that have filed two or more flood claims, irrespective of the claims' total; subsidies could be eliminated for property owners who refuse or do not respond to a mitigation offer; or some combination of these approaches could be used. This option would require increased funding for mitigation purposes. This option has several advantages. First, it could reduce flood losses by ensuring that more homes were better protected from flooding through mitigation, whether through elevation, relocation, or demolition. Because many repetitive loss properties have subsidized premiums—that is, rates that do not reflect their actual risk of flooding—increased mitigation could reduce the claims payments the program makes on these properties and could ultimately reduce taxpayer exposure in the long term. As the congressional findings in the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 noted, and as FEMA officials concurred, mitigating repetitive loss properties through buyouts, elevations, relocation, flood-proofing, or regrading and other engineering projects would produce savings for policyholders and for federal taxpayers through reduced flood insurance losses and federal disaster assistance. Second, denying subsidies to those who refuse or do not respond to mitigation offers could increase the number of property owners paying full-risk rates and encourage mitigation. Third, FEMA could build upon its existing mitigation programs and thus continue targeting those properties that have been most costly in terms of claims paid while maintaining current subsidy rates. As we have noted, subsidies have been used to encourage participation in the program. Local officials generally support increased mitigation efforts: reducing flood risk generally increases property values and, as a consequence, the local tax base. And as we have seen, participation from local communities is critical for successful mitigation efforts.
However, there are several disadvantages associated with this option. First, because subsidized rates do not reflect a property's actual flood risk, subsidized property owners might not be motivated to undertake mitigation efforts that would reduce their flood risk and their premium rate. Second, substantially increasing mitigation efforts would be costly and would require increased funding for FEMA's mitigation programs. As stated earlier, about 1.2 million policies received subsidized rates in 2007, including approximately 7,000 on severe repetitive loss properties. FEMA estimates that the average mitigation cost would range from about $143,000 to about $176,000 per residential property; buyouts and relocations would be more costly in areas of the country with relatively expensive real estate. Applying FEMA's mitigation cost range per property to the number of severe repetitive loss properties results in an estimated cost of approximately $1 billion to $1.2 billion. Applying the same calculation to the rest of the repetitive loss properties would add over $17 billion to over $22 billion to the estimate. However, mitigation costs would have to be weighed against the possible savings from the decrease in flood damage that mitigation would produce. Third, the mitigation process is often lengthy, and mitigating a large number of properties could take years to complete; until then, subsidized premium rates would continue to negatively affect the program's financial health. Fourth, FEMA's reliance on local communities to undertake and enforce mitigation activities could limit the effectiveness of these efforts. Despite being a national program, NFIP relies on state and local communities to ensure the program's implementation and success.
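The cost range for severe repetitive loss properties cited above follows directly from the figures in the text; this is a back-of-the-envelope reproduction, not an official FEMA estimate.

```python
# Reproducing the estimate above: roughly 7,000 severe repetitive loss
# properties times FEMA's average mitigation cost of about $143,000
# (elevation) to $176,000 (acquisition) per residential property.
severe_rep_loss_properties = 7_000
cost_low, cost_high = 143_000, 176_000

low_estimate = severe_rep_loss_properties * cost_low    # about $1.0 billion
high_estimate = severe_rep_loss_properties * cost_high  # about $1.2 billion
```

Actual costs per property vary by location and mitigation method, so the averages give only an order-of-magnitude figure.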
While local communities recognize the importance of mitigation, not all communities have the staff or resources to fully carry out current mitigation efforts, meet the cost-sharing requirement (generally 25 percent of the eligible project costs, which either the community or the property owner could provide) that four of the five mitigation programs require, and enforce compliance. Some communities, in fact, require the homeowner to cover the cost-sharing portion. Moreover, it is the responsibility of the local floodplain management agencies to enforce compliance with the ordinances by, for example, ensuring that property owners undertake proper mitigation efforts and by issuing appropriate work permits for the damaged property. Some communities may not have sufficient resources for expanded efforts in these areas. In addition, certain types of mitigation, such as relocation or demolition, might be met with resistance by communities that rely on those properties for tax revenues, such as coastal communities with significant development in areas prone to flooding. A second option—eliminating or reducing the subsidies—would meet the public policy goal of charging premium rates that more fully reflect actual risks. Because FEMA would be able to charge more policyholders premium rates that more closely represent actual flood risk, the premiums collected would more closely reflect the losses that the agency expected to incur, contributing to the financial health of NFIP. One way to implement a reduction of the subsidies is to base the rate on the number and amounts of flood claims per property. In other words, if a property has a certain number of claims, the property would be rerated and the policyholder could be required to pay a higher premium. 
Another way is to eliminate subsidies for certain categories of subsidized properties, such as nonprimary residences (vacation homes or rental properties), or to limit subsidies to existing property owners. Another advantage to eliminating or reducing subsidies is that the resulting higher premium rates could motivate property owners to undertake mitigation efforts in order to reduce those premium rates. More mitigation could, in turn, result in less flood damage, lower losses for NFIP, and potentially lower taxpayer exposure. Moreover, by paying a rate that more closely reflects the actual risk of flooding, property owners who previously had paid subsidized premiums would better understand the actual costs and risks associated with living in certain areas. However, this option has at least two disadvantages. First, while many current NFIP policyholders are required by their lenders to maintain those policies, the elimination of subsidies, according to various stakeholders and a 1999 study commissioned by FEMA, would on average more than double these policyholders’ premium rates and might result in reduced participation in NFIP over time as people either dropped their policies or were priced out of the market. Even reducing subsidies could increase the financial burden on some existing policyholders—particularly low-income policyholders—and could cause some of them to leave the program. As a result, if owners of pre-FIRM structures, which suffer the greatest flood losses, cancel their insurance policies, the federal government—and ultimately taxpayers—would likely face increased costs in the form of FEMA disaster assistance grants and low-interest disaster loans from the Small Business Administration (SBA) in future floods. 
To the extent that higher premium rates would lead some property owners to decide not to purchase flood insurance, those property owners would not be eligible for NFIP mitigation assistance, reducing the likelihood that they would undertake mitigation efforts to reduce their flood risk. Furthermore, some FEMA officials said that a lack of subsidies could cause communities to drop out of NFIP. These communities would no longer be eligible for federal mitigation assistance or be subject to mandatory purchase requirements. Moreover, they would not have to comply with NFIP floodplain management standards and building codes, raising the possibility that residents would construct properties that had a high risk of being damaged by a flood. Second, we found that some communities might resist the elimination or reduction of subsidies because of the potential effect on residents. For example, officials in one Texas community with a large rental population and low-income residents said that eliminating or reducing the subsidy would negatively affect their residents. Premium rate increases on rental properties likely would be passed on to tenants, some of whom have low incomes, thus creating a potential hardship. Officials in an Ohio community we visited said that many businesses would be unable to afford full-risk premiums, which would have a negative effect on the local economy. A third option would be to target premium rate subsidies to those policyholders who had the greatest financial need, based on a means test. As currently structured, the subsidy is tied to the property, not the property owner, and any pre-FIRM property located in an SFHA in a participating community is eligible for a subsidy. And as mentioned previously, when a pre-FIRM property is sold, the new owner is also eligible for the subsidy. 
Additionally, the program does not take into account any characteristics of the owner, such as income level, or consider how the property is used—for example, as a residence, vacation home, or rental. FEMA does currently offer a temporary subsidized premium rate based on the financial need of the property owner through its Group Flood Insurance Policy (GFIP) program. Under that program, property owners in federally declared disaster areas apply to state-based Individual and Family Grant (IFG) programs and, if accepted based on their financial need, are eligible to receive a flat premium rate of $200 per year for three years. After the three-year period, the rates would be adjusted to the appropriate rate for that location and property. This needs-based option would remove the subsidy from the property and instead attach it to the policyholder on the basis of need as determined by specified financial requirements and eligibility criteria. Means-tested programs are not new to the federal government. Over the years, Congress has established about 80 separate programs to provide cash and noncash assistance to low-income individuals and families. Such programs provide a means of delivering assistance to those in need, and we have made recommendations to simplify the process for determining financial eligibility for various programs. Depending on how the option was implemented, a potential advantage would be that more policyholders would have to pay the full-risk rate and that those eligible for the subsidy would be made aware of the full-risk rate before applying for the subsidy. As a result, more policyholders would be aware that they were receiving subsidies and would better understand the actual costs and risks associated with living in certain areas. In addition, because some policyholders would no longer be receiving a subsidy, FEMA would be collecting more in premiums. 
Increased premium collection would improve NFIP’s ability to make claims payments, reduce its need to borrow from the U.S. Treasury, and potentially limit taxpayer exposure. Further, because the only policyholders who would lose their subsidies generally would be those deemed able to afford full-risk rates, fewer property owners might drop their insurance than under nontargeted options for reducing subsidies. The program would benefit those in greatest financial need. Finally, charging higher rates that more accurately reflect the risk of flooding may motivate policyholders to undertake mitigation to reduce their premium rates. However, this option has several disadvantages. Eliminating subsidies and requiring those who are deemed able to afford them to pay full-risk rates could cause some property owners to stop buying flood insurance. Even though a means-based test might determine that some property owners did not qualify for subsidies, the higher cost of the full-risk rate premiums could lead some to decide not to purchase coverage and instead rely on federal disaster assistance, which generally requires that they purchase flood insurance as a condition of the assistance. In addition, requiring property owners to go through an application process to receive subsidized premium rates, rather than receiving them on the basis of their property’s characteristics, could discourage some property owners with limited resources and in greatest need of coverage from applying for the subsidy. This option also would involve certain implementation challenges in the midst of other ongoing management challenges for NFIP. To implement this option, FEMA first would need to determine how to design the program and how to conduct the means test. 
Depending on how the program was designed, FEMA might need to collect or purchase data on the income and wealth of property owners to help determine eligibility benchmarks. In addition, FEMA would need to devote resources, including staff, to developing, implementing, and monitoring the means test program. For example, FEMA would need to develop eligibility benchmarks and a process for applying for and awarding subsidies. The agency would need to determine who would conduct the tests and certify the results—that is, whether FEMA, state and community officials, the Write-Your-Own insurance companies that currently serve as the delivery system for NFIP, or some other entity would perform these activities. FEMA also would need to establish an oversight mechanism to ensure that the program was operating as intended. Finally, FEMA would have to ensure that the cost of the subsidies plus the cost of administering means-based testing did not exceed the cost of the current subsidies. FEMA could use existing means-tested programs in other agencies as a template in order to make implementation easier. Moreover, addressing these challenges could be difficult for the agency, which is already in the process of addressing management and oversight challenges. As we have previously reported, FEMA faces challenges in providing oversight of its contractors, state and local partners, and Write-Your-Own insurance companies, as well as overseeing claims adjustments and its map modernization program. New management challenges created by implementing a means-based test could make addressing these existing challenges more difficult and may require additional staff. While any of these options—or a combination of them—could help reduce the adverse impact of subsidies on the financial health of NFIP, the potential would still exist for claims to exceed premiums in any given year. 
As we have seen in 2008, flood losses are volatile and highly unpredictable, and estimating future losses and determining premium rates adequate to cover those losses is an inherently difficult process. In addition, even if subsidized rates were eliminated, the potential for catastrophic losses could still result in NFIP needing to borrow from the Treasury to pay losses. Absent a change in the NFIP’s use of subsidized premium rates, however, the subsidies will continue to hinder the financial stability of the program, and the potential further increases in the number of properties receiving subsidies could make the situation worse. Therefore, implementing any or a combination of these options could significantly reduce the adverse financial impact of subsidies on NFIP. We provided a draft of this report to the Department of Homeland Security (DHS) for comment. It provided written comments that are reprinted in appendix III. In its written comments, DHS expounded upon several topics discussed in the report. First, DHS noted that it is aware of the financial impact of subsidized and repetitive loss properties on the NFIP, and stated that while it has proposed a number of initiatives through the years, most of these were not welcomed by stakeholders. Second, DHS noted that amendments to current statutes and rules would be needed if FEMA were to require mitigation via a grant program beyond the substantial damage provision that currently is the only provision that triggers mandatory mitigation. We recognize that some aspects of the options discussed in this report would require legislative changes. However, we would encourage FEMA to continue to pursue actions to address the financial drain on NFIP brought about by subsidized premium rates, such as the planned 2009 increase in the standard deductible for subsidized policyholders as mentioned in its comments. 
Third, DHS recognized that a needs-based subsidy could be beneficial, but it recommended that the burden of making needs-based determinations be placed on someone other than the insurance agent and that a discussion be held on how the costs of discounted premiums would be borne. We noted in the report that a needs-based program could be implemented in a number of ways, and agree that careful study would have to be done before implementing such a program. DHS also described a current program under which some participants receive subsidized premium rates based on their short-term financial need, with the needs determination performed by a third party. We have added a discussion of this program to the report and note that it may provide useful insights for a broader-based approach. DHS also provided technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will provide copies to the Chairman, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member of the House Committee on Financial Services; the Chairman and Ranking Member of the House Committee on Homeland Security; and other interested committees. We are also sending a copy of this report to the Secretary of Homeland Security and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or williamso@gao.gov. GAO contact and staff acknowledgments are listed in appendix IV. 
To provide information on the National Flood Insurance Program’s (NFIP) inventory of subsidized properties in terms of size, location, and financial impact on NFIP, we obtained data on policies, claims, and repetitive losses from the Federal Emergency Management Agency’s (FEMA) private contractor, Computer Sciences Corporation, that maintains various NFIP databases. We obtained data pertaining to NFIP and NFIP subsidized and full-risk policies from 1978 through June 2008, including information on policies, premiums, and claims. We used these data to analyze the size, growth, costs, geographic distribution, and market penetration of the subsidized inventory and total inventory nationwide and for states and counties. We also reviewed relevant FEMA reports and analysis on these factors. We assessed the reliability of FEMA’s policy and claims data by (1) reviewing existing information about the data and the system that produced them, (2) interviewing agency officials knowledgeable about the data, and (3) performing electronic testing of required data elements. We determined that the data were sufficiently reliable for the purposes of this report. Originally, we planned to construct a comprehensive nationwide profile of subsidized properties and policyholders by merging vendor data containing market values of subsidized properties and income data of owners with NFIP policy and claims data. To do this, we met with private vendors that, for marketing purposes, collect and sell nationwide statistics on real estate market values and transactions and household incomes. Specifically, we explored ways to develop nationwide comparisons of subsidized and full-risk properties—for example, comparing market values and household income—within and across geographic areas. 
However, we were unable to identify data sources that would enable us to pull statistically valid samples of subsidized properties and policyholders nationwide that could be projected to the entire inventory of subsidized and full-risk properties. While we were able to identify sources that had nationwide data, the vendors we contacted lacked data on real estate values in certain areas of the country. The omitted areas included not only rural areas but also some areas with large populations, such as parts of Louisiana and Texas—both of which have large numbers of subsidized properties. We also determined that matching individual property addresses maintained on an NFIP database and a vendor database would create inconsistencies that would rule out a valid nationwide sample, preventing us from extrapolating any results nationwide. In 2007, the Congressional Budget Office (CBO) attempted to produce a similar nationwide profile by merging vendor and NFIP data, but its match rates for addresses between databases were too low; thus the results of its study were limited to the properties it was able to match and could not be generalized nationally. We spoke with CBO officials regarding their study. As an alternative to the national profile, we planned to construct profiles for the five counties that we judgmentally selected for site visits (our methodology and purpose for the site visits are discussed below). This alternative effort involved matching NFIP data on individual properties with county tax records, local real estate listings, and other local sources that might have data on those properties. However, we determined that this approach also would not produce match rates high enough to produce countywide profiles for three of the five counties, and data from the two remaining counties were not usable for our purposes. 
For example, we found that conventions for mailing addresses varied considerably across the five counties and differed from the NFIP data. While counties and NFIP use U.S. Postal Service’s address standardization format, NFIP also permits descriptive addresses, such as “Third Cabin on Beulah Lake,” “N Side of Shell Belt Rd,” and “5 Houses From Johnson’s Seafood,” which made address matching difficult. In addition, we found certain data not to be useful for our purposes. For example, local property tax records did not maintain comparable market values of properties. In one county we found that tax records contained last-sale information that, for properties that had not sold recently, could be several years old and did not reflect current market values of those properties. In another county, tax records did not have information on selling prices of properties because state law prohibited public disclosure of this information. Thus we decided not to pursue this alternative effort. Finally, to satisfy the objective, we selected and visited a judgmental sample of five counties across the country (Sonoma County, California; Pinellas County, Florida; Jefferson County, Missouri; Washington County, Ohio; and Harris County, Texas). Our purpose was to obtain available information on the characteristics of subsidized properties in these counties (such as types of structures, flooding history, and market values) and characteristics of their policyholders (such as income and perceived benefits obtained from subsidized rates). We also sought to understand similarities and differences in how NFIP is implemented within each locality. We selected counties with NFIP communities that had completed NFIP’s map modernization in order to have timely data to help construct profiles of properties in these counties. We selected a mix of coastal and inland counties in order to capture coastal and riverine types of flooding. 
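The address-matching difficulty described above can be illustrated with a toy example. The normalizer below is a hypothetical sketch, not the actual matching procedure used in the study: standardized street addresses can often be reconciled across databases, but descriptive addresses of the kind NFIP permits have nothing to normalize against.

```python
import re

# Hypothetical, much-simplified USPS-style normalizer: uppercase, strip
# punctuation, abbreviate a few common words. Real address
# standardization software is far more involved.
ABBREV = {"NORTH": "N", "SOUTH": "S", "STREET": "ST", "ROAD": "RD"}

def normalize(addr: str) -> str:
    tokens = re.sub(r"[^\w\s]", "", addr.upper()).split()
    return " ".join(ABBREV.get(t, t) for t in tokens)

# One standardized address and one descriptive address, as in the text.
nfip_addresses = ["123 N Main Street", "Third Cabin on Beulah Lake"]
county_records = {normalize(a) for a in ["123 North Main St.",
                                         "Lot 14, Beulah Lake Shore Rd"]}

matched = [a for a in nfip_addresses if normalize(a) in county_records]
print(matched)  # only the standardized address matches
```

The descriptive address survives normalization unchanged but matches nothing in the county records, which is exactly the failure mode that kept match rates too low for countywide profiles.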
We selected from counties that had large numbers or percentages of subsidized properties, large numbers of repetitive loss properties, and large cumulative historical dollars of claims losses paid in order to capture areas likely to have had meaningful, if not extensive, experience dealing with flooding and NFIP. During our visits to the five counties, we met with local floodplain managers, property tax appraisers and assessors, building permit officials, civil engineers, real estate agents, flood insurance agents, flood claims adjusters, and other relevant parties. We discussed local flooding history, floodplain management, building standards, flood claims adjusting, and real estate values and taxes as they pertained to the implementation of NFIP generally, and NFIP subsidized properties in particular. We also spoke with officials from the five FEMA regional offices responsible for these counties. Tables 5 through 7 compare the five counties using a number of factors. While these five counties are not a complete representation of the entire body of NFIP communities, their diversity across multiple factors contributed to our understanding of the administration of NFIP at the local level. Table 5 compares the five counties by population, area, population density, housing density, household income, and housing values; shows the ranges in these factors across the counties; and notes the types of flooding and the percentages of land area in the floodplain. Table 6 shows, for each of the five counties, subsidized and total policies in force and cumulative claims paid, along with the subsidized percentages of each. In each of the five counties, cumulative claims paid on subsidized policies were a higher percentage of cumulative total claims paid than were subsidized policies in force as a percentage of total policies in force. 
Table 7 compares repetitive loss properties across the five counties using the number of repetitive loss properties still insured versus the number no longer insured, and the number and dollars of loss payments for these groups. To evaluate NFIP’s existing structure, and to identify and evaluate options for reducing or eliminating the costs of properties insured at subsidized premium rates along with the advantages and disadvantages of these options, we analyzed NFIP’s legislative history, which described the objectives of NFIP overall and NFIP subsidies in particular, and original expectations about the subsidized inventory. We also reviewed more recent legislation, including the Robert T. Stafford Disaster Relief and Emergency Assistance Act and the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004, which established the Severe Repetitive Loss Pilot Program. We discussed nationwide mitigation strategies and related efforts and costs for repetitive loss properties, including severe repetitive loss properties, with FEMA officials. In our visits with local entities in the five counties as noted above, we also obtained available information on resources, expenditures, and costs of individual mitigation efforts. We also discussed these issues with the FEMA regional offices responsible for the five counties. Finally, we analyzed FEMA’s statistics on repetitive loss properties, including cumulative historical claims costs and the number of these properties mitigated in the five counties we visited and nationwide. We also analyzed relevant information in various other studies, including two of our studies discussing public policy goals for federal involvement in catastrophe insurance. We conducted our work between December 2006 and November 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Recent flooding, especially in the Midwest in 2008, highlights the devastation that can be caused by flooding. This appendix provides examples of areas of the country that appear to have higher populations and flooding risks relative to their policy volumes when compared to other areas, and thus have the potential for increases in the number of NFIP policies. As we noted in the report, an increase in market penetration would also likely bring an increase in the number of subsidized policies. We identified the examples by comparing the number of NFIP policies in a given area, as of September 2006, with the total number of county flood declarations from January 1980 to June 2008, cumulative flood claims payments from January 1978 to April 2008, and population as of 2004 for counties and 2005 for states. Example 1: Some Midwestern and Northeastern states and counties that appeared to have a higher history of flood losses relative to policy counts than other areas of the country The five combined states of Iowa, Michigan, Minnesota, Missouri, and Wisconsin, when compared to Collier County, Florida, had more county flood disaster declarations (2,092 versus 12), significantly more flood claims payments ($704,706,000 versus $12,483,000), and a much larger population (28,906,000 versus 297,000), but a similar number of NFIP policies (80,572 versus 85,246). Maine, when compared to Idaho, had significantly more flood claim payments ($36,332,000 versus $4,754,000) and county flood disaster declarations (159 versus 42), but a similar number of NFIP policies (7,891 versus 7,079). 
The states also had similar populations: 1,285,000 for Maine and 1,480,000 for Idaho. Wisconsin, when compared to Rhode Island, had many more county flood disaster declarations (276 versus 11), but had similar flood claims payments ($32,693,000 versus $34,219,000). Even though Wisconsin has a much larger population (5,479,000 versus 1,012,000), it has a similar number of NFIP policies (12,945 versus 14,432). Iowa, when compared to New Mexico, had almost 10 times more county flood disaster declarations (558 versus 56) and about eight times more in flood claims payments ($65,915,000 versus $8,038,000), but almost 30 percent fewer policies (10,185 versus 14,455). Iowa’s population was larger than New Mexico’s (2,941,000 versus 2,016,000). The four combined states of Kansas, Nebraska, South Dakota, and North Dakota, when compared to Oregon, had more county flood disaster declarations (1,346 versus 124) and three times more in flood claims payments ($244,828,499 versus $76,727,971), but a similar number of policies (30,683 versus 29,780) for a much larger population (6,009,000 versus 3,613,000). Example 2: Counties with flood disaster declarations but no communities in NFIP We found 66 counties that had flood disaster declarations but no communities that had joined NFIP. Below are selected examples from those counties. Clay County, Alabama (population 14,092) has had seven flood declarations. San Francisco County, California (population 744,230) has had three flood declarations. Henry County, Iowa (population 20,258) has had six flood declarations. Winneshiek County, Iowa (population 21,188) has had seven flood declarations. Adair County, Kentucky (population 17,575) has had six flood declarations. Dallas County, Missouri (population 16,328) has had eight flood declarations. 
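One way to make these pairings directly comparable is to express flood history per policy in force. This sketch computes cumulative claims paid per NFIP policy for two of the state pairs quoted above, using the figures from the text.

```python
# Cumulative flood claims paid per NFIP policy in force, for two of the
# state comparisons above. A high ratio alongside a low policy count
# suggests room for growth in NFIP participation.
states = {
    "Iowa":       (65_915_000, 10_185),
    "New Mexico": ( 8_038_000, 14_455),
    "Maine":      (36_332_000,  7_891),
    "Idaho":      ( 4_754_000,  7_079),
}
per_policy = {name: claims / policies
              for name, (claims, policies) in states.items()}
for name, ratio in per_policy.items():
    print(f"{name:11s} ${ratio:,.0f} in cumulative claims per policy")
```

Iowa's claims per policy come out more than tenfold New Mexico's, and Maine's several times Idaho's, despite similar policy counts in each pair, which is the pattern the appendix uses to identify areas with potential for policy growth.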
Example 3: Counties with flood disaster declarations but very few NFIP policies We found 14 counties, all with populations over 100,000, that had one or more flood declarations but very few NFIP policies. Below are selected examples from those counties. Potter County, Texas (population 118,000) has had three flood disaster declarations but had only six policies. Bibb County, Georgia (population 155,000) has had four flood disaster declarations but had only 13 policies. Carroll County, Georgia (population 102,000) has had six flood disaster declarations but had only 83 policies. In addition to the contact named above, Patrick Ward, Assistant Director; Lawrence Cluff, Assistant Director; Tania Calhoun; Emily Chalmers; William (Rudy) Chatlos; Martha Chow; Nima Patel Edwards; Christopher Forys; Catherine Hurley; Karen Jarzynka; and Melvin Thomas made significant contributions to this report.
The Federal Emergency Management Agency (FEMA), the Department of Homeland Security (DHS) agency that administers the National Flood Insurance Program (NFIP), estimates that subsidized properties--those that receive discounted premium rates that do not fully reflect the properties' actual flood risk--experience as much as five times the flood damage of properties that do not qualify for subsidized rates. Almost one in every four residential policies has subsidized rates that are on average 35-40 percent of the full-risk rate. Unprecedented losses from the 2005 hurricane season and NFIP's periodic need to borrow from the Department of the Treasury to pay flood insurance claims have raised concerns about the impact that subsidized premium rates have on the long-term financial solvency of NFIP. GAO designated NFIP as high-risk in March 2006; as of June 2008, NFIP's debt stood at $17.4 billion. This report (1) provides information on NFIP's inventory of subsidized properties and (2) examines NFIP's current approach to subsidized properties and the advantages and disadvantages of options for reducing the costs associated with these properties. To do this work, GAO analyzed data on policies and claims and collected available data about subsidized properties. GAO also reviewed applicable reports and interviewed relevant agency, state, and private sector officials. In its written comments, DHS expounded upon several topics discussed in this report. While it constitutes a declining percentage of all NFIP policies, the number of properties receiving subsidized premium rates has grown since 1985; by 2007 it was at its highest point in almost 30 years. 
According to FEMA, this growth resulted from several factors, including a growing number of mortgages with mandatory flood insurance purchase requirements and greater enforcement of these requirements, the longer-than-expected life of the structures that are eligible for subsidies, and increased awareness of the dangers of floods from several major recent disasters and increased NFIP marketing efforts. To date, more than half of the subsidized policies are concentrated in five states with relatively high flood risk--California, Florida, Louisiana, New Jersey, and Texas. Current low participation rates--around 50 percent of single-family homes in high-risk areas--leave room for substantial growth in the number of NFIP policies, many of which would be likely to receive subsidized rates. Because of their relatively high loss experience and lower premium rates, the policies receiving subsidized rates have been a financial burden on the program, with total claims exceeding premiums by $962 million over the period from 1986 through 2004, before the large losses from the 2005 hurricanes. Without changes to the program, the number of subsidized properties will likely continue to grow, increasing the potential for future NFIP operating deficits. As Congress evaluates the impact of subsidized premium rates, it is faced with balancing the public policy goals of charging premium rates that fully reflect actual risks, encouraging broad program participation through affordable rates, and limiting costs to taxpayers. While the current program of property-based subsidies and voluntary mitigation efforts--steps taken to reduce a property's flood risk such as relocation or elevation--encourages broad program participation, it is unlikely to substantially reduce the adverse financial impact of subsidized properties. GAO identified three options for addressing the financial impact of subsidized properties on the NFIP, each with advantages and disadvantages. 
One option would be to increase mitigation efforts, including making mitigation mandatory. Mitigation could help reduce flood losses, but the increased funding required for such efforts could be substantial. A second option, eliminating or reducing subsidies, could improve NFIP's financial stability by increasing the number of policies that more accurately reflect the risk of flooding. However, the resulting higher premium rates could reduce NFIP participation and could meet resistance from local communities. A third option would be to target subsidies based on financial need, which could help ensure that only those in need receive subsidies, with the rest paying full-risk rates. However, it could be challenging for FEMA to develop and administer such a program in the midst of ongoing management challenges. Determining premium rates adequate to cover potentially volatile and at times catastrophic flood losses is inherently difficult, so the potential for the program to incur future operating deficits will always exist. Nonetheless, implementing any of these options, or a combination of them, could significantly reduce the adverse financial impact of subsidies on NFIP.
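The scale of the per-policy subsidy discussed in this section can be illustrated with a small arithmetic sketch. The 35-40 percent figure comes from the report; the full-risk premium amount below is a hypothetical example, not an actual NFIP figure.

```python
# Hypothetical full-risk annual premium, in dollars (illustrative only;
# actual NFIP premiums vary by property and flood zone).
full_risk_premium = 2000.0

# Per the report, subsidized policyholders pay on average 35-40 percent
# of the full-risk rate.
subsidized_low = 0.35 * full_risk_premium
subsidized_high = 0.40 * full_risk_premium

# The implicit annual subsidy per policy is the shortfall between the
# full-risk premium and what the policyholder actually pays.
subsidy_min = full_risk_premium - subsidized_high  # policyholder pays 40%
subsidy_max = full_risk_premium - subsidized_low   # policyholder pays 35%

print(f"Subsidized premium range: ${subsidized_low:.0f}-${subsidized_high:.0f}")
print(f"Implicit annual subsidy:  ${subsidy_min:.0f}-${subsidy_max:.0f}")
```

Under these assumed numbers, a subsidized policyholder would pay $700-$800 a year while the program forgoes $1,200-$1,300 per policy, which is the gap the report's options aim to close.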
HUBZone program fraud and abuse continue to be a problem for the federal government. We identified 19 firms in Texas, Alabama, and California participating in the HUBZone program even though they clearly do not meet program requirements (i.e., the principal office location or percentage-of-employees residency requirements and the subcontracting limitations). Although we cannot conclude from these cases whether the problem is systemic, the issue of misrepresentation clearly extends beyond the Washington, D.C., metropolitan area where we conducted our initial investigation. In fiscal years 2006 and 2007, federal agencies obligated nearly $30 million to these 19 firms for performance as the prime contractor on federal HUBZone contracts. HUBZone regulations also place restrictions on the amount of work that can be subcontracted to non-HUBZone firms. Specifically, HUBZone regulations generally require a firm to expend at least 50 percent of the personnel costs of a contract on its own employees. As part of our investigative work, we found examples of service firms that subcontracted most HUBZone contract work to other non-HUBZone firms and thus did not meet this program requirement. When a firm subcontracts the majority of its work to non-HUBZone firms, it undermines the HUBZone program’s stated purpose of stimulating development in economically distressed areas and evades the principal office and 35 percent employee residency eligibility requirements. Examples of firms that did not meet HUBZone requirements included the following: An environmental consulting firm located in Fort Worth, Texas, violated HUBZone program requirements because it did not expend at least 50 percent of personnel costs on its own employees or use personnel from other HUBZone firms. From fiscal year 2006 through fiscal year 2007, the Department of the Army obligated more than $2.3 million in HUBZone contracts to this firm.
At the time of our investigation, company documents showed that the company was subcontracting from 71 to 89 percent of its total contract obligations to other non-HUBZone firms—in some cases, large firms. The principal admitted that her firm was not meeting the subcontracting performance requirement of HUBZone regulations. Further, the principal stated that the firm made bids on HUBZone contracts knowing that the company would have to subcontract work to other firms after the award. The principal added that other large firms use HUBZone firms in this manner, referring to these HUBZone firms as “contract vehicles.” A ground maintenance services company located in Jacksonville, Alabama, failed to meet both the principal office and 35 percent residency requirements. From fiscal year 2006 through fiscal year 2007, this firm received more than $900,000 in HUBZone set-aside obligations. However, our investigation found that the purported principal office was in fact a residential trailer occupied by someone not associated with the company. The company had represented its office as located in “suite 19,” when in reality the address was associated with trailer 19 in a residential trailer park. The two employees of the firm—a father and a son—lived in non-HUBZone areas located about 90 miles from the trailer park. This firm also subcontracted most of its HUBZone work to non-HUBZone firms. An information technology firm in Huntsville, Alabama, failed to meet both the principal office and 35 percent residency requirements. From fiscal year 2006 through fiscal year 2007, federal agencies obligated over $5 million in HUBZone awards to this firm, consisting mainly of two HUBZone set-aside contracts. Based on our review of payroll records and written correspondence that we received from the firm, we determined that only 18 of the firm’s 116 employees (16 percent) employed in December 2007 lived in HUBZone-designated areas.
In addition, our investigation found that no employees worked at the address listed as the firm’s principal office. The firm’s president acknowledged that he “had recently become aware” that he was not in compliance with HUBZone requirements and was taking “corrective actions.” However, the firm continued to represent itself as a HUBZone firm even after this acknowledgment. According to HUBZone regulations, persons or firms are subject to criminal penalties for knowingly making false statements or misrepresentations in connection with the HUBZone program, including failure to correct “continuing representations” that are no longer true. During the application process, applicants are not only reminded of the program eligibility requirements but also required to agree to the statement that anyone failing to correct “continuing representations” shall be subject to fines, imprisonment, and penalties. Further, the Federal Acquisition Regulation (FAR) requires all prospective contractors to update the government’s Online Representations and Certifications Application, which includes a statement certifying whether the firm is currently a HUBZone firm and that there have been “no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA.” We found that all 19 of the firms that did not meet HUBZone eligibility requirements continued to represent themselves to SBA as eligible HUBZone firms. Because these 19 firms clearly were not eligible, we consider each firm’s continued representation indicative of fraud, abuse, or both. Our June 2008 report and July 2008 testimony clearly showed that SBA did not have effective internal controls related to the HUBZone program. In response to our findings and recommendations, SBA initiated a process of reengineering the HUBZone program.
SBA officials stated that this process is intended to make improvements to the program that are necessary for making the program more effective while also minimizing fraud and abuse. To that end, SBA has hired business consultants and reached out to GAO in an attempt to identify control weaknesses in the HUBZone program and to strengthen its fraud prevention controls. As of the end of our fieldwork, SBA did not have in place the key elements of an effective fraud prevention system. A well-designed fraud prevention system (which can also be used to prevent waste and abuse) should consist of three crucial elements: (1) up-front preventive controls, (2) detection and monitoring, and (3) investigations and prosecutions. For the HUBZone program this would mean (1) front-end controls at the application stage, (2) fraud detection and monitoring of firms already in the program, and (3) decertification from the program of ineligible firms and the aggressive pursuit and prosecution of individuals committing fraud. Preventive controls. We have previously reported that fraud prevention is the most efficient and effective means to minimize fraud, waste, and abuse. Thus, controls that prevent fraudulent firms and individuals from entering the program in the first place are the most important element in an effective fraud prevention program. SBA officials stated that as part of their interim process they are now requesting from all firms that apply to the HUBZone program documentation that demonstrates their eligibility. While requiring additional documentation has some value as a deterrent, the most effective preventive controls involve the verification of information, such as verifying a principal office location through an unannounced site visit. Moreover, SBA did not adequately field-test its interim process for processing applications. 
If it had done so, SBA would have known that it did not have the resources to carry out its review of applications effectively and in a timely manner. As a result, SBA had a backlog of about 800 HUBZone applications as of January 2009. At that time, SBA officials stated that it would take about 6 months to process each HUBZone application—well over the 1-month goal set forth in SBA regulations. Detection and monitoring. Although preventive controls are the most effective way to prevent fraud, continual monitoring is an important component in detecting and deterring fraud. We reported in June 2008 that the mechanisms SBA used to monitor HUBZone firms provided limited assurance that only eligible firms participate in the program. SBA officials stated that during this fiscal year, they will be conducting program examinations of all HUBZone firms that received contracts in fiscal year 2007 to determine whether they still meet HUBZone requirements. In addition, SBA officials stated that as of September 2008, SBA had eliminated its backlog of recertifications. Although SBA has initiated several positive steps, it will need to make further progress to achieve an effective fraud monitoring program, including steps to (1) verify the validity of a stated principal office during its recertification and application processes; (2) establish a streamlined and risk-based methodology for selecting firms for program examinations going forward; (3) incorporate an “element of surprise” into its program examinations, such as using random, unannounced site visits; and (4) review whether HUBZone firms are expending at least 50 percent of the personnel costs of a contract on their own personnel. Investigation and prosecution. The final element of an effective fraud prevention system is the aggressive investigation and prosecution of individuals who commit fraud against the federal government.
However, SBA currently does not have an effective process for investigating fraud and abuse within the HUBZone program. To date, other than the firms identified by our prior investigation, the SBA program office has never referred any firms for debarment and/or suspension proceedings based on findings from its program eligibility reviews. By failing to hold firms accountable, SBA has sent a message to the contracting community that there is no punishment or consequences for committing fraud or abusing the intent of the HUBZone program. SBA has taken some enforcement steps on the 10 firms that we found did not meet HUBZone program requirements as of July 2008. According to SBA, as of January 2009, 2 of the firms have been removed from the program and 2 others are in the process of being removed. However, SBA’s failure to examine some of the most egregious cases we previously identified has resulted in an additional $7.2 million in HUBZone obligations and about $25 million in HUBZone set-aside or price preference contracts to these firms. In the written statement for the July 2008 hearing, the Acting Administrator of SBA stated that SBA would take “immediate steps to require site visits for those HUBZone firms that have received HUBZone contracts and will be instituting suspension and debarment proceedings against firms that have intentionally misrepresented their HUBZone status.” However, as of February 2009, according to SBA’s Dynamic Small Business Web site, 7 of the 10 firms that we investigated were still HUBZone certified. SBA has removed 2 firms from the HUBZone program and is in the process of providing due process to 2 additional firms to determine whether they should be removed. SBA officials stated that no action will be taken on 3 firms because SBA’s program evaluations concluded that these firms met all the eligibility requirements of the HUBZone program. 
We attempted to verify SBA’s work but were not provided with the requested documentation to support its conclusion that the firms had moved into compliance after our July 2008 testimony. SBA officials said that they have not yet performed program evaluations for 3 of the most egregious firms because they are experiencing technical problems with SBA’s caseload system. As such, these 3 firms remain eligible to receive HUBZone set-aside contracts. SBA is also pursuing suspension and debarment actions for 7 of these firms, and the Department of Justice is considering civil actions in 5 of the 10 cases. We will be referring all the cases we identified to SBA for further action. In our report, we also recommended that the Administrator of SBA expeditiously implement our June 2008 recommendations and take the following four actions: Consider incorporating a risk-based mechanism for conducting unannounced site visits as part of the screening and monitoring process. Consider incorporating policies and procedures into SBA’s program examinations for evaluating whether a HUBZone firm is expending at least 50 percent of the personnel costs of a contract using its own employees. Ensure appropriate policies and procedures are in place for the prompt reporting and referral of fraud and abuse to SBA’s Office of Inspector General as well as SBA’s Suspension and Debarment Official. Take appropriate enforcement actions on the 19 HUBZone firms we found to be in violation of HUBZone program requirements, including, where applicable, immediate removal or decertification from the program and coordination with SBA’s Office of Inspector General as well as SBA’s Suspension and Debarment Official. In written comments on a draft of our report, SBA agreed with three of our four recommendations.
SBA disagreed with our recommendation to consider incorporating policies and procedures into SBA’s program examinations for evaluating if a HUBZone firm is complying with the performance of work requirements by expending at least 50 percent of the personnel costs of a contract on its own employees. SBA stated that although this requirement is included in SBA HUBZone regulations, it is not a criterion for HUBZone program eligibility but rather a mandatory contract term. SBA stated that contracting officers are required by the FAR to insert such clauses regarding subcontracting limitations. While we recognize that contracting officers have a responsibility for monitoring the subcontracting limitation, SBA also has this responsibility. In order to receive HUBZone certification, a firm must certify to SBA that it will abide by this performance requirement, and SBA is required by statute to establish procedures to verify such certifications. Therefore, we continue to believe that SBA should consider incorporating policies and procedures into its program examinations for evaluating if a HUBZone firm is meeting the performance of work requirements. Madam Chairwoman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
In addition to the individual named above, Bruce Causseaux, Senior Level Contract and Procurement Fraud Specialist; Matt Valenta, Assistant Director; Erika Axelson; Gary Bianchi; Donald Brown; Eric Eskew; Dennis Fauber; Craig Fischer; Robert Graves; Betsy Isom; Jason Kelly; Julia Kennon; Barbara Lewis; Olivia Lopez; Jeff McDermott; Andrew McIntosh; John Mingus; Andy O’Connell; Mary Osorno; and Chris Rodgers also provided assistance on this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Created in 1997, the HUBZone program provides federal contracting assistance to small businesses in economically distressed communities, or HUBZone areas, with the intent of stimulating economic development in those areas. On July 17, 2008, we testified before Congress that SBA's lack of controls over the HUBZone program exposed the government to fraud and abuse and that SBA's mechanisms to certify and monitor HUBZone firms provided limited assurance that only eligible firms participate in the program. In our testimony, we identified 10 firms from the Washington, D.C., metropolitan area that were participating in the HUBZone program even though they clearly did not meet eligibility requirements. Of the 10 firms, 6 met neither the principal office nor the employee residency requirement, while 4 met the principal office requirement but significantly failed the employee residency requirement. We reported in our July 2008 testimony that federal agencies had obligated a total of nearly $26 million in HUBZone contract obligations to these 10 firms since 2006. After the hearing, Congress requested that we perform a follow-on investigation. We describe the results of this investigation and further background about the HUBZone program in a companion report that is being made public today. This testimony summarizes our overall findings. Specifically, this testimony will address (1) whether cases of fraud and abuse in the program exist outside of the Washington, D.C., metro area; (2) what actions, if any, SBA has taken to establish an effective fraud prevention system for the HUBZone program; and (3) what actions, if any, SBA has taken on the 10 firms that we found misrepresented their HUBZone status in July 2008. In summary, we found that fraud and abuse in the HUBZone program extend beyond the Washington, D.C., area.
We identified 19 firms in Texas, Alabama, and California participating in the HUBZone program that clearly do not meet program requirements (i.e., the principal office location or percentage-of-employees residency requirements and the subcontracting limitations). In fiscal years 2006 and 2007, federal agencies obligated nearly $30 million to these 19 firms for performance as the prime contractor on HUBZone contracts and a total of $187 million on all federal contracts. Although SBA has initiated steps to strengthen its internal controls as a result of our 2008 testimonies and report, substantial work remains to put in place a fraud prevention system that includes effective fraud controls consisting of (1) front-end controls at the application stage, (2) fraud detection and monitoring of firms already in the program, and (3) the aggressive pursuit and prosecution of individuals committing fraud. SBA has taken some enforcement steps on the 10 firms GAO previously identified as knowingly failing to meet HUBZone program requirements. However, as of February 2009, according to SBA's Dynamic Small Business Web site, 7 of the 10 firms that we investigated were still HUBZone certified. SBA's failure to promptly remove firms from the HUBZone program and examine some of the most egregious cases from our testimony has resulted in an additional $7.2 million in HUBZone obligations and about $25 million in HUBZone contracts to these firms.
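The eligibility tests at issue in these HUBZone cases can be sketched as a simple check. The thresholds (a principal office in a HUBZone, at least 35 percent of employees residing in HUBZone areas, and at least 50 percent of a contract's personnel costs expended on the firm's own employees) come from this testimony; the function and the sample cost figures are hypothetical illustrations, not SBA's actual certification logic.

```python
# Hedged sketch of the three HUBZone tests described in this testimony.
# The thresholds come from the report; the function name and sample
# personnel-cost figures are hypothetical.

def meets_hubzone_tests(principal_office_in_hubzone: bool,
                        employees_in_hubzone: int,
                        total_employees: int,
                        own_personnel_costs: float,
                        total_personnel_costs: float) -> bool:
    # At least 35 percent of employees must reside in HUBZone areas.
    residency_ok = employees_in_hubzone / total_employees >= 0.35
    # At least 50 percent of a contract's personnel costs must be
    # expended on the firm's own employees (the subcontracting limit).
    subcontracting_ok = own_personnel_costs / total_personnel_costs >= 0.50
    return principal_office_in_hubzone and residency_ok and subcontracting_ok

# The Huntsville firm described above: 18 of 116 employees (about
# 16 percent) lived in HUBZone areas, so the residency test fails
# regardless of the (hypothetical) cost figures used here.
print(meets_hubzone_tests(True, 18, 116, 600_000.0, 1_000_000.0))  # False
```

Failing any one of the three tests, as each of the 19 firms did, makes a firm ineligible, which is why continued self-certification by these firms is treated as misrepresentation.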
In general, federal housing assistance is available only to people or households that have low incomes. Consequently, income, not age, is the single biggest factor in deciding on an elderly person’s need and eligibility for federal housing assistance. HUD also identifies problems that, regardless of age, exacerbate a person’s need for assisted housing. These problems include housing that costs more than 30 percent of a person’s income or is inadequate or substandard. Figure 1 shows the magnitude of the housing needs among low-income elderly households in each state. According to HUD, the need for housing assistance, for the elderly as for the general population, far outstrips the federal resources available to address that need. As a result, federal housing assistance, which is provided through a variety of programs, reaches just over one-third of the elderly households that need assistance. Furthermore, most of the programs are maintaining, rather than increasing, the level of assistance they provide. Only two of these programs—Section 202 and HOME—are under HUD’s jurisdiction and are receiving annual appropriations for the sole purpose of increasing housing assistance for elderly and other households. Under the Section 202 program, HUD provides funding to private nonprofit organizations to expand the supply of housing for the elderly by constructing or rehabilitating buildings or by acquiring existing structures from the Federal Deposit Insurance Corporation. Since it was first created in 1959, the Section 202 program has provided over $10 billion to the sponsors of 4,854 projects containing 266,270 housing units. At the same time that HUD awards Section 202 funds, it enters into contracts with these nonprofit organizations to provide them with project-based rental assistance. This assistance subsidizes the rents that elderly residents with very low incomes will pay when they move into the building. 
In addition to having a very low income, each household in a Section 202 project must have at least one resident who is at least 62 years old. Finally, sponsoring organizations must identify how they will ensure that their residents have access to appropriate supportive services, such as subsidized meals programs or transportation to health care facilities. When HUD evaluates sponsors’ applications, it awards more points to, and is thus more likely to fund, applicants who have experience providing such services or have shown that they will readily be able to do so. The purpose of the HOME program is to address the affordable housing needs of individual communities. As a result, the day-to-day responsibility for implementing the program rests not with HUD, but with over 570 participating jurisdictions. These participating jurisdictions can be states, metropolitan cities, urban counties, or consortia made up of contiguous units of general local government. HUD requires these jurisdictions to develop consolidated plans in which they identify their communities’ most pressing housing needs and describe how they plan to address these needs. Each year, HUD allocates HOME program funds to these jurisdictions and expects them to use the funds according to the needs they have identified in their consolidated plans. The legislation that created the HOME program allows—but does not require—those receiving its funds to construct multifamily rental housing for the elderly. Although the legislation authorizing the HOME program directs that its funds address the housing needs of low-income people, it allows local communities to choose from a variety of ways of doing so. These include the acquisition, construction, and rehabilitation of rental housing; the rehabilitation of owner-occupied homes; the provision of homeownership assistance; and the provision of rental assistance to lower-income tenants who rent their homes from private landlords. 
Finally, the legislation requires that communities target the rental assistance they choose to provide. Specifically, jurisdictions must ensure that for each multifamily rental project with at least five HOME-assisted units, at least 20 percent of the residents in the HOME-assisted units have incomes at or below 50 percent of the area’s median income; the remaining residents may have incomes up to 80 percent of the area’s median. The Section 202 program, far more often than the HOME program, is the source of funds for increasing the supply of multifamily rental housing for low-income elderly people. In comparison, through fiscal year 1996, participating jurisdictions had seldom chosen to use HOME funds to produce multifamily housing almost exclusively for the low-income elderly. This result is linked to differences in the purposes for which each program was created and the persons each was intended to serve. The Congress designed the Section 202 program to serve only low-income elderly households. In creating the HOME program, however, the Congress sought to give states and local communities the means and the flexibility to identify their most pressing low-income housing needs and to decide which needs to address through the HOME program. Consistent with each program’s intent, the Section 202 program focuses its benefits on the elderly, while the HOME program benefits those whom local communities choose to serve—regardless of age—through various kinds of housing assistance. From fiscal year 1992 through fiscal year 1996, over 1,400 Section 202 and HOME program multifamily rental housing projects for the elderly opened nationwide. These projects included 1,400 Section 202 projects with 51,838 rental units, providing homes for at least 47,823 elderly individuals, and 30 comparable HOME projects with 681 rental units, providing homes for at least 675 elderly individuals. On average, the Section 202 projects had 37 units, while the HOME projects had 23 units.
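The HOME rental targeting rule described above lends itself to a short illustrative check. The 5-unit threshold and the 20, 50, and 80 percent figures come from this report; the function and the sample incomes are hypothetical, not an actual HUD compliance tool.

```python
# Hedged sketch of the HOME rental targeting rule described above.
# The thresholds come from the report; the function and sample
# household incomes are hypothetical.

def meets_home_targeting(incomes_pct_of_ami):
    """incomes_pct_of_ami: each HOME-assisted unit's household income,
    expressed as a percentage of the area's median income (AMI)."""
    if len(incomes_pct_of_ami) < 5:
        # The rule applies only to projects with 5 or more HOME-assisted units.
        return True
    # At least 20 percent of residents must have incomes at or below 50% of AMI.
    low_income_share = sum(1 for i in incomes_pct_of_ami if i <= 50) / len(incomes_pct_of_ami)
    # All remaining residents must have incomes at or below 80% of AMI.
    all_within_cap = all(i <= 80 for i in incomes_pct_of_ami)
    return low_income_share >= 0.20 and all_within_cap

# A 10-unit example: 2 units at or below 50 percent of AMI (20 percent),
# the rest between 50 and 80 percent -- this satisfies the rule.
print(meets_home_targeting([45, 50, 60, 65, 70, 70, 75, 75, 80, 80]))  # True
```

The same project would fail the check if no unit housed a household at or below 50 percent of AMI, or if any household's income exceeded the 80 percent cap.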
Figure 2 illustrates the proportion of the total number of projects attributable to each program. Although only a small portion of the HOME projects were comparable to Section 202 projects, participating jurisdictions used HOME funds to assist low-income elderly people in other ways. Most of the elderly households that obtained assistance from the HOME program—over 70 percent—used that assistance to rehabilitate the homes they already owned and in which they still lived. The remaining HOME assistance benefiting the elderly did so by providing tenant-based rental assistance; helping new homebuyers make down payments and pay the closing costs associated with purchasing homes; and acquiring, constructing, or rehabilitating single-family and multifamily rental housing. In total, the HOME program assisted 21,457 elderly households, approximately 40 percent as many as the Section 202 program assisted during the same 5-year period. Figure 3 illustrates how the HOME program assisted the elderly during fiscal years 1992 through 1996. In nearly all cases, Section 202 projects rely solely on HUD to pay the costs of construction and subsidize the rents of the low-income elderly tenants who occupy the buildings. In contrast, HOME-assisted multifamily rental housing projects rely on multiple sources of funding, including private financing, such as bank mortgages and equity from developers. At the HOME-funded projects we visited, the use of HOME funds reduced the amount that the projects’ sponsors had to borrow for construction or made borrowing unnecessary. Reducing or eliminating the need to go into debt to build HOME projects enables the projects to be affordable to households with lower incomes than would otherwise be the case. For the Section 202 projects that became occupied during fiscal years 1992 through 1996, HUD provided over $2.9 billion in capital advances and direct loans. The average cost of these projects was about $2.1 million.
HUD expects this assistance to be the only significant source of funds for the development of Section 202 projects. Furthermore, when HUD awards Section 202 funds, it also enters into contracts with the sponsoring organizations to provide project-based rental assistance to the tenants who will occupy the buildings once they open. As a result, HUD expects that successful sponsors will be able to develop and build multifamily housing projects that will be affordable to low-income elderly households. The nonprofit sponsors of two of the eight Section 202 projects we visited said that the Section 202 funds were not sufficient to cover all of the costs associated with building their projects. HUD officials told us that this is usually the case when a sponsor (1) includes amenities in a project, such as balconies, for which HUD does not allow the use of Section 202 funds; (2) incurs costs not associated with the site on which the project is being built, such as costs to make the site more accessible to public transportation; or (3) incurs costs that exceed the amount HUD will allow, which can happen when a sponsor pays more for land than HUD subsequently determines the land is worth. Consequently, in some cases, sponsors of the projects we visited sought funding from other sources to make up for the shortfall. Those that found HUD’s funding insufficient primarily cited the high cost of land in their area or factors unique to the site on which they planned to build as the reason for the higher costs. For example, one sponsor in California said that the Section 202 funding was not sufficient to cover the high cost of land and of designing a project that was compatible with local design preferences. Several of the Section 202 projects we visited received additional financial support from their nonprofit sponsors or in-kind contributions from local governments (such as zoning waivers or infrastructure improvements). 
However, this added support was typically a very small portion of a project’s total costs. For example, the Section 202 funding for the construction of a project in Cleveland was nearly $3 million. Cleveland used $150,000 of its Community Development Block Grant (CDBG) funds to help the sponsor defray costs incurred in acquiring the land on which the project was built. Another nonprofit sponsor in California estimated that the development fee waivers and other concessions the city government made for its project were worth over $160,000. The total cost for this project was over $4 million. However, attempts to use other funds have not always been successful. For example, one of the Section 202 projects we visited obtained HOME and CDBG funds from the local county government, but officials from the HUD regional office subsequently reduced the final amount of the project’s capital advance to offset most of these funds. The project’s nonprofit sponsor had sought additional funding because the cost of land exceeded the appraised value that HUD had determined (and would thus agree to pay) and because the sponsor incurred additional costs to extend utility service onto the property where the project was being built. According to the sponsor, HUD reduced the project’s Section 202 capital advance because the sponsor was using other federal funds to meet expenses for which HUD had granted the Section 202 funding. The HOME program is not meant to be a participating jurisdiction’s sole source of funds for the development of affordable housing. By statute, the local or state government must contribute funds to match at least 25 percent of the HOME funds the jurisdiction uses to provide affordable housing each year. Additionally, one of the purposes of the HOME program is to encourage public-private partnerships by providing incentives for state and local governments to work with private and nonprofit developers to produce affordable housing.
As a result, HOME projects typically attract significant levels of additional public and private funding from sources such as other federal programs, state or local housing initiatives, low-income housing tax credit proceeds, and donations or equity contributions from nonprofit groups. While a participating jurisdiction could conceivably develop new multifamily rental housing using only its allocation of HOME funds, HUD officials questioned why any jurisdiction might choose to do so. Multifamily rental housing is costly to build, and one such project could easily consume a community’s entire allocation of HOME funds in a given year if no other funding were used. Furthermore, using HOME funds to leverage other funds can not only significantly increase the total funding available for housing assistance but also allow communities to offer more types of housing assistance than if they devoted their entire HOME allocation to a single multifamily rental project. Overall, with its current funding of $1.4 billion (for fiscal year 1997), the HOME program is a significant source of federal housing assistance. However, it has not been a major source of funds for new multifamily rental housing designed primarily or exclusively to serve the low-income elderly. From fiscal year 1992 through fiscal year 1996, such projects received a small percentage of the total HOME funds allocated to participating jurisdictions. During these 5 years, the jurisdictions built or provided financial support for 30 multifamily rental projects with 681 units, of which the elderly occupied at least 90 percent. These projects were financed with over $12 million in HOME funds. According to HUD’s data, these funds leveraged an additional $65 million in other public and private financing. Figure 4 illustrates the multiple funding sources used for these HOME projects. 
Six of the eight HOME projects we visited had received funding from multiple public and private financing sources, reflecting the national pattern at the local level. These projects’ developers and/or sponsors told us that using HOME funds in conjunction with other funding sources enabled them to reduce the amount of debt service on their projects (or eliminate the need for borrowing altogether) so that they could charge lower rents and be affordable to more people with lower incomes. Two of the projects we visited differed from the others because they did not use the federal Low-Income Housing Tax Credit program and did not have a conventional mortgage or other bank financing. The same participating jurisdiction developed both projects using only public resources, including HOME and CDBG funds, donations of city-owned land, and interior and exterior labor provided by the city’s work force. HUD does not pay for supportive services through the HOME program but does, under limited circumstances, do so through the Section 202 program. Information on the provision of services is generally not available because neither program collects nationwide data on the availability of such services at the projects each has funded. For most of the Section 202 and HOME projects we visited, some supportive services, such as group social activities or subsidized meals programs, were available to the residents on-site, but usually only to the extent that the projects could generate operating income to pay for them. Rather than provide such services themselves, the projects drew on various supportive, educational, social, or recreational services available in their communities. Furthermore, most of the projects that we visited included common areas and activity rooms that gave the residents places to socialize and provided space for hosting community-based and other services. 
All eight of the Section 202 and seven of the HOME projects we visited ensured that their residents had access to supportive services. The range and nature of the services depended on the amount of operating income that was available to pay for the services and/or the proximity of community-based services to the projects. In addition, one of the Section 202 projects had a grant from HUD to hire a part-time service coordinator; the remaining Section 202 projects paid for a service coordinator from the project’s operating revenues, expected their on-site resident managers to serve as service coordinators, or provided services at nearby facilities. None of the HOME projects received outside support through grants from HUD and/or project-based rental assistance to pay for supportive services. Six of the eight HOME projects and all but one of the Section 202 projects that we visited expected an on-site manager to coordinate the provision of supportive services to elderly residents or relied on rent revenue to pay for a service coordinator. The costs of having on-site managers, like the costs of providing most of the service coordinators, were covered by the projects’ operating incomes. One of the Section 202 projects that relied on rent revenue provided few services on-site, but its residents had access to a wide variety of services, including a subsidized meals program, at another nearby Section 202 project developed by the same sponsor. In another case, the nonprofit sponsor of the Section 202 project consulted a nonprofit affiliate that has developed services for various housing projects developed by the sponsor. In addition to keeping up to date with the needs of their residents, the sponsors or management companies of the Section 202 projects we visited expected their service coordinators or resident managers to refer residents to community-based services as needed or to bring community-based services to their facilities on a regular or occasional basis. 
One of the Section 202 projects we visited had hired a part-time service coordinator using a grant from HUD’s Service Coordinator Program. According to HUD, resident managers cannot always provide supportive services because they may lack the resources to do so and/or the experience needed to provide such services. As a result, the Congress began funding the Service Coordinator Program in 1992 to help meet the increasing needs of elderly and disabled residents in HUD-assisted housing and to bridge the gap between these needs and resident managers’ resources and experience. The program awarded 5-year grants to selected housing projects to pay for the salaries of their service coordinators and related expenses. The managers of this Section 202 project doubted that their operating revenues would be sufficient to continue paying for the coordinator when their HUD grant expires. One Section 202 project that we visited was unique in that it did not have a service coordinator, but the project’s management company had structured the duties of the resident manager to include activities that a service coordinator performs. The project’s management company could do so because it manages over 40 Section 202 projects nationwide and handles nearly all financial, administrative, and recordkeeping duties in one central location so that its resident managers have time to become more involved with their residents. The two HOME projects we visited that had neither a service coordinator nor an expectation that a resident manager would fill this role were the two projects that housed both the low-income elderly and families. At one of these projects, a nearby city adult center offered numerous opportunities for supportive services similar to those other projects provided on-site. At the second project, a social worker from the city visited the project on a part-time basis to provide information about and referrals to community-based services. 
All of the Section 202 projects we visited had common or congregate areas for group activities, socializing, and supportive services. Six of the eight HOME projects we visited had similar common areas. At both the Section 202 and the HOME projects, these common areas were often the places in which residents could take advantage of the supportive services the project’s manager or service coordinator had provided directly or, in the case of community-based services, had arranged to come to the project on a regular or occasional basis. The only projects that did not have common or congregate areas were the two HOME projects that housed a mixture of low-income families and elderly residents. One was a traditional multifamily apartment building in which 19 of the 29 units were set aside for the elderly. Although this project had no congregate space, it was near one of the city’s adult centers that provides adult education, recreational classes, and other services for seniors and others from the community. The second was a single-room-occupancy project in which about 20 percent of the tenants were elderly, although the project did not set aside a specific number or percentage of the units for the elderly. This project had more limited common areas, parts of which were devoted to kitchen facilities on each floor because single-room-occupancy units do not have full kitchens themselves. We provided a draft of this report to HUD for its review and comment. HUD generally agreed with the information presented in this report but said that the report (1) understates the contributions of the HOME program in providing assistance to the elderly and (2) assumes that the Section 202 model is the preferred way of providing housing for the elderly, without giving sufficient recognition to the other kinds of assistance the elderly receive from the HOME program. 
In discussing the relative contributions of the HOME and the Section 202 programs, HUD said that comparable production of multifamily rental projects for the elderly could not have occurred in the first few years of the HOME program (which was first funded in fiscal year 1992) because of the lead time necessary for planning, selecting, and constructing projects. HUD also questioned whether our data included all HOME projects that might be comparable to Section 202 projects by taking into account the (1) projects developed through the substantial rehabilitation of existing buildings (as opposed to new construction), (2) projects in which vacant units might later be occupied by the elderly in sufficient numbers to achieve comparability with Section 202 projects, (3) projects in which 50 percent or more of the residents were elderly, and (4) projects that were under way but had not been completed at the close of fiscal year 1996. We agree that our review probably would have identified more comparable HOME projects if the program had been funded before fiscal year 1992, and we have added language to this effect in the report. Our analysis and the data we present include projects from the Section 202 and HOME programs that were substantial rehabilitations of existing buildings. We agree that filling vacant units with elderly residents could increase the number of comparable HOME projects in the future, but any such units in our analysis were vacant as of the close of fiscal year 1996, and our report discusses each program’s activity only through that date. Data on the HOME projects in which 50 percent or more of the residents were elderly are reflected in figure 3 of this report, which illustrates the different types of HOME assistance the elderly received. We did not compare these data with Section 202 data because, as we note, comparable HOME projects are those in which 90 percent or more of the households have one elderly resident. 
We agree that some HOME projects that were under way but had not been completed at the close of fiscal year 1996 might in the future be comparable to Section 202 projects, but we note that the number of comparable Section 202 projects would also be greater because projects funded by the Section 202 program were also under way but had not opened as of this date. In stating its belief that this report assumes the Section 202 model is the preferred way of providing housing for the elderly, HUD expressed concern that we did not give sufficient recognition to the assistance the HOME program provides the elderly by other means. HUD noted, for example, that the HOME program provides a viable alternative to multifamily rental housing by offering assistance to the elderly to rehabilitate the homes they own with special features that allow them to continue to live independently. HUD also noted that smaller rental projects than those we compared with the Section 202 program (projects with 1-4 units) also present a viable alternative to multifamily rental housing, provided adequate supportive services are available if needed. We disagree with HUD’s comment that this report assumes the Section 202 model is the preferred way of providing housing assistance for the elderly. In this report, we have described the operations of the two programs and presented data on the assistance each has provided nationally and at selected projects. We have not evaluated the manner in which either program provides assistance, and we have not expressed a preference for either approach to delivering housing assistance to elderly households. We have added statements to this effect to the report to address HUD’s concern. We acknowledge that the HOME program provides housing assistance to the elderly in several ways other than through the production of new multifamily rental housing that is set aside almost exclusively for the elderly. 
However, because this report describes comparable Section 202 and HOME-funded housing assistance and because the Section 202 program provides only one kind of housing assistance, we focused on the multifamily rental projects funded by the HOME program that are comparable to those funded by the Section 202 program. To address HUD’s concerns and to provide further recognition of the HOME program’s other types of housing assistance, we have revised the sections of the report cited by HUD to more prominently reflect the complete range of HOME-funded activities benefiting the elderly. HUD also provided several technical and editorial corrections to the report, which we have incorporated as appropriate. HUD’s comments are reproduced in appendix II of this report. The information we present in this report describes the need for assisted housing, discusses the operations of the Section 202 and HOME programs, and presents data on the assistance each program has provided. We did not evaluate the manner in which either program provides assistance, and we did not express a preference in the report for either one of the approaches to delivering assistance to elderly households. To determine the amount and types of new assisted housing that the Section 202 and HOME programs have provided for the elderly, we obtained and analyzed data from HUD headquarters on the Section 202 and HOME projects completed from fiscal year 1992 through fiscal year 1996. Fiscal year 1992 was the first year in which the HOME program received funding, and fiscal year 1996 was the most recently completed fiscal year for which data from the programs were available when we began our review. Our analysis of the HOME data also provided information on the amount and sources of funding for multifamily projects developed under the HOME program. 
The Section 202 data did not include information on any other federal or nonfederal funding these projects may have received because a Section 202 allocation is intended to cover 100 percent of a project’s development costs. In addition to using these data, we analyzed special HUD tabulations of Census data to identify the level of need among the elderly for housing assistance in each state. We examined HUD’s data on the HOME program to identify all types of housing assistance that the program has provided for elderly households, but we also analyzed these data by the type of assistance in order to obtain information on the HOME projects that are comparable to Section 202 projects. To do so, we focused our analysis on the HOME multifamily projects in which 90 percent or more of the residents are elderly because, at a minimum, 90 percent of the residents of Section 202 projects must be elderly (before 1991, 10 percent could be persons at least 18 years old with a handicap). Throughout our review, we also discussed housing assistance for the elderly with officials from HUD’s Section 202 and HOME programs, HUD’s Office of Policy Development and Research, and the Bureau of the Census. In addition, we reviewed relevant documents from each program and prior HUD and Census reports on housing needs of the elderly. We supplemented this national information on each program by visiting a total of 16 projects to obtain more detailed data than HUD collects centrally on the use of other federal and nonfederal funding and the presence or availability of supportive services for elderly residents. Using Section 202 and HOME program data, we judgmentally selected two Section 202 and two HOME projects in each of four states—California, Florida, North Carolina, and Ohio. We selected these states because they have relatively high concentrations of low-income elderly residents and numbers of Section 202 and HOME-funded projects. 
In each state, we selected individual Section 202 and HOME projects that were in the same vicinity and were roughly comparable in size. Nearly all of these projects were reserved exclusively for the elderly or had a portion of their units set aside for the elderly. In one case, about 20 percent of a HOME-funded project’s residents were elderly, although neither the project nor any portion of its units was explicitly reserved for elderly residents. At each project we visited, we discussed the project’s history and financing and the availability of supportive services with the sponsor or developer and relevant local and HUD officials. The observations we make about the individual projects we visited are not generalizable to all Section 202 or HOME-funded projects because we judgmentally selected these projects and did not visit a sufficient number from each program to draw conclusions about the universe of such projects. We did not assess the reliability of the data we obtained and analyzed from HUD’s Section 202 and HOME program databases. However, throughout our review we consulted with the appropriate HUD officials to ensure we were analyzing the relevant data elements for the purposes of this report. Furthermore, the information we obtained from these databases was generally consistent with our observations during our site visits to the projects we selected using these databases. We conducted our work from April through October 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees, the Secretary of Housing and Urban Development, and the Director of the Office of Management and Budget. We will make copies available to others on request. Please call me at (202) 512-7631 if you or your staff have any questions about the material in this report. Major contributors to this report are listed in appendix III. 
As part of our review, we visited 16 low-income, multifamily rental projects—4 each in California, Florida, North Carolina, and Ohio—to obtain information that the Department of Housing and Urban Development (HUD) does not collect centrally and to discuss with program participants their experience in applying for, developing, and operating these projects. In each state, two of the projects we visited were funded by the Section 202 program and two received funds from the HOME Investment Partnership (HOME) program. As we noted in the Scope and Methodology section of this report, we judgmentally selected these states because, compared with other states, they had relatively high concentrations of low-income elderly residents and numbers of Section 202 and HOME-funded projects. We selected individual Section 202 and HOME projects that were in the same vicinity and were roughly comparable in size. During each site visit, we discussed the history, financing, and availability of supportive services with the sponsor or developer of the project. We also discussed these issues with on-site management agents, local officials administering the HOME program, and HUD Section 202 and HOME field office officials. At each project, we walked through the grounds, selected residential units, and any common areas available to the residents for group activities. Typically, the Section 202 projects we visited were high- or mid-rise apartment buildings with elevators, laundry facilities, and one or more community rooms in which residents participated in group activities and, in some cases, meals programs. In one project, which consisted of more traditional garden apartments on a single level, each apartment had its own outdoor entrance and front porch. Ranging in size from 42 to 155 units, most of the projects (5 of 8) had a resident manager. 
Current Section 202 regulations require that all residents of these projects have very low incomes—that is, they must earn less than 50 percent of the median income for their area. The HOME projects we visited, ranging in size from 20 to 120 units, were more varied than the Section 202 projects. Several were high- or mid-rise buildings, although one of these was a single-room-occupancy hotel. In the single-room-occupancy hotel, the units were smaller than in a typical apartment building and much of the common space consisted of kitchen facilities, which were not included in the units themselves. At another project, the ground floor of the building housed a city-operated adult center offering a variety of educational and recreational programs. Other HOME projects we visited were multi-unit cottages or detached structures, each of whose units had its own outdoor entrance; one such project consisted of buildings scattered over three different sites. Unlike the Section 202 projects, two of the HOME projects housed both families and the elderly. As we noted earlier in this report, in each multifamily rental project with at least five HOME-assisted units, at least 20 percent of the residents in the HOME-assisted units must have very low incomes (at or below 50 percent of the area’s median income); the remaining units may be occupied by households with low incomes (up to 80 percent of the area’s median income). At the HOME projects we visited, half designated all of their units as HOME-assisted, meaning that the HOME program’s regulations about tenants’ incomes applied to those units; the other half designated some but not all of their units as HOME-assisted, meaning that the remaining units in these projects were subject either to the rules associated with other sources of funding or to those established by the local jurisdiction. Gwenetta Blackwell 
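The HOME income-targeting rule for multifamily rental projects described above can be expressed as a simple check. The thresholds (20 percent of HOME-assisted units at or below 50 percent of area median income, the rest at or below 80 percent) come from the report; the household incomes and area median income in the sketch are hypothetical:

```python
# Sketch of the HOME income-targeting rule for a project with five or
# more HOME-assisted units: at least 20 percent of those units must
# house very-low-income tenants (at or below 50 percent of area median
# income, AMI), and all of them must house tenants at or below 80
# percent of AMI. All dollar figures below are hypothetical.

def meets_home_targeting(unit_incomes, area_median_income):
    """unit_incomes: household income for each HOME-assisted unit."""
    very_low = sum(1 for inc in unit_incomes if inc <= 0.50 * area_median_income)
    all_low = all(inc <= 0.80 * area_median_income for inc in unit_incomes)
    return all_low and very_low >= 0.20 * len(unit_incomes)

ami = 60_000  # hypothetical area median income
incomes = [25_000, 28_000, 40_000, 45_000, 47_000]  # five HOME-assisted units
print(meets_home_targeting(incomes, ami))  # True: 2 of 5 units are very low income
```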
Pursuant to a congressional request, GAO reviewed the similarities and differences between the Department of Housing and Urban Development's (HUD) Section 202 Supportive Housing for the Elderly Program and HOME Investment Partnership Program, focusing on: (1) the amount and types of new multifamily rental housing that each program has provided for the elderly; (2) the sources of each program's funding for multifamily rental projects; and (3) the availability of supportive services for elderly residents. GAO noted that: (1) during fiscal year (FY) 1992 through FY 1996, the Section 202 program substantially exceeded the HOME program in providing multifamily rental housing that was set aside for elderly households; (2) over 1,400 Section 202 projects opened during this time, providing homes for nearly 48,000 elderly residents; (3) at the same time, the HOME program provided housing assistance to 21,457 elderly households, including 675 elderly residents in 30 multifamily rental projects comparable to those developed under the Section 202 program; (4) the Section 202 program produced new multifamily rental housing for low-income elderly households through new construction, rehabilitation of existing buildings, and acquisition of existing properties that the Federal Deposit Insurance Corporation obtained through foreclosure; (5) the HOME program provided housing assistance to address the most pressing housing needs that local communities and states identified among low-income people of all ages; (6) for the elderly, HOME assistance helped rehabilitate the homes they already owned and in which they still lived, provided tenant-based rental assistance, helped new homebuyers make down payments and pay closing costs, and made funds available to acquire, construct, or rehabilitate single-family and multifamily rental housing; (8) in the Section 202 program, the capital advance, which HUD provides to a project's sponsor, is the only significant source of funds for developing 
the project; (9) in general, a HOME project typically attracts significant levels of additional public and private funding; (10) HOME multifamily housing that is similar to Section 202 projects is usually financed with a combination of HOME funds and other federal and nonfederal funds; (11) HUD does not pay for supportive services, such as transportation or subsidized meals programs, through the HOME program but does do so under limited circumstances through the Section 202 program; (12) the extent to which the Section 202 and HOME projects provided these services on-site for their residents usually depended on each project's ability to generate the operating income needed to pay for the services; (13) these projects often depended on and referred their residents to community-based supportive services; (14) five of the eight Section 202 projects that GAO visited employed a staff person or expected their on-site resident manager to coordinate services; and (15) both projects in many cases had common areas or activity rooms that service providers or residents could use for community-based services, group social or educational activities, and dining.
Tax expenditures are tax provisions that are exceptions to the normal structure of the individual and corporate income taxes used to collect federal revenue. They represent revenue losses—the amount of revenue that the government forgoes—resulting from federal tax provisions that grant special tax relief for certain kinds of behavior by taxpayers or for taxpayers in special circumstances. For example, some tax expenditures are used to provide economic relief to selected groups of taxpayers, such as the elderly, the blind, and parents or guardians of children. Policymakers have also long used tax expenditures as a tool to accomplish national social and economic goals, such as encouraging people to save for retirement, promoting home ownership or investments, and funding certain research and development. Such goals are often similar to those of mandatory and discretionary spending programs, and tax expenditures may be used in combination with these types of spending to achieve national objectives. The Congressional Budget and Impoundment Control Act of 1974 identified six types of tax provisions that are considered tax expenditures when they are exceptions to the normal tax structure, as described in figure 1. The term tax expenditure has been used in the federal budget for four decades, and the tax expenditure concept is a tool that the federal government uses to allocate resources and achieve national priorities. In effect, many tax expenditures can be viewed as spending channeled through the tax code in that the federal government “spends” some of its revenue by forgoing taxation on some income. Many tax expenditures are comparable to mandatory spending programs, for which spending is determined by rules for eligibility, benefit formulas, and other parameters. Other tax expenditures, such as the low-income housing tax credit, resemble discretionary spending programs, for which Congress appropriates specific funding each year. 
Nonetheless, deciding whether a specific tax provision should be characterized as a tax expenditure is a matter of judgment, and disagreements about classification stem from different views about what should be included in the normal income tax structure. For example, some argue that the distinction between tax provisions labeled as tax expenditures and provisions that are not is arbitrary, and that labeling some provisions as tax expenditures implies that all income inherently belongs to the government and could be taxed. Tax expenditures have a significant effect on overall tax rates as well as the budget outlook. The revenue the federal government forgoes from tax expenditures reduces the tax base and requires higher tax rates to raise any given amount of revenue. In addition, tax expenditures, like any federal program spending, reduce the amount of funding available for other federal activities, increase the budget deficit, or reduce any budget surplus. In recent fiscal years, revenue losses from tax expenditures have been similar to discretionary spending levels (see figure 2). Treasury’s Office of Tax Analysis and the congressional Joint Committee on Taxation (JCT) each annually compile their own lists of tax expenditures and estimates of their revenue losses. They estimate revenue losses for a specific tax expenditure by comparing the revenue raised under current law with the revenue that would have been raised if that provision did not exist, assuming all other parts of the tax code remain constant and taxpayer behavior is unchanged. However, tax expenditures’ revenue loss estimates do not necessarily represent the exact amount of revenue that would be gained if a specific tax expenditure were repealed, since repeal of the tax expenditure could change taxpayer behavior in some way that would affect revenue. 
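The static estimation approach described above—comparing revenue under current law with revenue if the provision did not exist, holding taxpayer behavior constant—can be sketched with hypothetical figures (the tax rate, income, and deduction amounts below are illustrative only, not actual estimates):

```python
# Sketch of a static tax expenditure revenue-loss estimate, as described
# above: compare revenue under current law with revenue if the provision
# did not exist, with all other provisions and taxpayer behavior held
# fixed. All figures are hypothetical.

def revenue_loss(taxable_income: float, deduction: float, rate: float) -> float:
    """Static revenue loss from a single deduction-type tax expenditure."""
    revenue_without_provision = taxable_income * rate
    revenue_with_provision = (taxable_income - deduction) * rate
    return revenue_without_provision - revenue_with_provision

# A hypothetical $10,000 deduction claimed at a 25-percent marginal rate:
print(revenue_loss(100_000, 10_000, 0.25))  # 2500.0
```

As the report notes, this kind of estimate does not predict the revenue gained from repeal, since repeal could change taxpayer behavior.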
While, in general, the tax expenditures lists Treasury and JCT publish annually are similar, Treasury and JCT also have different methods for estimating tax expenditures, which can result in differing estimates. Federal budgeting processes provide the means for the government to make informed decisions about priorities across competing national needs and policies, and to allocate resources to those priorities. For the purposes of this report, federal budgeting processes can be broken down into three major phases: executive budget formulation, congressional budgeting processes (during which Congress adopts its budget and enacts laws appropriating funds for the fiscal year), and evaluation. Executive budget formulation. Every year, the President is required to compile and submit a budget to Congress for consideration, a process which OMB manages with the assistance of agencies. During the formulation of the President’s budget, discretionary spending is reviewed in detail and funding levels are proposed for existing and new discretionary programs. In addition, mandatory spending or new revenue provisions may be proposed. Congressional budgeting processes. Once Congress receives the President’s budget request, it begins its own process to enact a budget for the U.S. government. This process includes developing appropriations legislation for discretionary spending; proposing authorizing legislation to establish or continue the operation of federal programs or agencies; and proposing revenue measures, including tax expenditures, which can be either permanent or temporary. Two of the major phases of federal budgeting—executive budget formulation and congressional budgeting processes—are further described in figure 3. Evaluation. GPRA and subsequently GPRAMA established a performance planning and reporting framework for the federal government. 
This framework provides important tools that can help inform congressional and executive branch decision making to address challenges the federal government faces, including the allocation of scarce resources across different policy tools. GPRAMA requires that OMB and agencies establish different types of government-wide and agency goals, including: Cross-agency priority (CAP) goals. These are government-wide, outcome-oriented goals—for example, improving science, technology, engineering and math education—that cover a limited number of policy areas, as well as goals for management improvements needed across the government, such as delivering world-class customer service. OMB develops these goals in coordination with agencies. Strategic objectives. These are long-term agency goals that reflect the outcome or management impact an agency is trying to achieve, and express the results or direction the agency will work toward. To illustrate, one of the strategic objectives identified in HUD’s 2014-2018 strategic plan is to ensure sustainable investments in affordable rental housing. Agency priority goals (APG). These are near-term goals to reflect agencies’ highest priorities and represent an achievement that agency leaders want to accomplish within 2 years through focused leadership attention. They are to have clear completion dates, targets, and indicators that can be measured or marked by a milestone to gauge progress. GPRAMA requires that agencies identify how their respective APGs contribute to their long-term strategic goals. To illustrate, one of HUD’s APGs is to preserve and expand affordable rental housing to serve around 135,000 more households in fiscal years 2016 and 2017 than the baseline 5.5 million households already served. Under GPRAMA, OMB is to develop a federal government performance plan that sets the level of performance needed to achieve the CAP goals. 
In doing so, OMB is to identify the organizations, programs, and activities—including tax expenditures—that contribute to those goals. In addition, OMB is to periodically review and report on progress toward these goals. As part of these reviews, OMB is to assess whether the programs and activities—including tax expenditures—are contributing to the goals as planned. We have previously reported on OMB’s implementation of these requirements. In its GPRAMA guidance, OMB has tasked agencies with identifying applicable tax expenditures when reporting progress on their priority goals and strategic objectives. While tax expenditures are included in the development of the President’s Budget, they are subject to fewer reviews, and less information is required to be provided on them. See figure 4 for a summary of key steps and how tax expenditures compare to spending in the executive budget formulation process. Tax expenditures and spending are treated differently in the first key step of the executive budget formulation process, where agencies prepare budget proposals to submit to OMB. OMB Circular No. A-11 requires that agencies provide written justifications for all discretionary and mandatory spending programs and activities of an agency, but does not require such written justification for existing tax expenditures. Further, under OMB Circular No. A-11, agencies are directed to submit less detailed information to justify new or modified tax expenditures than when they propose new spending programs to OMB. For example, OMB Circular No. A-11 directs agencies to provide, among other things, an analysis of financial and personnel resources required to enact a program, a comparison of total program benefits and costs, and supporting information from outside evaluation and analyses for spending programs. In contrast, for tax expenditure proposals, agencies are not directed to provide such detailed analyses in their written justifications.
Instead, they are directed to generally justify why a tax expenditure is needed and why it is preferable to a spending program. After agencies submit their budget proposals, OMB and the President review them and decide what proposals and programs will be submitted to Congress. Because agencies are not directed to include a written justification of existing tax expenditures in their budget proposals, such information is not included in this step. OMB staff told us that tax expenditures are one of several policy tools considered when discussing new legislative proposals for the President’s budget. Treasury officials reported that an extensive process is conducted with OMB annually to develop the President’s revenue proposals, including tax expenditures, with a particular focus on expiring provisions. We asked OMB staff about the extent to which tax expenditures are included in its budget-related discussions with Treasury officials, but staff declined to respond, stating that such information was predecisional. OMB then prepares the President’s budget, which presents aggregate spending and revenue levels and contains the President’s spending and revenue proposals. The President then approves the budget and submits it to Congress. Although tax expenditures contribute to budget functions, the tax expenditure estimates are not presented alongside summaries of other spending levels by functional category in the budget. Instead, tax expenditure revenue loss estimates are presented separately from other types of spending in the budget’s Analytical Perspectives. We have reported in the past that this presentation makes their relative contributions toward achieving national priorities less visible than spending programs. Tax expenditure proposals, which include both new tax expenditures and amendments to existing tax expenditures, are included in a separate chapter alongside other revenue proposals, though the budget does not identify which proposals are tax expenditures.
The budget includes a short description of the proposal, along with a revenue estimate of each particular proposal. Descriptions of discretionary and mandatory spending and the President’s requests for those funds are also included in the budget Appendix, along with the outlays from refundable tax credits. Agencies also submit agency budget justifications and revenue proposals, including tax expenditure extensions or proposals, to Congress. Treasury submits the “General Explanations of the Administration’s Revenue Proposals” report to Congress, and the information required for those proposals is different from the information agencies provide on spending programs in their congressional budget justifications. For all revenue proposals, including tax expenditures, the report generally provides a description of the current law related to each proposal, the reason the administration is proposing that law be changed, and a description of the proposal. It also includes a table of revenue estimates for each proposal, which is also included in the President’s Budget. Agency congressional budget justifications also provide information on their programs. While the form and content of budget justifications may vary by agency and appropriations subcommittee, these justifications may contain specific outputs of the spending program, agency performance measures that the spending supports, and the number of people taking advantage of a particular program. As part of the congressional budget process, tax expenditures are treated similarly to mandatory spending in that expiring tax expenditures being considered for extension and new proposals are subject to standard congressional controls for new legislation; but existing, non-expiring tax expenditures are not subject to annual congressional budget processes.
In addition, while discretionary and mandatory spending are included in the Concurrent Resolution on the Budget (budget resolution), existing tax expenditures are not explicitly included, though the budget resolution may propose changes to tax expenditures or mandatory spending at Congress’s discretion. See figure 5 for a summary of key steps and how tax expenditures compare to spending in the congressional budget process. As part of the annual budget process, House and Senate Budget committees develop a budget resolution, which sets budget authority in aggregate and by functional category, as well as target aggregate revenues. Unlike with mandatory and discretionary spending, tax expenditures’ fiscal effects are not explicitly included in the budget resolution. Rather, the aggregate revenue targets take into account that revenue will be forgone because of tax expenditures. Through reconciliation instructions, the resolution may direct committees to propose legislation to meet the spending and revenue targets set by the budget resolution. The resolution is not required to propose adjustments to tax expenditures or mandatory spending through these instructions to meet those targets, though it may do so. The proposal is voted on by both budget committees and the full House and Senate. As a plan for Congress, the resolution is not presented to the President for signature and does not have the force of law, but guides the work of the appropriations, authorizing, and tax-writing committees, whose legislation is evaluated against the resolution’s targets. After a budget resolution is passed, appropriations subcommittees in the House and Senate review the budget requests from their related executive agencies and develop appropriations acts that provide agencies the legal authority to incur obligations to fund discretionary spending programs.
Appropriations acts do not apply to tax expenditures or, generally, to mandatory spending, and are considered by Congress on an annual basis. Separate from the annual appropriations process, the House and Senate authorizing committees develop authorizing legislation that establishes and continues the operation of federal programs or agencies. Authorizing committees have jurisdiction over specific policy areas, and both discretionary and mandatory spending programs must be authorized. For mandatory spending programs, authorizing legislation can define program eligibility and set benefit or payment rules, which then are paid automatically without the need for an annual appropriation. Also separate from the annual appropriations process, tax-writing committee legislation includes tax expenditure extensions and proposals. Revenue legislation may establish a tax expenditure indefinitely or for specific periods. Tax expenditures are not required to be reapproved unless they are set to expire, though changes may be proposed in any year. Currently, there are budget controls both in statute and in congressional rules that apply to spending and tax expenditure legislation. Pay-as-you-go (PAYGO) rules generally require that any law that would increase mandatory spending or decrease revenues make an additional change in spending or revenues so that the new law does not increase the deficit. For example, the Statutory PAYGO Act of 2010 requires OMB to record the budget effects of all of the revenue and direct spending legislation on scorecards to project budgetary effects of the legislation over 5- and 10-year periods. Sometimes this control is exempted by law, as was done in the Protecting Americans from Tax Hikes Act of 2015, which made various tax expenditures permanent. Both the House and the Senate have PAYGO rules.
While the House’s PAYGO rule addresses only direct spending, the Senate PAYGO rule prohibits consideration of direct spending or revenue legislation that could increase the deficit in current and future years. OMB Circular No. A-11 directs agencies to identify tax expenditures, as appropriate, among the various federal programs and activities that contribute to their agency goals, specifically strategic objectives and agency priority goals (APG). Agencies are also required to review progress toward their goals. For strategic objectives, reviews are to occur annually in order to inform annual planning and budget formulation, among other things. For APGs, agency leaders are required to review APG progress in quarterly, data-driven performance reviews. In its guidance, OMB states that tax expenditures should be subject to the same level of review as spending programs, and often complement or substitute for agencies’ spending programs that contribute to their strategic objectives. OMB also directs agencies to work with Treasury’s Office of Tax Analysis (OTA) to develop data and methods to evaluate the effects of tax expenditures that affect, or are directed toward, the same goals as agency programs. Our review of agencies’ performance planning and reporting documents found that most agencies did not identify tax expenditures as contributors to agency goals, consistent with what we have reported previously. We found that 7 of the 24 CFO Act agencies identified tax expenditures as contributors to their missions (two instances) or to specific goals (five instances). The tax expenditures these seven agencies identified accounted for 11 of the 169 tax expenditures included in the President’s Budget for Fiscal Year 2017, representing an estimated $31.9 billion of $1.23 trillion in forgone revenues for fiscal year 2015. In addition, three of these seven agencies developed performance measures to gauge the contributions of tax expenditures towards progress on agency goals. 
See figure 6 for a summary of the tax expenditures identified as contributing to agency goals or missions. One commonality among agencies that linked tax expenditures to performance measures is that many of these agencies have a defined role in administering the related tax expenditures, and therefore collect data on the tax expenditures. See table 1 for a summary of the agencies that identified tax expenditures contributing to agency missions and goals in their performance documents as of January 2016. As we have previously reported, one key impediment to including tax expenditures in agency performance reviews is the continuing lack of clarity about the roles of different federal agencies in conducting reviews of tax expenditures. This lack of clarity can lead to inaction in identifying tax expenditures’ contributions to agency goals. Office of Tax Analysis (OTA). Treasury’s OTA provides economic and policy analyses leading to development of the President’s tax proposals. It also assesses major congressional tax proposals, which can include tax expenditures. OTA prepares Treasury’s revenue and revenue loss estimates for tax proposals and tax expenditures, respectively. According to OTA officials, OTA does not conduct systematic reviews of tax expenditures, but may conduct them when policymakers are considering changes to tax policy. Internal Revenue Service (IRS). As Treasury’s bureau responsible for determining, assessing, and collecting taxes, IRS administers and supervises the execution and application of internal revenue laws or related statutes, including for tax expenditures. However, IRS is not responsible for managing federal housing, energy, or any of the many other policies to which tax expenditures may contribute. At times, this can lead to insufficient oversight of tax expenditures. 
For example, in July 2015, we reported that IRS conducted minimal oversight of state housing finance agencies (HFA), on which IRS relies to administer and oversee the low-income housing tax credit. We found that IRS conducted minimal oversight in part because the tax credit is a peripheral program for IRS, in terms of its compliance responsibilities, mission, and priorities for resources and staffing. As part of that report, we suggested Congress consider designating HUD as a joint administrator of the program, including responsibilities for HFA oversight. Other agencies. If agencies do not have a defined role in administering a tax expenditure, they may choose not to identify the tax expenditure’s contributions to agency goals. At the three agencies we selected to interview—USDA, DOE, and HUD—officials reported that they sometimes do not include tax expenditures as contributors to goals in their performance processes or reporting, because the agencies do not have a defined role in administering the tax expenditures. For example, in HUD’s fiscal year 2016 Performance Plan and fiscal year 2014 Performance Report, the agency describes a strategic objective involving “green and healthy homes.” Though HUD does not identify related tax expenditures, agency officials told us they understood that many homeowners and residential property owners who undertake energy efficiency retrofits or install solar panels take advantage of, or benefit from, energy-related tax credits, particularly for solar investment. Agency officials explained that HUD publishes resources on its web portal on renewable energy to help owners walk through the steps of financing options, some of which include tax credits. Officials at all three agencies we spoke to said that their performance conversations focused on programs they administer, which was why tax expenditures were not always included. 
OMB staff told us they give agencies flexibility to determine the appropriate programs and tax expenditures that should be identified as contributors to agency goals. They also said that it is the agencies’ responsibility to incorporate tax expenditures, as appropriate, in their progress updates for agency goals, and that they do not provide any further clarity on agencies’ roles. Further, OMB staff told us that they do not track agencies’ identification of tax expenditures that contribute to agency goals. As a consequence, OMB does not have a process in place to determine the extent to which agencies are capturing the contributions of all federal commitments, including tax expenditures, toward agency goals. Given OMB’s government-wide purview and Treasury’s familiarity with administering the tax code, these agencies are well positioned to assist other agencies in identifying tax expenditures that contribute to their goals. To assist agencies, OMB’s 2013 and 2014 Circular No. A-11 guidance noted that OMB would work with Treasury and agencies to identify where tax expenditures align with their goals, and that this information was to be published on Performance.gov and included in relevant agency plans, beginning in February 2014. However, OMB subsequently removed the language about working with Treasury and agencies to align tax expenditures with agency goals in the June 2015 update to its guidance. OMB staff told us they removed the language because OMB had no immediate plans to focus on this effort and lacked the capacity to consider it, despite seeing the benefit of such an effort. Without additional OMB and Treasury assistance, agencies may continue to have difficulty identifying whether, or which, tax expenditures contribute to their goals.
We have reported previously that challenges with performance measurement limit agencies’ ability to identify the contributions of tax expenditures to agency goals, and our work found that those challenges for agencies continue to exist. In September 2005, we found that a lack of performance information was a challenge in assessing the performance of tax expenditures. We recommended that OMB, in consultation with Treasury, identify ways to address the lack of credible tax expenditure performance information. The President’s fiscal year 2012 budget stated that the administration planned to focus on addressing some of these data-availability challenges and analytical constraints so it can work toward crosscutting analyses that examine tax expenditures alongside related spending programs; however, as of February 2016, OMB had not provided an update on these efforts. In June 2013, we found that agencies face difficulties in measuring performance across various program types, including tax expenditures. We recommended that OMB work with the Performance Improvement Council (PIC) to develop a detailed approach to examine difficulties agencies face in measuring the performance of these various types of federal programs and activities, including identifying and sharing any promising practices from agencies that have overcome difficulties in measuring the performance of these program types. While OMB and PIC officials reported that they have taken some steps to address this recommendation in a few areas, as of June 2016, they have not yet developed a comprehensive and detailed approach to address these issues as envisioned in our report, and these efforts have not included tax expenditures. We continue to believe that implementing these recommendations would help agencies evaluate how tax expenditures contribute to agency goals.
One factor that continues to impede agencies’ abilities to assess the contributions of tax expenditures to their goals is the limited availability of tax expenditure data. Officials from Treasury and the three agencies—USDA, DOE, and HUD—that we selected to interview told us that limited tax data can be a barrier to conducting performance reviews or analyzing tax expenditures. We reported in April 2013 that tax forms did not capture who claimed a tax expenditure and how much they claimed for 63 percent of tax expenditures in 2011. Likewise, we have reported that tax expenditure data that are collected on tax forms are not always sufficiently detailed to assess or describe tax expenditure results. For example, in April 2015, we found that IRS data did not include key project-level information on the Investment Tax Credit and Production Tax Credit, which is necessary to describe how many projects these credits supported and to evaluate the credits’ effectiveness. We reported in April 2013 that these data challenges can be remedied to some extent by data from other agencies or other sources, such as public records, state agency records, and surveys. However, these solutions do not completely overcome the data challenges associated with evaluating tax expenditures. For example, we reported that HUD community-level data can be used to partially remedy limited IRS tax data on Empowerment Zone (EZ) employment tax credits, which did not identify which specific communities received EZ tax credits. However, HUD only tracks a portion of EZ employment tax credits, and thus its data do not completely mitigate the limitations of tax form data in identifying which communities benefit from EZ tax credits. For some tax expenditures, evaluating performance may require collecting additional tax form data.
For example, in April 2015, we reported that because basic information on the Investment Tax Credit and Production Tax Credit is unavailable, it will be difficult for Congress to evaluate the effectiveness of these tax credits or compare them with spending or loan programs as it considers reauthorizing or extending them. We suggested that Congress consider directing IRS to collect additional tax form data to help evaluate the effectiveness of the credits. Further incorporating tax expenditures into federal budgeting processes could help achieve various broad benefits, based on our assessment of the roundtable discussion we held with budget and tax experts and on our prior work (see figure 7). Below we discuss options that may help achieve these broad benefits. We identified the options based on insights from experts during our roundtable discussion and interviews—as well as related literature—and our prior work. For each option, we indicate the broad benefit—additional transparency, opportunity for review, or greater control—that could be achieved, and issues that policymakers would need to consider when evaluating the merits of each option (which we present below as design considerations). We do not assess the relative feasibility of these options, nor do we recommend implementing any of these options in this report. Finally, these options are not exhaustive, nor are they mutually exclusive, as some options could be implemented together and may even complement each other. While these options offer a range of potential benefits, there are challenges and tradeoffs for policymakers to consider in deciding whether or how to implement any policy to further incorporate tax expenditures into federal budgeting processes. For example, agency officials and experts said that implementing these options—in particular those requiring additional information on, or reviews of, tax expenditures—would require resources.
Moreover, as previously mentioned, and as experts told us, there is not always a consensus on which tax provisions constitute tax expenditures. Finally, it is difficult to measure the size of some tax expenditures that are only loosely linked to actual tax filing data. While the JCT and Treasury use modeling to estimate tax expenditure revenue losses, in some cases little data are available and assumptions are made. These broad challenges would need to be considered when assessing how to approach the different options for further incorporating tax expenditures into federal budgeting processes. Tax expenditures could be presented, for informational purposes, in the congressional budget resolution alongside other spending by budget function. A similar presentation could be included in the President’s Budget. Currently, tax expenditure estimates are presented in congressional budget documents and the President’s Budget separate from other types of spending. Moreover, they are not included within, or shown alongside, summaries of spending levels by functional category; rather, they are an undifferentiated component of total federal revenues, which obscures their size. OMB previously presented tax expenditure revenue loss totals alongside outlays and credit activity for each budget function in the federal budget from fiscal year 1998 through fiscal year 2002, in response to one of our prior recommendations. However, it discontinued the practice in fiscal year 2003. In 2005, when we recommended that OMB resume presenting tax expenditures in the budget together with related spending programs, OMB told us that the current presentation of tax expenditure information—that is, presenting tax expenditures separately from other spending in the budget—was sufficient for providing the public and policymakers with what is useful to know about these provisions of the tax code. 
In our roundtable, experts noted that tax expenditure estimates by functional category were already included in budget documents, but also acknowledged that having the information in different parts of budget documents does not facilitate a side-by-side comparison. As we have previously reported, presenting tax expenditure estimates alongside discretionary and mandatory spending levels could increase transparency and better communicate to the public the levels of spending being allocated to national priorities. What methodology would Treasury and JCT use to estimate aggregated tax expenditures by budget function? Tax expenditure estimates represent the revenue losses associated with particular tax provisions, assuming the rest of the tax code and taxpayer behavior remain unchanged. If all tax expenditures within a budget function were removed at once, there would be interactions among the tax expenditures that would change the estimates. As a result, aggregating current tax expenditure estimates for a budget function would likely result in an imprecise estimate. However, Treasury and JCT officials told us that it was methodologically feasible to prepare aggregated revenue loss estimates representing those tax expenditures related to a specific budget function that would account for these interactions. This option would institute a systematic approach to evaluating tax expenditures on an ongoing basis. Evaluations are studies that use research methods to address specific questions about program performance. In particular, evaluations can be designed to isolate the causal impacts of programs from other external economic or environmental conditions to assess a program’s effectiveness. Treasury does not regularly conduct evaluations of tax expenditures; rather, it conducts some evaluations on an ad hoc basis. Other researchers and organizations may also conduct evaluations of tax expenditures on an ad hoc basis. 
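The interaction problem described above can be illustrated with a toy example. The bracket thresholds, rates, income, and deduction amounts below are all hypothetical, chosen only to show why stand-alone revenue loss estimates do not sum to a joint estimate under a progressive rate schedule:

```python
def tax(taxable_income):
    """Hypothetical two-bracket schedule: 10% up to $50,000, 25% above.
    Integer dollars keep the arithmetic exact for this illustration."""
    lower = min(taxable_income, 50_000) * 10 // 100
    upper = max(taxable_income - 50_000, 0) * 25 // 100
    return lower + upper

income, deduction_a, deduction_b = 60_000, 10_000, 5_000

# Current law: both hypothetical deductions are claimed, so tax is on $45,000.
baseline = tax(income - deduction_a - deduction_b)

# Stand-alone estimates: repeal one deduction while holding the other fixed.
loss_a = tax(income - deduction_b) - baseline   # revenue loss attributed to A: $1,750
loss_b = tax(income - deduction_a) - baseline   # revenue loss attributed to B: $500

# Repealing both together pushes more income into the 25% bracket at once.
loss_both = tax(income) - baseline              # joint estimate: $3,000

print(loss_a + loss_b)   # 2250 (sum of stand-alone estimates)
print(loss_both)         # 3000 (joint estimate)
```

Repealing both deductions together moves an extra $5,000 of income from the 10 percent bracket into the 25 percent bracket, so the joint estimate ($3,000) exceeds the sum of the stand-alone estimates ($2,250) by $750. That $750 is the interaction effect that an aggregated revenue loss estimate for a budget function would need to account for.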
OMB has previously encouraged agencies to strengthen their program evaluations and expand their use of evidence and evaluation in budget, management and policy decisions to improve government effectiveness. In 2005, we reported that the results of tax expenditure evaluations could help identify how well tax expenditures are working, both to identify ways to better manage specific tax expenditures and to decide how best to ensure prudent stewardship of taxpayers’ resources. As previously discussed, we recommended that OMB develop and implement a framework for conducting performance reviews of tax expenditures, but OMB has not reported on progress made on this recommendation since the President’s fiscal year 2012 budget. Moreover, since fiscal year 2011, the President’s budget has recognized that a comprehensive evaluation framework that examines incentives, direct results, and spillover effects would benefit the budgetary process by informing decisions on tax expenditure policy. Finally, this option may improve the information available to policymakers on the effectiveness of specific tax expenditures, which policymakers could then use in deciding how to best allocate resources to achieve national priorities. Tax expenditure evaluations could also complement ongoing GPRAMA-related performance measurement and reporting by measuring results that are too difficult or expensive to assess annually, explaining the reasons why performance goals were not met, or assessing whether one approach is more effective than another. Which tax expenditures would be evaluated and how frequently? We have previously identified options for selecting tax expenditures for evaluation, including: (1) selecting on a judgmental basis, (2) selecting based on established criteria, and (3) evaluating an existing temporary tax expenditure before it is extended. 
Some state and foreign governments conduct tax expenditure evaluations grouped by policy focus or industry, or focus evaluations on tax expenditures with the largest revenue losses. For example, Germany focuses on its largest tax expenditures, with one budget-related document reporting that 27 of 102 tax expenditures—accounting for 75 percent of the value of all tax expenditures—were evaluated between 2011 and 2014. Meanwhile, the Netherlands established a 5-year schedule to review all tax expenditures and makes the goals and standards of the reviews publicly available. Who would be responsible for conducting the evaluations? Executive branch agencies could conduct these evaluations within the GPRAMA framework. We have previously recommended that OMB develop and implement a framework for conducting performance reviews of tax expenditures, though OMB has not reported on progress made on this recommendation since the President’s fiscal year 2012 budget, as previously mentioned. Experts also suggested that Treasury would be well equipped to analyze available taxpayer data related to specific tax expenditures. However, they also noted the potential for political influence in having any executive branch agency conduct the evaluations, and that hiring nongovernmental organizations to conduct these evaluations could be an alternative to executive branch reviews. One expert suggested that a tax expenditure commission with appointees from a balance of political parties could identify which of these organizations would conduct each review. How might policymakers address data challenges? Agencies may not have access to confidential taxpayer data that they might need to conduct evaluations, and existing IRS data may not be sufficient for evaluating the efficiency, equity, and other effects of specific tax expenditures. To address the lack of access to data, Congress has allowed some exceptions to laws limiting the disclosure of taxpayer information to third parties.
To address the lack of tax expenditure data, any party conducting an evaluation could, to some extent, use data from other agencies or other sources, such as public records, state agency records, and surveys, as previously discussed. IRS could also be directed to collect more data that could be used to evaluate a given tax expenditure, although such an effort could increase taxpayer burden. Policymakers could also weigh the availability of data on specific tax expenditures when deciding which tax expenditures should or should not be evaluated. OMB could conduct reviews of portfolios of programs and other policy instruments—including tax expenditures—used to help pursue similar objectives. Currently, OMB generally reviews spending by agency rather than by policy area. We have previously reported that while the evaluation of spending programs in isolation may be revealing, it is often critical to understand how federal spending programs fit within a broader portfolio of tools and strategies—such as regulations, direct loans, and tax expenditures—to advance federal missions and achieve federal performance goals. Likewise, such an analysis could help Congress and executive agencies identify whether a program complements and supports other related programs, whether it is duplicative or redundant, or whether it actually works at cross-purposes to other initiatives. For example, we previously reported that 20 federal government entities administered 160 programs, tax expenditures, and other tools that supported homeownership and rental housing in fiscal year 2010. We have also previously reported that coordinated reviews of tax expenditures with related spending programs could help policymakers reduce overlap and inconsistencies and direct scarce resources to the most effective or least costly methods to deliver federal support.
Experts identified federal aid for higher education as another example of an area for which portfolio review could be valuable, as multiple federal grants, loans, and tax expenditures serve this policy area.

How would policy areas be selected for review?

One expert noted that the selection of policy areas for review could be accomplished in Congress, either by the leadership in consultation with the President or by a budget committee as part of the congressional budget process. Alternatively, GPRAMA could serve as the framework for selecting areas for review, specifically through CAP goals—the long-term, outcome-oriented crosscutting priority goals for the federal government. In September 2015, OMB staff told us that OMB had determined that no tax expenditures were critical to supporting achievement of current CAP goals. However, as new CAP goals are established, tax expenditures may support their achievement. Further, we have previously reported that OMB and agencies are required to consult with Congress when establishing or adjusting CAP goals, which could provide Congress with opportunities to encourage further incorporation of tax expenditures into the goals, if desired.

How would the federal resources that contribute to a policy area be identified?

As we have previously reported, creating a comprehensive list of federal programs along with related funding information is critical for identifying potential duplication, overlap, or fragmentation among federal programs or activities. GPRAMA requires OMB to publish a list of all federal programs on a central government-wide website. In October 2014, we reported that although OMB and agencies have taken some initial steps to develop program inventories with related budget and performance information, the result has not been a useful tool for decision making.
We made various recommendations to OMB to better present a coherent picture of all federal programs and to ensure that the information agencies provide in their inventories is useful to federal decision makers. For example, one of our recommendations was that OMB, in coordination with Treasury, develop a tax expenditure inventory that identifies each tax expenditure and describes its definition, purpose, and related performance and budget information; as of November 2015, no action had been taken on that recommendation. Such information could help agencies identify the contributions of tax expenditures to achieving national priorities.

Policymakers could help ensure greater coordination between the tax-writing and authorizing committees by having authorizing committees consider new or modified tax expenditure legislation for which they have program-related expertise. When proposed legislation is introduced in the House or Senate, it is generally referred to the committee with subject-matter expertise—thus, tax-writing committees are referred proposed tax legislation that can span national priorities across the government. Authorizing committees, on the other hand, are referred proposed legislation for policy areas within their jurisdiction, but generally do not provide direct and formal input on related tax provisions in those policy areas. By being involved in developing or modifying tax expenditure legislation, authorizing committees would have a broader picture of the federal resources being allocated to national priorities. This may further help policymakers identify potential duplication between spending programs and tax expenditures, and may allow for greater comparison of how well specific policy tools help achieve a national priority.
While bringing other committees into the process of reviewing tax expenditures could be institutionally cumbersome, having both tax-writing and authorizing committees review tax legislation may still be appropriate when a provision has both programmatic purposes, like stimulating an activity, and tax policy purposes, like taking into account taxpayers’ ability to pay taxes.

What path through committees would new tax legislation take?

Proponents of this option suggested various ways that both tax-writing and authorizing committees could provide input when considering proposed tax legislation. For example, some proponents suggested that tax-writing committees could continue to draft new tax legislation, but that authorizing committees would then need to approve the legislation before it goes to the full House or Senate for a vote—a process similar to sequential referral. Under this option, policymakers would also need to consider how to account for tax expenditures serving varied purposes that fall within the jurisdiction of multiple committees.

Congress could authorize the Senate and House Budget Committees to include tax expenditures with other spending, both in total and by functional category, in budget resolution levels. Currently, the budget committees—via the budget resolution—set forth revenue and spending targets, including spending targets for each major functional category of the budget. Tax-writing committees then decide how to achieve revenue targets through, if needed, changes to overall tax rates or to specific tax expenditures. Meanwhile, appropriations and authorizing committees decide how to achieve spending targets for discretionary and mandatory spending, respectively.
Depending on how it was designed, this option would either result in the budget committees setting targets for tax expenditure levels, or give authorizing (and potentially appropriating) committees greater or full control over determining the appropriate levels of spending for tax expenditures and other programs within a specific budget function. Broadly speaking, some experts said that options that further integrate tax expenditures into congressional budgetary controls, such as the budget resolution, could result in the greatest change in how Congress approaches decisions on federal investments. Depending on how policymakers designed this option, it could potentially allow for the movement of federal resources across policy instruments. Moreover, some experts noted it was important for policymakers to have increased flexibility to budget across policy instruments—moving resources toward the most efficient and effective means of making progress toward a national priority.

How would tax expenditure estimates be allocated to committee(s)?

From our expert interviews and roundtable discussion, and within the literature we reviewed, experts identified various ways to further integrate tax expenditures into the budget resolution, each of which would result in a different shift in committee powers. For example, tax expenditures could be (1) allocated to the tax-writing committees; (2) allocated to, or shared between, both the tax-writing and relevant authorizing (and potentially appropriating) committees; or (3) allocated to the relevant authorizing (and potentially appropriating) committee to determine how to achieve those spending levels across multiple policy instruments. The first design alternative would continue to leverage the expertise of tax-writing committees in deciding tax policy.
While the third design alternative would likely be complex to design and implement, it would potentially allow for the movement of federal resources across policy instruments. The second design alternative may help achieve both of those benefits, but would also be complex to design and implement.

How might this change affect committee powers?

This option could result in a substantial change in congressional committee jurisdictional powers, depending on how it was designed. The budget committees, and potentially authorizing and appropriations committees, would have greater input into tax policy decisions than they do now, while tax-writing committees would have less. One expert noted that dispersing tax expenditure decisions across multiple committees could impede any opportunity for fundamental tax reform, since the power to make such changes would be distributed among more congressional actors. Whether or not this change in committee powers is an optimal outcome is a policy decision.

Could policymakers overcome tax expenditure measurement challenges?

Policymakers would need to overcome the challenges of treating tax expenditures on the same footing as spending programs—particularly with respect to budget controls that are enforced. Specifically, some tax expenditure estimates are only loosely linked to actual tax filing data; thus, any prior-year levels of spending for these tax expenditures are only estimates. In contrast, spending programs are measured in obligations and outlays, which generally reflect the specific amounts of cash committed and disbursed by the government, respectively. However, this measurement challenge is not unique to tax expenditures. For example, experts noted that credit programs are also difficult to measure, but policymakers have made progress in measuring the costs of those programs through increased scrutiny and requirements for agencies.
Policymakers could require that all, or some subset of, tax expenditures expire after a finite period. This option would result in Congress periodically considering whether to allow tax expenditures to expire or to extend them, similar to the subset of tax expenditures that currently expire unless extended. Most other tax expenditures are enacted permanently and continue without change unless amended. Under Pay-As-You-Go (PAYGO) rules, this option would require the tax-writing committees to find revenues to offset the cost of extending expiring tax expenditures. However, Congress could incorporate the cost of extending expiring tax expenditures in its baseline revenue estimates, thus negating the need to find offsetting revenues, or it could waive the PAYGO rules on a case-by-case basis. This option could result in greater oversight of tax expenditures, as policymakers would be required to explicitly decide whether or not to extend more or all tax expenditures.

Which tax expenditures should expire, when, and in what way?

Congress could develop a general rule establishing which tax expenditures should expire and how often. It could also consider the added legislative workload imposed on the tax-writing committees by such a policy when deciding when tax expenditures might expire, and which ones would expire. We have previously reported that frequent changes in the tax code, such as from extended or expired tax provisions, can create uncertainty and contribute to compliance burden by making tax planning more difficult. Policymakers could consider whether policies should be developed to mitigate this effect. For example, grandfathering could be provided for assets purchased under tax-preferred regimes. Policymakers could also examine how other countries that require tax expenditures to periodically expire have designed these policies to minimize this burden.

Could or should the option be designed to be revenue neutral?
All else equal, if a tax expenditure expires, the result would be additional revenue that the federal government would collect. Policymakers could consider whether to design this option in a way that would automatically lower tax revenues to achieve revenue neutrality.

In recent years, revenue losses from tax expenditures have reached levels similar to discretionary spending; approximately $1.23 trillion in revenue was forgone for fiscal year 2015. Even as discretionary spending has decreased, in real terms, since 2010, the amount of revenue forgone via tax expenditures has increased. Yet tax expenditures do not receive the same level of scrutiny within federal budget processes as discretionary spending. Moreover, the executive branch has made little progress toward increasing transparency of the budgetary effects of tax expenditures—compared to related spending—or toward implementing a framework to gauge their performance, as we have previously recommended. We continue to believe in the merit of those recommendations.

GPRAMA provides an existing framework for the executive branch to exercise greater oversight over tax expenditures and could help facilitate further incorporating tax expenditures into performance and budget discussions. In implementing GPRAMA, OMB has directed agencies to take tax expenditures into account when identifying programs and activities that contribute to their goals. However, despite this guidance, agencies have identified few tax expenditures that contribute to such goals. We have found that a lack of clarity about the roles of different agencies in conducting reviews of tax expenditures impedes their ability to identify tax expenditures that contribute to agency goals. OMB—given its government-wide purview—and Treasury—given its familiarity with administering the tax code—are well positioned to assist agencies in identifying tax expenditures that relate to their goals.
Without additional OMB and Treasury assistance, agencies may continue to have difficulty identifying whether or which tax expenditures are relevant to their goals, and may be limited in their understanding of how the range of federal investments and policy tools contributes to agency goals. More broadly, based on our review of the literature and our past work, and on our interviews and roundtable discussion with experts, we identified various other options to further integrate tax expenditures into both the executive and congressional budgeting processes. If implemented, these options could increase transparency about how the federal government allocates resources, provide additional means for policymakers to review tax expenditures’ effectiveness, and create additional controls over spending through the tax code. However, the options come with a range of challenges and tradeoffs that policymakers would need to consider.

To help ensure that the contributions of tax expenditures toward the achievement of agency goals are identified and measured, the Director of OMB, in collaboration with the Secretary of the Treasury, should work with agencies to identify which tax expenditures contribute to their agency goals, as appropriate—that is, they should identify which specific tax expenditures contribute to specific strategic objectives and agency priority goals.

We provided a draft of this report for review and comment to the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, State, Transportation, the Treasury, and Veterans Affairs; the Attorney General of the United States; the Directors of the Office of Management and Budget and the National Science Foundation; the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration and the U.S.
Agency for International Development; the Acting Director of the Office of Personnel Management; the Acting Commissioner of the Social Security Administration; and the Chairman of the Nuclear Regulatory Commission.

OMB staff spoke with us about their comments on our report; they generally agreed with the recommendation and noted that implementing it could be beneficial. However, they also said it is not an effort they are currently pursuing due to competing priorities, as well as capacity and resource constraints. They also provided technical comments, which we incorporated as appropriate. The following agencies provided technical comments that were incorporated into the draft as appropriate: the Departments of Energy, Health and Human Services, Housing and Urban Development, the Interior, and Labor. The following agencies had no comments on the draft report: the Departments of Agriculture, Commerce, Defense, Education, Homeland Security, Justice, State, Transportation, the Treasury, and Veterans Affairs; the Environmental Protection Agency; the General Services Administration; the National Aeronautics and Space Administration; the National Science Foundation; the Nuclear Regulatory Commission; the Office of Personnel Management; the Small Business Administration; the Social Security Administration; and the U.S. Agency for International Development.

We are sending copies of this report to the Director of OMB, the Secretary of the Treasury, and the heads of the other agencies we reviewed, as well as appropriate congressional committees. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in the appendix.
Heather Krause, (202-512-6806) or krauseh@gao.gov. In addition to the contact named above, Jeff Arkin (Assistant Director), Amy Radovich (Analyst-in-Charge), Amy Bowser, Hannah Dodd, Robert Gebhart, Carol Henn, Susan Irving, Benjamin Licht, Donna Miller, Ed Nannenhorn, Cynthia Saunders, MaryLynn Sergent, Timothy Shaw, and Emily Upstill made significant contributions to this report.
Tax expenditures—special credits, deductions, and other tax provisions that reduce taxpayers’ tax liabilities—represent a substantial federal commitment. If Treasury’s estimates are summed, an estimated $1.23 trillion in federal revenue was forgone from the 169 tax expenditures reported for fiscal year 2015, an amount comparable to discretionary spending. Tax expenditures are often aimed at policy goals similar to those of federal spending programs.

GAO was asked to identify the extent to which tax expenditures are incorporated into federal budget processes. This report (1) compares the treatment of tax expenditures and other spending in federal budgeting processes; (2) evaluates the extent to which OMB and agencies have identified tax expenditures’ contributions to agency goals; and (3) examines options to further incorporate tax expenditures into federal budgeting processes. To address these objectives, GAO reviewed agency budget documents, OMB guidance, prior GAO reports, and performance plans and reports available as of January 2016 for all 24 CFO Act agencies. GAO also held a roundtable discussion with budget and tax experts to examine options for further incorporating tax expenditures into budgeting processes.

Federal budget formulation processes include fewer controls and reviews, and provide less information, for tax expenditures—which represented an estimated $1.23 trillion in forgone revenues in fiscal year 2015—than for discretionary or mandatory spending. For example, in the President’s budget, tax expenditure revenue loss estimates are presented separately from related spending, making their relative contributions toward national priorities less visible than those of spending programs. Likewise, only proposed tax expenditures or those that expire are subject to review within congressional budget processes, similar to mandatory spending. Existing, non-expiring tax expenditures are not subject to such review.
The Office of Management and Budget (OMB) and agencies have made limited progress in identifying tax expenditures’ contributions to agency goals. As of January 2016, 7 of the 24 Chief Financial Officer (CFO) Act agencies identified tax expenditures as contributors to their agency goals—as directed in OMB guidance—or agency missions. The tax expenditures they identified accounted for only 11 of the 169 tax expenditures included in the President’s Budget for Fiscal Year 2017, representing an estimated $31.9 billion of $1.23 trillion in forgone revenues for fiscal year 2015. Based on interviews with agencies and a review of past GAO work, GAO found that a lack of clarity about agencies’ roles leads to inaction in identifying tax expenditures that contribute to agency goals. To address this, OMB guidance previously stated that it would work with the Department of the Treasury (Treasury) and other agencies to identify where tax expenditures align with agency goals. OMB removed that language in June 2015, citing capacity constraints. Without additional OMB and Treasury assistance, agencies may continue to have difficulty identifying whether, or which of, the remaining 158 tax expenditures—representing $1.20 trillion in forgone revenues—contribute to their goals.

Based on an assessment of budget and tax experts’ input and prior GAO work, GAO found that options to further incorporate tax expenditures into budgeting processes could help achieve various benefits, but policymakers would need to consider challenges and tradeoffs in deciding whether or how to implement them. For example, one option is to require that all, or some subset of, tax expenditures expire after a finite period. This option could result in greater oversight, requiring policymakers to explicitly decide whether to extend more or all tax expenditures.
However, this option could lead to frequent changes in the tax code, such as from extended or expired tax expenditures, which can create uncertainty and make tax planning more difficult, as GAO has reported previously. GAO recommends that OMB, in collaboration with Treasury, work with agencies to identify which tax expenditures contribute to agency goals. OMB generally agreed with GAO’s recommendation.
IRS uses multiple channels to provide customer service to taxpayers and process tax returns:

Telephone service: Taxpayers can speak with IRS assistors to obtain information about their accounts throughout the year or to ask basic tax law questions during the filing season. Taxpayers can also listen to recorded tax information or use automated services to obtain information on the status of refund processing as well as account information such as balances due. Since fiscal year 2011, IRS has received an average of about 116 million calls from taxpayers each year. In 2015, we reported that IRS’s telephone service had continued to deteriorate from prior years, and we suggested that Congress require the Secretary of the Treasury to develop a comprehensive customer service strategy.

Correspondence: Taxpayers may also use paper correspondence to communicate with IRS, including responding to IRS requests for information or data, providing additional information, or disputing a notice. IRS assistors respond to taxpayer inquiries on a variety of tax law and procedural questions, and handle complex account adjustments such as amended returns and duplicate filings. IRS tries to respond to paper correspondence within 45 days of receipt; otherwise, such correspondence is considered “overage.” Last year, we reported that about half of the 19 million pieces of correspondence IRS received were overage. Minimizing overage correspondence is important because delayed responses may prompt taxpayers to write again, call, or visit walk-in sites. Moreover, IRS must pay interest on refunds owed to taxpayers if it does not process amended returns within 45 days.

Online services: IRS’s website is a low-cost method for providing taxpayers with basic interactive tools to, for example, check refund status, make payments, and apply for plans to pay taxes due in scheduled payments (installment agreements).
Taxpayers can use the website to print forms, publications, and instructions, and can use IRS’s interactive tools to get answers to tax law questions without calling or writing to IRS. Total visits to IRS’s website in fiscal year 2016 were about 500 million.

Face-to-face assistance: Face-to-face assistance remains an important part of IRS’s service efforts, particularly for low-income taxpayers. Taxpayers can receive face-to-face assistance at IRS’s walk-in sites or at thousands of sites staffed by volunteer partners during the filing season. At walk-in sites, IRS staff provide services including answering basic tax law questions, reviewing and adjusting taxpayer accounts, taking payments, authenticating Individual Taxpayer Identification Number applicants, and assisting IDT victims. At sites staffed by volunteers, taxpayers can receive free return preparation assistance as well as financial literacy information. Nearly 4.5 million taxpayers visited an IRS walk-in site in fiscal year 2016.

Tax return processing: Every year since 2011, IRS has processed more than 140 million paper and electronically filed (e-filed) returns and approximately $300 billion in refunds. When IRS processes returns, it checks for errors and corrects those that it can. If needed, IRS corresponds by mail with the taxpayer to request additional information, such as a missing form or other documentation. IRS expends significant resources correcting errors, and the process can affect how long it takes IRS to issue refunds.

IRS’s fiscal year 2016 appropriation was $11.24 billion. This is about $900 million (7 percent) less than its fiscal year 2011 appropriation of $12.12 billion. The change in appropriation varied significantly by appropriation account. Specifically, IRS’s Taxpayer Services account—used to fund taxpayer service activities and programs—increased about 2 percent, from $2.29 billion to $2.33 billion, between fiscal years 2011 and 2016.
In contrast, the Enforcement account decreased about 11 percent (about $620 million), from $5.49 billion to $4.87 billion, between fiscal years 2011 and 2016. IRS’s fiscal year 2016 appropriation included a $290 million increase over fiscal year 2015, which IRS was directed to allocate to improving taxpayer services ($178.4 million), cybersecurity ($95.4 million), and IDT prevention ($16.1 million). In addition to annually appropriated resources, IRS has permanent, indefinite authority to obligate user fee collections, which allows the agency flexibility in the use of these funds. The amount of user fee funds that IRS has obligated from the Taxpayer Services account has varied considerably over the last 3 years: $183 million in fiscal year 2014, $45 million in fiscal year 2015, and $70 million in fiscal year 2016 (down from a planned $103 million).

Viewed broadly, IDT refund fraud is composed of two crimes: (1) the theft or compromise of PII, and (2) the use of stolen (or otherwise compromised) PII to file a fraudulent tax return and collect a fraudulent refund. Figure 1 presents an example of how fraudsters may use stolen PII and other information, real or fictitious (e.g., sources and amounts of income), to complete and file a fraudulent tax return and successfully receive a refund. In this example, a taxpayer may alert IRS to IDT refund fraud. Alternatively, IRS can detect IDT refund fraud through its automated filters that search for specific characteristics, as well as through other reviews of taxpayer returns. In October 2015, IRS formed an IDT reengineering team focused on improving the experience of taxpayers who are victims of IDT.

IRS improved its telephone level of service—defined as the percentage of people who want to speak with an assistor and are able to reach one—from 37 percent during the 2015 filing season to 72 percent during the 2016 filing season (7 percentage points higher than forecast).
This was the highest filing season level of service since 2011. As it has historically done, IRS reduced the level of service before and after the filing season, which IRS officials explained was done to increase IRS’s attention to customer service in other areas, such as responding to taxpayer correspondence. During the 2016 filing season, taxpayers waited an average of about 11 minutes to speak to an assistor, which was substantially better than IRS expected. By comparison, over the full fiscal year, callers waited an average of about 18 minutes, which was an improvement over last year and better than IRS had expected this year. Figure 2 shows that IRS provided a better level of service and a shorter average wait time to speak to an assistor during the 2016 filing season compared to the fiscal year as a whole.

Compared to last year, total call volume increased about 2 percent, to slightly more than 114 million calls. At the same time, IRS increased the number of full-time equivalents (FTE) answering phone calls by about 23 percent (including about 250 FTEs from its Identity Theft Victims Assistance unit), and assistors answered about 40 percent (or 7.3 million) more calls from taxpayers. Total calls in which taxpayers abandoned the call, were disconnected, or received a busy signal declined by about 10 percent (from 56.2 million in 2015 to 50.6 million in 2016). IRS officials attributed many of these improvements to additional appropriations and user fee funds, which in part allowed for more hiring and use of overtime compared to last year. With the additional $178.4 million in appropriated funds for taxpayer services, IRS hired approximately 1,000 more assistors. However, IRS officials noted the agency received its appropriated funds in December 2015, which caused delays in hiring and training assistors.
Also, IRS assistors who answer telephone calls and respond to correspondence from taxpayers collectively worked significantly more overtime than last year (about 600 FTEs of overtime in fiscal year 2016 compared to about 60 FTEs the prior year). As in prior years, IRS maintained high accuracy rates for assistors’ responses to taxpayer questions via telephone, which have remained well above 90 percent for answering both account and tax law questions. To improve telephone service, we have made several recommendations to IRS such as to set its level of service based on a comparison to private-sector organizations providing a comparable or analogous service—or the “best in the business”—to identify gaps between actual and desired performance. As of December 2016, IRS officials reported that they completed a study to benchmark IRS’s telephone service against the best in business in June 2016, and were reviewing the results. The measures that IRS uses to report its performance in answering telephone calls include level of service, wait time, and demand to speak to an assistor, among others. Several of these measures are broken down by type of call. IRS uses this information to track which types of calls, if any, require more resources to handle or could be readily automated. IRS officials said they believe that, taken together, the measures IRS uses provide an overall picture of the resources it dedicates to the different types of calls. According to IRS estimates, the average cost per call answered by IRS assistors increased from $32 to $56 between 2011 and 2015. IRS officials attributed this increase to answering about half as many calls, combined with only slightly lower costs, in 2015 compared to 2011. However, in 2016, IRS estimated this cost declined to $42 per call. Officials attributed this decline to having more assistors and answering more calls that were shorter in average length. 
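The unit-cost arithmetic behind these estimates can be sketched with a small calculation. The totals below are hypothetical, chosen only to reproduce the per-call figures reported above (roughly $32 in 2011 and $56 in 2015) under the officials' explanation that call volume fell by about half while total costs fell only slightly:

```python
def cost_per_call(total_cost, calls_answered):
    """Average cost per assistor-answered call, in dollars."""
    return total_cost / calls_answered

# Hypothetical 2011 baseline consistent with the reported ~$32 per call.
calls_2011 = 20_000_000
total_cost_2011 = 640_000_000      # 20M calls x $32

# 2015: about half as many calls answered, total cost only ~12% lower
# (both assumptions, sized to match the reported ~$56 per call).
calls_2015 = calls_2011 // 2
total_cost_2015 = 560_000_000

print(cost_per_call(total_cost_2011, calls_2011))   # 32.0
print(cost_per_call(total_cost_2015, calls_2015))   # 56.0
```

The point is simply that a per-call measure rises when the denominator (calls answered) falls faster than the numerator (total cost); the reported decline to $42 in 2016 reflects the reverse movement, with more calls answered at shorter average length.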
For automated calls, IRS estimated an average cost of about $0.79 per call in 2011, which decreased to $0.50 per call in 2016. While IRS is answering fewer calls through automation, we have previously reported that identifying more calls that IRS can answer through automation is important because it reduces demand for assistor-answered calls and saves IRS money. IRS does not break down the average cost per call by type of call received. According to IRS officials, costs are generally consistent across the different types of calls because assistors’ pay does not vary significantly by location and assistors can generally answer all types of calls after receiving the necessary training. Therefore, officials do not believe it would be useful to calculate and track the dollar cost per type of call in addition to the measures they currently use.

IRS has taken a multi-pronged approach to improving service. For example, IRS expanded its appointment service pilot to all of its walk-in sites, allowing taxpayers to call IRS to schedule an appointment. IRS officials reported that, about half the time, IRS was able to address the taxpayer’s question on the phone or direct the taxpayer to its website without needing to schedule an appointment. This also contributed to a 20 percent decrease in total visits to IRS walk-in sites. See appendix II for information on the use of IRS walk-in sites.

Additionally, IRS has seen growth in the use of certain online services, which include its website, mobile application tools, and select self-service tools (see appendix III for data showing the increased use in these areas since fiscal year 2011). This growth occurred despite two key applications being offline. In May 2015, IRS disabled its Get Transcript service after fraudsters used personal information obtained from sources outside IRS to pose as legitimate taxpayers and access their tax return information. More than a year later, in June 2016, IRS relaunched the service.
IRS stated the new version provides a more rigorous e-authentication process for taxpayers, which was intended to significantly increase protection against identity thieves. IRS also expects that this enhanced authentication process will provide a foundation for additional online services. In June 2016, IRS discontinued its e-file Personal Identification Number (PIN) tool, with which taxpayers could retrieve their e-file PINs online or via telephone. This action followed IRS's announcement in February 2016 that cybercriminals had stolen more than 100,000 e-file PINs through the tool.

An ongoing challenge for IRS is balancing the need for strong security with taxpayers' ability to access their personal taxpayer information through IRS's online services. External stakeholders, such as third-party software providers, and the National Taxpayer Advocate have expressed concerns that IRS's e-authentication procedures limit the number of taxpayers who can use these services. In a fiscal year 2017 report to Congress, the National Taxpayer Advocate raised a number of concerns about IRS potentially ignoring the needs of taxpayers who either have no access to the online services or choose not to use an online account system for various reasons. For example, the report noted that not all taxpayers have credit cards or access to the technology required to authenticate online, such as a smartphone or email account. IRS officials acknowledged these challenges and, in June 2016, added an option for taxpayers to authenticate their identities through the mail, which eliminates some of the requirements to gain online access to these services. Additionally, in December 2016, IRS launched an online tool that allows taxpayers to view their account balances. IRS said it plans to add capabilities to this tool in the future.

IRS also continues to struggle with processing correspondence in a timely manner.
IRS received more than 19.4 million pieces of correspondence in fiscal year 2016, a 3 percent increase over the prior year. While IRS has continued to reduce the time needed to close correspondence cases, declining from a peak of 67.4 days in fiscal year 2013 to 45.5 days in fiscal year 2016, its correspondence overage rate remains high at nearly 50 percent. Accordingly, during the first half of fiscal year 2016, customer satisfaction scores for correspondence were substantially lower than for toll-free telephone service (62 percent and 87 percent, respectively).

As of November 2016, the Department of the Treasury (Treasury) had not implemented our 2015 recommendation that it update the department's performance plan to include overage rates for handling correspondence as part of Treasury's performance goals. IRS officials told us that they met with Treasury in June and August 2016 and that, based on these discussions, Treasury and IRS agreed to include language regarding correspondence overage rates in Treasury's fiscal year 2018 Congressional Justification.

In addition to our recommendations on telephone and correspondence service, implementing our prior recommendations in other areas could help IRS improve service. In April 2013, we recommended that IRS develop a long-term online strategy that should, for example, include business cases for all new online services. Such a strategy would help ensure that IRS maximizes the benefit to taxpayers and reduces costs in other areas, such as its telephone operations. In addition, in December 2015, we suggested that Congress consider requiring Treasury to work with IRS to develop a comprehensive customer service strategy. Without such a strategy, Treasury and IRS can neither measure nor effectively communicate to Congress the types and levels of customer service taxpayers should expect and the resources needed to reach those levels. As of December 2016, Congress had not yet taken action on our suggestion.
However, in April 2016, IRS officials told us that the agency had established a team to consider our prior recommendations in developing a comprehensive customer service strategy or goals for telephone service. As noted above, IRS officials have completed the benchmarking study and are reviewing the results. IRS has a "Future State" vision for agency-wide operations, which aims to improve services across different taxpayer interactions such as individual account assistance, exams, and collections. IRS requested funding in the fiscal year 2017 budget justification to enhance web applications, including the online account component of its Future State initiative. However, it is unclear to what extent the Future State initiative will address our recommendations. We will continue to assess the initiative as it develops.

IRS provides key stakeholders, including Congress and federal oversight agencies, with historical performance data and forecasts of what it expects to deliver during the fiscal year, such as telephone level of service. However, this information is not necessarily designed for or accessible to taxpayers. One exception is that, on the telephone, IRS provides taxpayers with an expected wait time to speak with an assistor. In addition, for several years IRS has issued press releases each February cautioning that the President's Day weekend is one of the busiest times of the year to call IRS and providing alternative sources for taxpayers to get the information they need. However, this information is largely directed to the media to disseminate to the public, and key performance information, such as level of service and average wait time, is not easily available to taxpayers when they visit IRS's website. Similarly, IRS internally forecasts and tracks how long it expects processing different types of correspondence to take, but does not publicize this information.
Moreover, IRS does not have a central, readily available location—for example, on its website—to provide customer service information that tells taxpayers what type and level of service to expect when interacting with IRS.

Both Congress and the executive branch have taken steps to improve customer service. The GPRA Modernization Act (GPRAMA) requires agencies to, among other things, establish a balanced set of performance indicators to measure progress toward each performance goal, including, as appropriate, customer service. Similarly, several Executive Orders, Presidential Memorandums, and OMB guidance require agencies to take steps to strengthen customer service and describe a number of actions agencies can take to improve it. Specifically, these include informing customers what they have a right to expect when they request services and providing customer service standards that are understandable and easily available to the public.

Additionally, OMB established a cross-agency priority (CAP) goal to improve customer service—in part through technology—to keep pace with the public's expectations. This would involve efforts by the federal government to transform customer services by streamlining transactions, developing standards for high-impact services, and using technology to improve the customer experience. In 2016, a CAP team whose goal is to make it faster and easier for individuals and businesses to receive customer service noted that specific attention is needed to improve taxpayer assistance. The team noted that improved transparency would help citizens set expectations and hold government accountable for improvements. It added that failure to meet those expectations creates unnecessary hassle and cost for citizens and the government.

Other federal agencies have used dashboards to convey information to the public.
For example, we have issued a series of reports on the IT Dashboard, which OMB deployed in 2009 to display federal agencies' cost, schedule, and performance data. We noted that the public display of data allows oversight bodies, including Congress, and the general public to hold government agencies accountable for progress and results.

When we asked IRS officials about not having an online dashboard, they said they had not previously considered the idea, given that some customer service and performance information is publicly available in various locations on irs.gov. In addition, officials were concerned about spending resources to update a dashboard using real-time data, such as expected wait time and level of service for each of IRS's 52 toll-free telephone lines. They also noted that providing certain information could lead taxpayers to call IRS instead of remaining online, where it is less expensive for IRS to provide taxpayer service. However, when we pointed out that a dashboard does not need to be updated on a real-time basis to be useful, IRS officials indicated a better understanding of the value of such a dashboard and agreed it could be possible to develop one as we described.

Further, a dashboard updated to reflect historical performance for specific date ranges during the year could benefit taxpayers by informing them of what to expect, without requiring significant agency resources. In addition, providing taxpayers with easily accessible customer service information has the potential to drive taxpayers to IRS's website, which IRS officials have said is their preferred method of communication because of its inherently lower cost. For example, if taxpayers could learn in advance that calling to speak with an assistor would likely result in an excessive wait time, they may elect to spend more time on IRS's website looking for the information.
Without easily accessible customer service information, taxpayers are less likely to know what to expect when requesting services from IRS.

IRS officials and other stakeholders reported that IRS generally experienced few problems processing returns during the filing season. In addition, from January 2016 through September 2016, IRS processed about 147 million individual income tax returns and 109 million refunds (see appendix IV). However, there were two processing interruptions in 2016 that each lasted about 1 day. In early February, IRS experienced a major system failure that prevented it from processing returns and prevented taxpayers from accessing several online tools, including "Where's My Refund?" In mid-May, a number of critical systems used to process returns shut down shortly before a milestone date IRS had set for itself to complete return processing. IRS returned these systems to full operation in time to meet its targets as planned.

While IRS was still able to process returns and refunds smoothly, officials characterized some aspects of the filing season as challenging, noting that they struggled to hire at certain processing sites and for specific seasonal jobs. For example, IRS has three sites that primarily process paper tax returns for individuals, and officials said they had challenges hiring at the site in the Austin, Texas, metropolitan area because the region had a relatively low unemployment rate. In addition, IRS faced shortages filling certain data transcriber and clerical positions, which IRS officials reported overcoming by adjusting staff resources and using more overtime. For example, IRS officials said that total overtime increased about 60 percent (from 55 to 88 FTEs between fiscal years 2015 and 2016) for staff working at the three centers that process tax returns from individuals.
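The roughly 60 percent figure can be verified directly from the FTE counts in the report; a one-line check:

```python
# Percent change in processing-center overtime, from the FTE figures above
# (55 FTEs in fiscal year 2015, 88 FTEs in fiscal year 2016).
fte_2015, fte_2016 = 55, 88
pct_increase = (fte_2016 - fte_2015) / fte_2015 * 100
print(f"{pct_increase:.0f} percent")   # 60 percent
```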
Another challenge IRS faced this filing season involved processing returns from taxpayers who did not correctly report advance Premium Tax Credit (PTC) payments they received during 2015. The PTC is a refundable tax credit designed to help eligible individuals and families with low or moderate incomes afford health insurance purchased through the Health Insurance Marketplaces. When individuals enroll through a marketplace, they can elect to have the marketplace estimate the amount of the PTC, based on information they provide when enrolling, and have it paid in advance to their health insurance company to lower their monthly insurance premiums. Alternatively, they can elect to claim all of the credit when they file their tax return.

For individuals who elect to receive the credit in advance, the amount they receive may differ from the amount they are eligible for, which they calculate at the time they file their return. Taxpayers who enroll through a marketplace and receive advance payments of the PTC must file a tax return and reconcile the amount they received by completing Form 8962, Premium Tax Credit. In the 2 years that the PTC has been available, many taxpayers did not reconcile the amount they received when they filed their returns.

Beginning in 2015, IRS used third-party data from the marketplaces to conduct pre-refund matching and verify whether taxpayers had reconciled the advance PTC. To address any discrepancies in 2015, IRS first processed returns that did not reconcile the PTC and then notified those taxpayers that they needed to reconcile. These taxpayers had to file an amended return reconciling the correct PTC amount they received in advance before they could receive health insurance through the marketplaces in 2016. For the 2016 filing season, IRS changed its procedures so that when taxpayers did not reconcile the PTC, IRS corresponded with them explaining that they needed to reconcile before IRS would continue processing the return.
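At its core, the reconciliation taxpayers perform on Form 8962 compares the advance payments received with the credit supported by actual year-end income. The sketch below illustrates only that comparison; the dollar amounts are hypothetical, and the actual form also applies income-based repayment caps and month-by-month computations, which are omitted here:

```python
# Simplified PTC reconciliation: hypothetical amounts, not IRS figures.
def reconcile_ptc(advance_received, credit_allowed):
    """Return (additional_credit, repayment_owed) from reconciling the PTC."""
    diff = credit_allowed - advance_received
    if diff >= 0:
        return diff, 0      # taxpayer claims the remaining credit at filing
    return 0, -diff         # excess advance payments must be repaid

# Marketplace advanced $3,000, but actual income supports only a $2,400 credit:
print(reconcile_ptc(3000, 2400))   # (0, 600): $600 excess to repay
# Advance was $1,000, but the taxpayer qualified for $1,500:
print(reconcile_ptc(1000, 1500))   # (500, 0): $500 additional credit
```

When taxpayers skip this step entirely, IRS cannot tell which side of the comparison applies, which is why its pre-refund matching flags unreconciled returns.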
While IRS's new procedures delayed processing for these returns, IRS officials explained that the change helped both IRS and PTC recipients. For example, according to an IRS analysis, IRS's correspondence brought about half of these taxpayers into compliance. IRS officials also stated that IRS will save resources by not having to process as many amended returns that taxpayers submit to reconcile the PTC. They added that this also benefits taxpayers who respond to IRS's correspondence because they will not have to file an amended return before they can receive health insurance through the marketplaces in 2017. IRS officials anticipate using this procedure again for the 2017 filing season.

In cases where a taxpayer did not reconcile the advance PTC, IRS does not have the authority to automatically correct the tax return and notify the taxpayer of the change. In other circumstances, IRS has statutory math error authority to fix easily correctable calculation errors and check for other obvious noncompliance in limited circumstances. According to IRS officials, having authority to correct PTC errors would allow IRS to process the return more quickly without having to correspond with the taxpayer or expend further resources to audit taxpayers' compliance. However, as we reported in 2015, the marketplace data IRS uses for pre-refund matching of PTC data were incomplete and not fully accurate. In June 2016, IRS officials told us that, while the completeness and quality of the marketplace data have improved, they have not yet fully assessed whether the data are reliable enough to use in correcting returns.

We have previously suggested that Congress authorize math error authority on a broader basis with appropriate controls. For each year beginning with fiscal year 2015, legislative proposals were submitted that, among other things, would establish a category of correctable errors.
Under the proposals, Treasury would be granted regulatory authority to permit IRS to correct errors in cases where information provided by a taxpayer does not match corresponding information in government databases. Congress has not granted this broad authority. Correctable error authority could help IRS meet its goals for the timely processing of tax returns, provide taxpayers with refunds more quickly, and reduce the burden on taxpayers of responding to IRS correspondence. It could also reduce the need for IRS to resolve discrepancies in post-filing compliance, which, as we have previously concluded, is less effective and more costly than at-filing compliance. However, the third-party data IRS uses for matching must be sufficiently complete and accurate.

IRS opens identity theft (IDT) cases when (1) it identifies potential IDT through its automated filters and other reviews of taxpayer returns, or (2) taxpayers alert IRS to potential IDT, such as when they are unable to file a tax return electronically because a fraudster has already filed one using their identity. From 2012 to 2015, IRS opened a relatively steady number of new IDT cases. According to IRS officials, in fiscal year 2016, the number of new IDT cases declined because IRS improved its ability to detect fraud before processing the return. In fiscal year 2012, IRS experienced a backlog of more than 370,000 IDT cases with an overage rate of about 57 percent. Since then, IRS has generally processed cases more quickly and reduced the overage rate to 10 percent or less. In late fiscal year 2015, IRS formed an IDT Victims Assistance Unit, dedicating 322 FTEs to it for that portion of the year and 1,270 FTEs for fiscal year 2016. Timely resolution of IDT cases reduces the burden on taxpayers, who must deal with delayed refunds as they authenticate their identities with IRS.
It can also reduce the amount of refund interest IRS pays to some taxpayers, which is required when IRS does not issue a refund within 45 days of the filing deadline (or, for a return filed after the deadline, within 45 days of the date the return was filed). IDT has been among the top reasons for the largest of such payments for the past 4 years.

IRS continues to work toward improving its processing of IDT cases, in part by forming an IDT reengineering team to improve customer service for IDT victims. The team interviewed employees, executives, and other stakeholders to identify potential improvements. Since the team's formation, IRS has implemented recommendations it made that focus on streamlining and efficiency, including the following:

Consolidating inventory. IRS merged its IDT compliance inventory with some of its other IDT inventory. IRS officials said this allows IRS to close cases faster because all of its cases are in one system and staff no longer need to transfer paper documents to different locations.

Managing case flow. IRS developed a matrix for assistors and managers to determine which functional area, such as exams or accounts management, can best work a case. According to IRS, this matrix reduces the frequency with which cases are transferred among units and gets IRS closer to establishing a single point of contact for taxpayers who are IDT victims.

Developing plans to improve the Identity Theft Affidavit (Form 14039). If taxpayers believe they have been victims of IDT refund fraud, IRS instructs them to complete and submit Form 14039, Identity Theft Affidavit. IRS officials said the agency plans to revise the form to streamline processing and reduce taxpayer burden. The revised form is to provide an option for the taxpayer to include the Social Security number of a secondary taxpayer who was also affected by the identity theft. This additional information will help IRS assistors better identify the true taxpayer.
Additionally, it can help prevent primary and secondary taxpayers from submitting separate forms, which can be burdensome for taxpayers and cause processing delays for IRS.

According to IRS officials, implementing these changes recommended by the IDT reengineering team contributed to IRS closing IDT cases faster. IRS reduced the average time an IDT case is open from 242 days in fiscal year 2012 to 106 days in fiscal year 2016. Nonetheless, overage rates increased from 0.7 percent in fiscal year 2015 to 8.8 percent in fiscal year 2016, which IRS officials attributed to normal fluctuations.

To examine customer service for IDT victims, we reviewed 16 IDT cases that were either open or closed between July 2015 and May 2016 (see appendix V for details on the case reviews). In addition, we conducted five discussion groups with 15 IRS assistors and 13 managers responsible for handling IDT cases in Atlanta and Kansas City, Missouri. The findings from the file review and discussion groups cannot be generalized to all IDT cases or to the perspectives of all IDT assistors and managers. Further, because IRS recently implemented some improvements, their effect may not be fully reflected in the cases we reviewed.

During our review, we observed several areas that contributed to delays in resolving cases (see table 1). Of the reasons for delays we observed, in addition to the complexity of cases, the following most frequently contributed to delays of a month or longer in handling a case. Assistors and managers in our discussion groups generally agreed that each of these issues was a primary contributor to delays.

Reassignments. In 6 of 16 cases we reviewed, we found that IRS's policies and procedures contributed to the length of time it took IRS to close the case. In these cases, IRS transferred work multiple times between different units and assistors.
IRS officials explained that this occurs to help IRS balance its workload and identify either the best-suited assistor or one with availability to work the case. The officials said the reengineering team has been addressing this issue and will continue to do so.

Inventory management. In 5 of 16 cases, the case remained in inventory while waiting for an assistor to review it. For example, case 15 was in the queue for more than a month awaiting an assistor's review before it was transferred to another assistor; it was closed in August 2016 after 193 days. IRS officials explained that declining resources have contributed to the length of time it takes to close a case.

File retrieval and scanning. In 3 of 16 cases we reviewed, file retrieval and scanning contributed to delays and unnecessary document requests. For cases 10 and 13, resolution was delayed by at least 1 month while the assistor waited for another unit to retrieve and scan documents into IRS's inventory system for use in reviewing the case. For case 10, the assistor waited about 5 weeks to receive the documents and closed the case about 2 days afterward. In case 13, the assistor requested documents twice, and it took IRS about 7 weeks to retrieve and scan them. During this time, IRS reassigned the case to another assistor, who closed it without receiving the documents. Similarly, in case 11, IRS took about 6 weeks to retrieve and scan the documents into IRS's systems, but the assistor closed the case about 3 weeks before receiving them. IRS officials explained that assistors may not require the documents to close a case, but many assistors prefer to have them. These officials noted that in June 2016, IRS revised some of its guidance to assistors on when to request a specific type of documentation for use in determining which tax return is legitimate.
In our discussion groups, 14 of 28 assistors and managers generally agreed that delays in receiving scanned documents were a primary factor delaying case resolution. Assistors and managers described a typical waiting period of more than 30 days for document requests to be fulfilled. IRS officials noted that some documents must be retrieved from IRS's paper records storage facilities, which can take time to locate and then scan.

In its fiscal year 2014-2017 strategic plan, one of IRS's objectives is to provide prompt assistance to IDT victims. Federal agencies can achieve their objectives and missions and improve accountability by having an effective internal control system. As set forth in Standards for Internal Control in the Federal Government, internal controls comprise the plans, methods, and procedures used to meet an entity's mission, goals, and objectives, which support performance-based management. Internal controls help agency program managers achieve desired results and provide reasonable assurance that program objectives are being achieved through, among other things, effective and efficient use of agency resources and ensuring that personnel have the required knowledge, skills, and abilities.

While IRS has taken some steps to resolve IDT cases more quickly, it is missing an opportunity to reduce delays and unnecessary requests related to retrieving and scanning documents. IRS officials stated that they are not reviewing the retrieval and scanning processes to identify efficiencies, such as prioritizing requests or providing guidance and training to assistors on which documents are required to close a case. Without identifying such efficiencies, cases are more likely to be delayed, which can slow the processing of returns and refunds in cases where a legitimate refund is due and increase the interest IRS pays on late refunds.
When we discussed this with IRS officials, they agreed that it was reasonable to review the file retrieval and scanning processes and said the IDT reengineering team could evaluate them as part of its ongoing efforts.

Based upon our case reviews and discussion groups, we identified one weakness in IRS's internal control processes that resulted in refunds paid to fraudsters and another potential weakness that could lead to additional releases of fraudulent refunds. Internal control standards require management to, among other things, design appropriate types of control activities, analyze and respond to changing conditions that affect the agency and its environment, and effectively manage the agency's workforce, including ensuring that personnel have the required knowledge, skills, and abilities to achieve organizational goals.

Refunds were released automatically. In one case we reviewed, case 10, IRS released a fraudulent refund of about $9,900 even though the tax return was flagged for potential IDT. IRS screens all tax returns for characteristics that it identified in previous IDT refund fraud schemes. If a return is flagged for review, IRS stops processing it, places a hold on the refund, and sends a letter asking the taxpayer to confirm his or her identity. However, IRS designed several refund holds to expire after a certain amount of time, ranging from 1 to 11 weeks. When a hold expired, IRS's computer systems automatically released the refund. In the case we identified, a hold had been placed on the account, but it expired before an assistor had completed the review, and IRS computer systems automatically processed the refund. IRS has processes to recoup refunds issued to the wrong person and, in the case we reviewed, has taken steps to do so. IRS reported that it identified this problem in October 2015 and removed the automatic expirations for this type of hold.

Assistors may release refunds before closing a case.
According to IRS assistors and managers who participated in our discussion groups, some assistors may release refunds that could be paid to fraudsters despite a refund hold being in place on the taxpayer's account. This can occur even when indicators on the account show that the tax return is under review for identity theft or that two returns have been filed for that taxpayer's account (a duplicate return filing). In three of our discussion groups, we asked how refunds could be released before a case is closed, and all 17 participants in those groups agreed that assistors may be releasing refunds when answering telephone inquiries about them. Several of these assistors and managers described this as a common occurrence and stated that, due to a lack of training, assistors may not understand the codes on a taxpayer's account. Moreover, some discussion group participants surmised that some of these callers could be fraudsters.

In contrast to the assistors and managers in our discussion groups, IRS senior officials told us they do not consider this a widespread error or the result of a lack of training. IRS officials further stated that the assistors and managers we spoke with might have been observing automatic refund releases, such as the one described above, and assumed that assistors were manually releasing refunds. Officials also said that the culture at IRS is such that assistors are reluctant to release a refund incorrectly and therefore tend to be cautious in taking such steps. To support their position, IRS officials provided us with data IRS collects and analyzes on duplicate refunds, such as instances where assistors manually processed a refund although one had already been issued. IRS also provided data it uses to assess the quality of assistors' work and to inform training needs.
However, neither set of data includes sufficient information for IRS officials to determine the extent to which one assistor may release a refund before another assistor closes an IDT or duplicate return case. In addition, in the data IRS uses to assess the quality of assistors' work, IRS undercounted the total number of erroneous refunds. Officials later explained that they had generated those data in response to our findings, but stated that the data were not routinely collected and did not reliably count the errors. We were therefore unable to use any of these data to support IRS's position that such errors were minimal and that assistors did not need training. After several discussions with us about the weaknesses we identified in the data, IRS officials acknowledged the weaknesses but maintained that their current methods are sufficient.

Without appropriate data to determine the extent to which assistors release refunds before an IDT or duplicate return case is closed, and the reasons for doing so, IRS is missing critical information on the effectiveness of its controls. If IRS cannot ensure its controls are effective, it risks losing revenue to IDT refund fraud that could be prevented.

IRS notifies primary and secondary taxpayers when it learns that either has been a victim of IDT refund fraud, but it does not notify taxpayers that their dependents' information may have been used to commit fraud. In one case we reviewed (case 11), a fraudster claimed the same dependents as the legitimate taxpayer had claimed that year. However, when IRS notified the taxpayer that he or she had been a victim of IDT, the notice did not mention that a thief had also stolen the dependents' identities and used them in the fraudulent return. According to IRS officials, the agency treats a dependent as an IDT victim if his or her Social Security number (SSN) was used fraudulently as that of a primary or secondary taxpayer.
However, this is not the case when a dependent's identity is used as a dependent on a fraudulent return, as we observed in case 11. In such instances, dependents do not yet have taxpayer accounts, so IRS officials stated that there are no protections IRS can provide, such as issuing an Identity Protection Personal Identification Number (IP PIN) or flagging those SSNs for use in its filters or other reviews of taxpayer returns.

IRS has previously provided guidance to taxpayers when a dependent was a victim of identity theft. After the Get Transcript data breach, IRS wrote to affected taxpayers whose dependents were also victims. In the letter, IRS provided information on actions that parents or guardians could take to protect a minor's identity. While IRS did not provide an IP PIN or other protections to dependents, it was proactive in notifying taxpayers of the stolen identities and offering guidance.

In the case we reviewed (case 11), the fraudster used the same dependent identities on the fraudulent return as the legitimate taxpayer did on his or her return, so IRS assistors could determine that the dependents were victims as they reviewed the case. However, fraudsters sometimes use the identities of dependents who may not be associated with the taxpayer as a means to increase the refund amount. IRS officials explained that, in such cases, they might not be able to verify whether the taxpayer was responsible for the dependent on the fraudulent return. In such cases, however, IRS need not confirm the relationship; it could simply inform taxpayers of the possibility that a fraudster compromised their dependents' identities so that the taxpayers can take further action.

IRS has a program that could help taxpayers determine whether their dependents' information appeared on a fraudulent return. Since 2015, IRS has allowed taxpayers to request a redacted copy of the fraudulent return that was filed using their identities.
In those redacted copies, IRS will provide the first four letters of the last names of the primary taxpayer, secondary taxpayer, and dependents included on the fraudulent return. This information could allow taxpayers to determine whether any of their dependents' names were included on the fraudulent return. However, IRS does not include information about this program in its notices to victims of IDT. IRS's practice of notifying the primary and secondary taxpayers when it learns that either has been a victim of IDT refund fraud is an important aspect of its customer service efforts and protections against IDT refund fraud; it allows the taxpayers to take action to protect their identities and allows IRS to protect against future fraud. However, by not notifying taxpayers that their dependents' information may have been used to commit fraud, IRS is limiting taxpayers' ability to take action to protect the dependents' identities. IRS has seen significant improvement in telephone service this year, in part due to budget increases. However, IRS still faces challenges in providing online services and processing correspondence in a timely manner. While IRS has taken steps to strategically manage its operations, information about IRS's expected performance is not easily accessible to taxpayers, which limits their ability to make more informed decisions about how and when to contact IRS. IRS has made strides in combating IDT refund fraud, which has widespread consequences for victims and their dependents. However, we found instances where IRS's processes for document retrieval and scanning delayed case resolution. Further, IRS does not have sufficient data to monitor whether fraudulent refunds are released before a case is closed. Finally, IRS does not notify taxpayers of the potential exposure of dependents' information that could lead to future fraud.
Protecting federal dollars, while enhancing IRS's case management and protecting taxpayer dependents, can help bolster the public's confidence in the tax system. We recommend that the Commissioner of Internal Revenue take the following four actions:
1. Develop and maintain an online dashboard to display customer service standards and performance information such that it is easily accessible and improves the transparency of its taxpayer service.
2. Review its document retrieval and scanning processes to identify potential training or guidance needs or other potential efficiencies.
3. Improve existing data and collect new data, as needed, to effectively monitor how often, and why, IRS assistors release refunds before closing an IDT or duplicate return case. Based upon these data, IRS should take corrective steps to reduce refund errors, such as providing training or immediate guidance to assistors.
4. Revise IRS's notices to IDT refund fraud victims to include information such as (1) whether any dependents were claimed on the fraudulent return, (2) to the extent possible, whether those dependents match any that the taxpayer claimed for the same tax year, and (3) how to request a redacted copy of the fraudulent return.
We provided a draft of this report to the Commissioner of Internal Revenue. IRS provided written comments, which are summarized below and reprinted in appendix VI. IRS also provided technical comments, which we incorporated where appropriate. IRS agreed with our recommendations to develop and maintain an online dashboard to convey customer service standards and performance information; review its document retrieval and scanning processes to provide additional training and guidance to ensure documents are not requested unnecessarily; and revise its notices to IDT victims to alert taxpayers of the need to protect dependent accounts from potential fraud and supplement information on its website.
IRS disagreed with the finding that it does not know the extent to which its internal control processes prevent the release of fraudulent refunds and with the related recommendation that it improve existing data and collect new data to effectively monitor how often IRS issues refunds before closing an IDT or duplicate return case. In its letter, IRS stated that GAO concluded that frozen refunds were being erroneously released to fraudsters by customer service employees. This is incorrect. As stated in our report, we identified a potential weakness that could lead to releases of fraudulent refunds. IRS also reported that it was aware that some refunds are released by assistors prior to the case being closed. Further, IRS maintains that its current methods are sufficient for detecting such errors and that the problem is not widespread. However, as we noted, neither set of data that IRS provided includes sufficient information for IRS to determine the extent to which the problem exists or the total number of erroneous refunds. After several discussions with IRS officials about the weaknesses we identified in the data, officials acknowledged these weaknesses and explained that they generated some of these data in response to our findings. Nevertheless, officials maintained that their current methods are sufficient. We stand by our finding that the data IRS provided are not sufficient to monitor how often and why assistors are releasing refunds before IDT or duplicate return cases are closed, and we believe that the associated recommendation is warranted. In response to our draft report, in January 2017 officials provided another analysis of IRS data that they said showed this type of error does occur but may not be as widespread as the discussion group participants suggested. We will continue to work with IRS to determine if these additional data are sufficient to address our recommendation.
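In principle, the monitoring described above reduces to comparing each refund-release date against the close date of the associated IDT or duplicate return case. The following is a minimal sketch of that comparison; the record layouts, field names, and data are purely illustrative assumptions and do not reflect IRS's actual systems.

```python
from datetime import date

# Hypothetical records; field names and values are illustrative only.
refund_releases = [
    {"case_id": "A1", "released_on": date(2015, 9, 28)},
    {"case_id": "A2", "released_on": date(2015, 5, 12)},
    {"case_id": "A3", "released_on": date(2016, 4, 30)},
]
case_closures = {
    "A1": date(2015, 10, 2),   # refund released 4 days before closure
    "A2": date(2015, 5, 12),   # released on the closure date
    "A3": None,                # case still open
}

def early_releases(releases, closures):
    """Flag refunds released before the associated case was closed --
    the condition the recommendation asks IRS to monitor."""
    flagged = []
    for r in releases:
        closed_on = closures.get(r["case_id"])
        # A release counts as early if the case is still open or was
        # closed only after the refund went out.
        if closed_on is None or r["released_on"] < closed_on:
            flagged.append(r["case_id"])
    return flagged

print(early_releases(refund_releases, case_closures))  # prints ['A1', 'A3']
```

With complete release and closure data, the same pass could also record why each early release occurred, which is the second element of the recommendation.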
We are sending copies of this report to the appropriate congressional committees, the Commissioner of Internal Revenue, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or lucasjudyj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Our objectives in this report were to assess how well the Internal Revenue Service (IRS) provided customer service compared to its performance in prior years and describe what is known about the cost of calls on selected IRS telephone lines; how well IRS processed individual income tax returns compared to its performance in prior years; and IRS's efforts to improve customer service for identity theft (IDT) victims, including selected internal control processes. To answer the first two objectives, we obtained and analyzed IRS documents and data, including performance, budget, and workload data for taxpayer services and return processing, and used this information to compare IRS's performance in 2016 to its performance in 2011 through 2015, which allowed us to identify trends and anomalies over a 6-year period; collected data and interviewed IRS officials who manage IRS toll-free telephone lines to understand how IRS plans and allocates its resources in managing its telephone service, and what data IRS has available to do so, such as the average cost per call; interviewed officials from IRS's Wage and Investment division (which is responsible for managing filing season operations) and external stakeholders to obtain contextual information about IRS's performance.
The external stakeholders we selected are major companies that prepare millions of tax returns and organizations in the tax preparation industry that frequently interact with IRS on key aspects of the filing season; identified federal standards for evaluating customer service, such as the Government Performance and Results Act (GPRA) Modernization Act and Executive Orders, Presidential Memorandums, and Office of Management and Budget guidance to strengthen customer service, and compared IRS actions to those standards; and reviewed prior GAO reports, including filing season and IRS budget reports, reviews of the premium tax credit, our evaluation of IRS's website, and an agency dashboard, and evaluated IRS's actions to implement selected prior recommendations. To answer the third objective, we reviewed prior GAO reports on IDT refund fraud and interviewed IRS officials who oversee customer service for IDT victims, including members of the IRS reengineering task team, which is charged with reviewing IRS processes and procedures to identify ways to improve the identity theft taxpayer experience. We also collected and reviewed data on IDT cases, such as the total number of IDT cases and the average number of days each case was open. Further, we conducted a file review of 16 IDT victim case files in Atlanta, where IRS's Wage and Investment Division is located. This division plays a key role in IDT prevention and case management, and Atlanta is one of eight locations where IRS assistors handle IDT cases. The findings from this file review cannot be generalized to all IDT cases. We identified these cases by using a stratified random sample from an IRS-provided list of all IDT cases open at any point between July 2015 and May 19, 2016. Since the focus of this file review was to better understand the characteristics of a variety of IDT case types and the steps that IRS takes to resolve them, we designed the selection process to include cases with varied statuses.
Specifically, we drew cases from three groups: (1) open (unresolved) cases, (2) cases closed in less than 120 days (short cases), and (3) cases closed in 120 or more days (long cases). In 2015, IRS reported that a typical IDT case could take 120 days to resolve, so we used this length of time as the threshold for separating short and long cases. For the open case sample, we excluded IDT cases that had been open for less than 120 days to ensure that enough casework had occurred for us to observe in our file review. We sorted the remaining open cases by IDT case type and sampled randomly within each case type. For each case type category, we oversampled to account for any cases that had recently closed and to select cases with refund interest, which is an extra cost to the government. We used a similar process for selecting the short and long closed cases. We sorted closed cases by IDT case type and length and selected at random within each category, oversampling to ensure a sufficient number of cases were available for our review. We sent 225 IDT case numbers to IRS with instructions about the order in which officials should pull the files for review. During our file review, we verified these steps to ensure that IRS officials had followed our instructions accurately. We conducted a file review of 16 IDT cases, using a standardized data collection instrument (DCI) developed for the review. To develop the DCI, we conducted a pilot test and made revisions based on the pilot and comments from IRS officials. To ensure that our efforts conformed to GAO's data quality standards, another team member reviewed each of the 16 DCIs that we completed, comparing the data recorded in each DCI to the data in the corresponding case file to determine whether they agreed on how the data were recorded. When the analysts' views on how the data were recorded differed, they met to reconcile the differences.
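The case-selection steps described above amount to stratified random sampling with oversampling within each stratum. The following is a minimal sketch of that logic under simplified assumptions; the case records, field names, and cell size are hypothetical and do not reproduce the actual selection.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical case records; field names and values are illustrative, not IRS's.
cases = [
    {"id": i,
     "status": random.choice(["open", "closed"]),
     "case_type": random.choice(["IDT", "duplicate return"]),
     "days_open": random.randint(10, 400)}
    for i in range(1000)
]

THRESHOLD = 120  # IRS reported a typical IDT case could take 120 days to resolve

def stratify(case):
    """Assign a case to one of the three groups used in the file review,
    or None if it is excluded from sampling."""
    if case["status"] == "open":
        # Open cases under 120 days are excluded so enough casework exists to observe.
        return "open" if case["days_open"] >= THRESHOLD else None
    return "short" if case["days_open"] < THRESHOLD else "long"

def sample_cases(cases, per_cell=25, seed=42):
    """Sort cases into (group, case type) cells and sample randomly within each,
    oversampling so enough cases remain if some become unavailable."""
    rng = random.Random(seed)
    cells = {}
    for case in cases:
        group = stratify(case)
        if group is not None:
            cells.setdefault((group, case["case_type"]), []).append(case)
    selection = []
    for _, members in sorted(cells.items()):
        rng.shuffle(members)
        selection.extend(members[:per_cell])  # oversampled relative to the final review
    return selection

sample = sample_cases(cases)
# No sampled open case has less than 120 days of casework to observe.
assert all(c["days_open"] >= THRESHOLD for c in sample if c["status"] == "open")
```

Sampling within each (group, case type) cell is what guarantees that every combination of status and case type is represented, which a simple random draw over the whole list would not.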
In addition, IRS assistors who regularly work IDT cases, and other officials, assisted us by explaining the cases and answering our questions while we completed and confirmed information in the DCIs. Due to the complexity and uniqueness of each case, we took detailed notes about the cases in addition to completing the DCIs. We used the information collected to summarize the 16 case study reviews presented in appendix V. To ensure we correctly understood the information, we sought input and review from IRS officials and included their comments as appropriate. Finally, we assessed whether IRS's procedures for working IDT cases follow standards from Standards for Internal Control in the Federal Government. We selected the most relevant control standards as criteria. Additionally, to obtain the perspectives of IRS assistors and managers who are responsible for handling and reviewing IDT cases, we held five discussion groups with selected employees at IRS campuses in Atlanta and Kansas City, Missouri. We selected these locations based upon the combination and availability of staff that manage IDT-related work, as described below. We held three discussion groups with assistors and two groups with managers who oversee assistors handling IDT cases. The findings from these discussion groups cannot be generalized to the perspectives of all IDT assistors and managers. All participants worked in one of the following groups: IRS's Return Integrity Compliance Services, which reviews returns for potential IDT prior to processing; Accounts Management, which reviews IDT cases as part of adjusting taxpayer accounts when taxpayers have been victims of IDT; or Field Assistance, which provides service to possible IDT victims who visit an IRS walk-in site.
To identify participants in Atlanta, we asked an IRS official to locate participants and arrange the discussion groups with assistors and managers who met the criteria mentioned above and who work in the IRS facility we visited as part of our IDT case file review. To identify participants in Kansas City, IRS officials provided us with contact information for employees who met the criteria mentioned above, and we contacted those employees directly to schedule and organize the discussion groups. We conducted the Atlanta discussion groups in person and the Kansas City discussion groups via conference call. For each group, we used a standardized discussion guide, one for the managers and a different one for the assistors, in order to improve the consistency and quality of the information gathered. Each group contained between 4 and 9 participants. To encourage participants to speak openly, we ensured that no senior IRS management officials were present during the discussions. At the beginning of each group, we explained that any comments and opinions provided would be reported in summary form, and individual assistors would not be identified. We used a standardized set of questions when interviewing the assistors and managers that focused on their experiences reviewing IDT cases and their suggestions, if any, for how IRS can more efficiently provide assistance to taxpayers who are IDT victims. We did not administer one question about releasing refunds to the first two discussion groups in Atlanta because we identified the issue during the course of our file review and after conducting the first two groups. IRS provides data on filing season processing and customer service, as well as IDT casework, in a variety of reports.
Accordingly, we used various IRS telephone reports (the telephone product line snapshot, enterprise snapshot, interactive performance template, busy signals and disconnects, and tax law and phone accuracy) to analyze and report on key elements of IRS's telephone service, such as the level of service, wait time, and call volume. Similarly, we reviewed IRS's processing reports to analyze and report key aspects, including the number of returns processed and refund data. We reviewed reports on IDT case inventory and closures. In reviewing these reports, we examined the data to identify obvious errors or outliers and assessed potential data limitations that would affect use of the data for assessing IRS's performance during the filing season. We also reviewed IRS's responses to questions we asked about the accuracy and reliability of these data. We determined that the data presented in this report are sufficiently reliable for the purposes of our reporting objectives. We conducted this performance audit from January 2016 to January 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Legend: FY = fiscal year; n/a = not applicable. Fiscal year 2014 data for walk-in sites reported in this table differ from data reported in our report on the 2014 IRS filing season (see GAO-15-163). Specifically, the data in this table reflect IRS data reported through September 30, 2014, while data in our report reflect IRS data reported through September 27, 2014. Walk-in site return preparation counts include both individual and business contacts.
Account work notices include assistance to taxpayers who need to pay taxes owed and to victims of identity theft. Beginning in fiscal year 2012, IRS accounted for contacts where taxpayers made payments separately from other account work notices. Other contacts include responding to correspondence, scheduling appointments, authenticating Individual Taxpayer Identification Numbers, and providing self-assistance services, which do not fall into the defined categories. The number of individual tax returns processed includes Forms 1040, 1040A, and 1040EZ. In March 2015, a fraudster electronically filed (e-filed) a 2014 tax return that the Internal Revenue Service (IRS) accepted. Later that month, the fraudster received a refund of more than $10,000 via direct deposit. In early April 2015, the legitimate taxpayer attempted to e-file a return, which IRS rejected because it had already received one for that taxpayer. That same month the legitimate taxpayer filed a paper 2014 tax return claiming a refund of about $3,500. In July 2015, IRS opened a case because it had received two tax returns for the same taxpayer. It sent the legitimate taxpayer a letter asking if he/she had filed two 2014 returns. About a month later, the taxpayer responded stating he/she had filed only one 2014 return. An assistor reviewed the case in late September and confirmed the taxpayer was an identity theft (IDT) victim. IRS released the taxpayer's $3,500 refund and also paid about $23 in interest. In early October 2015, IRS closed the case and sent the taxpayer a letter stating he/she had been an IDT victim. IRS took 125 days to close the case because it remained in inventory while IRS waited for the taxpayer to respond. In early January 2016, IRS sent the primary and secondary taxpayers Identity Protection Personal Identification Numbers (IP PINs) to use for filing their 2015 tax return since the fraudster had both of their Social Security numbers (SSN).
In February 2015, a fraudster e-filed a 2014 tax return that IRS accepted. Later that month, the fraudster received a refund of approximately $9,600 via direct deposit. Also in February, the legitimate taxpayer attempted to e-file a return. IRS rejected the return because it had already received one for that taxpayer. In April 2015 the legitimate taxpayer and his/her spouse filed a paper 2014 tax return. The taxpayer reported owing about $200, which was included with the return. The taxpayer also included Form 14039, Identity Theft Affidavit, for the primary taxpayer only (although the form does not have an option to include others affected, such as a spouse or dependent, IRS procedures require the same treatment for a spouse if his/her information was also used on the fraudulent return). In May 2015, IRS sent a letter to the taxpayer acknowledging receipt of the affidavit. In early June 2015, IRS opened an IDT case because it received the affidavit. In August, IRS confirmed IDT for both the legitimate taxpayer and spouse. IRS officials attributed the 2-month delay, in part, to resource constraints, in which IRS held the case in inventory while waiting to assign it to an available assistor. After 65 days, IRS closed the case in August 2015 on the same day it was assigned to an assistor, and sent the taxpayer a letter stating he/she had been an IDT victim. In early January 2016, IRS sent the primary and secondary taxpayers IP PINs to use for filing their 2015 tax return. In March 2015, a fraudster e-filed a 2014 tax return that IRS accepted. Later that month, the fraudster received a refund of about $3,700 via direct deposit. The legitimate taxpayer attempted to e-file a return. IRS rejected the return because it had already received one for that taxpayer. The legitimate taxpayer then mailed IRS a 2014 tax return claiming a refund of about $1,200. In May 2015, IRS opened a case because it had received two tax returns for the same taxpayer.
In late July 2015, IRS closed the case and sent the taxpayer a letter stating the taxpayer had been an IDT victim. IRS closed the case in about 3 months. However, because of the timing of when IRS received the fraudulent and legitimate returns, it inadvertently continued to hold the refund. About 9 months later, in April 2016, the taxpayer called IRS to inquire about the status of the refund. At the end of April 2016, nearly 1 year after IRS opened the IDT case, IRS released the refund hold and the taxpayer received a refund of about $1,200. IRS also paid the taxpayer about $40 in interest. IRS later determined a third party had obtained unauthorized access to the taxpayer’s tax return information through the “Get Transcript” application. In early January 2016, IRS had sent the taxpayer an IP PIN to use for filing his/her 2015 tax return. In February 2015, a fraudster e-filed a 2014 tax return that IRS accepted. A few weeks later, the fraudster received a refund of about $4,800 via direct deposit. In April 2015, the legitimate taxpayer attempted to e-file a return. IRS rejected the return because it had already received one for that taxpayer. The taxpayer and his/her spouse then mailed a paper 2014 tax return requesting a refund of $4,200. In May 2015, IRS opened a case because it had received two tax returns for the same taxpayer, confirmed it was IDT, and released the legitimate taxpayer’s refund. The legitimate taxpayer’s bank refused the direct deposit refund because the taxpayer reported an incorrect bank routing or account number; IRS then sent the taxpayer the refund via paper check in June. IRS also paid the taxpayer about $15 in interest. However, IRS held the case in inventory and did not assign it to an IDT assistor to complete processing until the end of July. In September 2015, IRS closed the case and sent the taxpayer a letter confirming he/she was an IDT victim. IRS took 134 days to close this case, in part, because it remained in inventory. 
Officials primarily attributed this delay to resource constraints. The case then remained open about 3 months after IRS issued the refund, which officials attributed to multiple assistors completing the final steps to close the case. In early January 2016, IRS sent the primary and secondary taxpayers IP PINs to use for filing their 2015 tax return. In February 2015, a legitimate taxpayer e-filed a 2014 tax return that IRS accepted; the return claimed two dependents and requested a refund of about $5,100. IRS put a hold on the refund because its IDT filters identified suspicious information. IRS sent the taxpayer a letter asking the taxpayer to confirm his/her identity. However, the taxpayer did not respond, so IRS did not post the return information to the taxpayer's account. In June and July 2015, IRS received two copies of the same 2014 paper tax return, which it identified as potential IDT. In August 2015, the legitimate taxpayer visited an IRS walk-in site and was told to allow 6 weeks to receive a refund. IRS then opened an IDT case because the taxpayer had confirmed his/her identity. Between October 2015 and January 2016, IRS reassigned the case to multiple assistors seeking one to work the case. In January 2016, IRS received multiple IDT affidavits for tax years 2011, 2013, and 2014. However, IRS determined there was no IDT for these years. In March 2016, the taxpayer called IRS saying he/she had not yet received a 2014 refund. IRS gave the taxpayer the phone number for the National Taxpayer Advocate, which can provide expedited assistance. In June 2016, IRS released the 2014 refund, which included almost $200 in interest. A month later, IRS closed the case and sent the taxpayer a letter confirming the taxpayer was an IDT victim in prior years, but not for those years included on the IDT affidavits. IRS took 329 days to close this case because it remained in inventory.
Officials primarily attributed this delay to the difficulty of determining which years the taxpayer was a victim of IDT. Since the case included multiple tax years, IRS policy did not permit closing the case until IDT for all tax years had been resolved. Because IRS closed this case in 2016, it did not send the taxpayer an IP PIN to use for filing his/her 2015 tax return. In January 2015, a fraudster e-filed a 2014 tax return that IRS accepted. A month later, the fraudster received a refund of about $1,500 via direct deposit. In March 2015, the legitimate taxpayer attempted to e-file a return, which IRS rejected because it had already received a return for that taxpayer. The legitimate taxpayer mailed a paper 2014 tax return requesting a refund of more than $1,600 and included an IDT affidavit. IRS mailed a letter to the legitimate taxpayer acknowledging receipt of the affidavit and, in May 2015, opened an IDT case. The case remained in inventory for about 3 months and was also reassigned to multiple assistors seeking one to work the case. Officials primarily attributed these delays to resource constraints. In August 2015, IRS closed the case after 103 days and sent the legitimate taxpayer a letter confirming the taxpayer was an IDT victim. About 1 month later, after confirming the taxpayer's address because of a recent move, IRS processed a paper refund check to the taxpayer for more than $1,600 plus about $20 in interest. In early January 2016, IRS sent the taxpayer an IP PIN to use for filing his/her 2015 tax return. In February 2015, a fraudster e-filed a 2014 tax return that IRS accepted. IRS attempted to process a direct deposit refund of about $5,400 to the fraudster's bank account in early March 2015, but the bank rejected it. IRS did not suspect fraudulent activity, so it wrote to the legitimate taxpayer, whose correct address was on the fraudulent return, stating it was unable to process the refund via direct deposit and would send a paper check.
IRS issued that check at the end of March. The legitimate taxpayer received the check, called IRS stating he/she had not yet filed, and returned the check in early April 2015. The legitimate taxpayer then sent IRS a paper 2014 tax return requesting a refund of about $10,700. In May 2015, IRS opened an IDT case because it had received two tax returns for the same taxpayer. About 1 month later, an assistor confirmed IDT and a few weeks later sent the legitimate taxpayer a letter stating that IRS was reviewing the return further. In July, the assistor sent the case to IRS's international unit for further review. It remained in inventory for more than 5 months. During this time, the legitimate taxpayer called IRS several times to check on the status of the case and refund. In January 2016, IRS closed the case and the next month sent the taxpayer a letter confirming the taxpayer was an IDT victim. IRS then issued the taxpayer's refund of about $10,700 plus about $265 in interest. IRS took 261 days to close the case, which officials attributed in part to resource constraints. In early January 2016, IRS had sent the taxpayer an IP PIN to use for filing his/her 2015 tax return. In early February 2015, a legitimate taxpayer e-filed a 2014 tax return requesting a refund of more than $600, which IRS accepted. One month later, the taxpayer mailed IRS a duplicate copy of the return. IRS put a hold on these refunds because its IDT filters identified suspicious information and froze the taxpayer's account. In early June 2015, the taxpayer called IRS regarding the 2014 return, but the assistor could not provide information due to possible IDT concerns. The taxpayer had not filed a return since 2008. In mid-August 2015, the taxpayer submitted an IDT affidavit claiming IDT in 2008, 2010, 2011, and 2013 (for purposes of our study, we reviewed only the 2011 case). In August 2015, IRS sent a letter to the taxpayer acknowledging receipt of the affidavit and opened a case.
The next month, IRS confirmed the taxpayer was an IDT victim for tax year 2011. IRS had processed a fraudulent tax return associated with the taxpayer for that year with about a $500 refund. However, the fraudster did not receive the refund because the legitimate taxpayer had a balance due, so IRS applied that amount toward the balance. While reviewing the case, IRS removed the fraudulent refund information from the taxpayer's account for 2011, which reinstated the taxpayer's 2008 balance due. IRS then applied the taxpayer's 2014 refund of about $600 plus $15 of interest to the taxpayer's account. In September 2015, IRS closed the case after about 1 month and sent the taxpayer a letter confirming the taxpayer was an IDT victim in 2011. In early January 2016, IRS sent the taxpayer an IP PIN to use for filing his/her 2015 tax return. In September 2011, a fraudster e-filed a 2010 tax return, which IRS accepted, requesting a refund of about $4,800 via direct deposit. IRS applied about $2,900 of this refund to the legitimate taxpayer's balance from 2008. The fraudster received the remaining balance of about $1,900 via direct deposit. In August 2013, the legitimate taxpayer received a notice for not filing and paying taxes owed for tax year 2011. In April 2015, the taxpayer filed tax returns for 2009 through 2015, including for 2010, for which a fraudster had already filed. About 11 months later, in March 2016, IRS opened a case because it had already received a return that it later determined was fraudulent. It took 11 months to open the case, in part, because the taxpayer had submitted the return several years late, prompting IRS to process it as an amended return. The same month, IRS determined that tax year 2010 was the only year that involved IDT. In September, IRS closed the case after 524 days. According to officials, it took IRS this long to close the case due to the multiple tax years involved. IRS assessed that the taxpayer owed about $2,900 plus penalties.
Further, the taxpayer was due a refund of about $4,700 for tax year 2010, which he/she did not receive because he/she filed outside the statute of limitations for that tax year. Because IRS closed this case in 2016, the taxpayer did not receive an IP PIN to use for filing his/her 2015 tax return. In February 2015, a fraudster e-filed a 2014 tax return, which IRS accepted, requesting a refund of about $9,900 via direct deposit. IRS sent the taxpayer a letter because of income that appeared suspicious. IRS put a hold on the refund because its IDT filters identified potential fraud. The taxpayer responded the following month. About that same time, the legitimate taxpayer attempted to e-file a return. IRS rejected it because it had already received one for that taxpayer. At the end of March 2015, IRS received a paper copy of the 2014 tax return and an IDT affidavit from the legitimate taxpayer. That same month, IRS opened an IDT case because it received the affidavit. IRS reduced the legitimate taxpayer's refund from $3,600 to about $3,400 after correcting errors made by the taxpayer. This process took about 3 months. Before IRS could confirm whether the return it received in February 2015 was fraudulent, the hold it placed on the refund automatically expired in early May 2015. At this time, IRS's systems released a direct deposit of almost $9,900 to the fraudster's bank account. However, the bank declined it. In accordance with IRS processes, IRS then sent a paper check. The check was delivered to the legitimate taxpayer because the fraudster had used the taxpayer's correct address on the fraudulent return. The taxpayer cashed this check, although it was about $6,300 more than he/she had claimed. In late July 2015, IRS confirmed the e-filed return was fraudulent—5 months after being flagged by IRS filters and 4 months after receiving an IDT affidavit and a paper tax return from the legitimate taxpayer.
In early September 2015, IRS processed the legitimate taxpayer’s return after correcting several errors. At that time, an assistor requested a copy of the paper documents to review the errors and assess the case for identity theft. It took about 5 weeks for IRS to retrieve and scan these documents. About 2 days after receiving the documents, in October 2015, IRS closed the IDT case and sent the taxpayer a letter confirming the taxpayer was an IDT victim. It then froze the account because the taxpayer had received an excess refund. IRS took 232 days to close the case, primarily because of the time needed to correct errors on the taxpayer’s return and delays in retrieving documents from IRS’s paper records storage facilities and scanning them into IRS’s systems. In November 2015, IRS sent a letter to the taxpayer stating he/she must repay the erroneous refund within 21 days or interest would be charged thereafter. When the taxpayer filed his/her 2015 tax return, it included a refund, which IRS used to pay the balance of the erroneous 2014 refund. In early January 2016, IRS sent the primary and secondary taxpayers IP PINs to use for filing their 2015 tax return.

In February 2015, a fraudster mailed a paper 2014 tax return claiming two dependents, which IRS accepted. IRS later issued the fraudster a paper check refund of about $4,300. A few weeks later, IRS received a paper 2014 tax return from the legitimate taxpayer claiming the same two dependents and requesting a refund of about $3,900. In March 2015, IRS opened a case because it had received two paper tax returns for the same taxpayer. IRS reassigned the case through various managers, team leads, and assistors for about 3 months before it identified someone who could work the case. In mid-June, the assistor requested that both paper returns be retrieved from IRS’s paper records storage facilities and scanned into IRS systems to verify whether the second return was a duplicate or the result of IDT.
This was done because the taxpayer’s identification number on the attached Form W-2s differed from the numbers on both tax returns. Although the requested paper returns had not yet been received, about 3 weeks later, IRS closed the case and sent the taxpayer a letter confirming the taxpayer was an IDT victim. However, the letter did not notify the taxpayer that the dependents’ information may have been used to commit fraud. IRS reduced the legitimate taxpayer’s refund to about $2,500 because it could not verify one of the dependents claimed on the return—the legitimate taxpayer made an error entering a dependent’s name. However, the fraudster was able to claim both dependents because that return had the correct name and SSN for both dependents. IRS took 127 days to close this case, in part because of delays in requesting and scanning documents and because of resource constraints. At the end of July, IRS had retrieved and scanned the requested documents into its systems, about 6 weeks after the assistor requested them. In August, IRS processed the legitimate taxpayer’s refund, which included about $23 in interest. IRS did not issue an IP PIN for this taxpayer to use for filing his/her 2015 tax return because the taxpayer filed the return using an individual taxpayer identification number, not an SSN.

In February 2015, a fraudster e-filed a 2014 tax return that IRS accepted. The fraudster received a refund of about $9,600 via direct deposit. In April 2015, IRS received a paper 2014 tax return from the legitimate taxpayer that included a payment of about $1,500. In June, IRS opened a case because it had received two tax returns for the same taxpayer. One month later, IRS assigned an assistor to the case. In August, IRS closed the case after 64 days. It then sent the taxpayer a letter confirming he/she was an IDT victim and acknowledging that IRS had received the payment. Thus, the taxpayer had no remaining balance due.
IRS later determined a third party obtained unauthorized access to the taxpayer’s tax return information through the “Get Transcript” application. In early January 2016, IRS sent the primary and secondary taxpayers an IP PIN to use for filing their 2015 tax return.

In April 2015, a legitimate taxpayer filed for an extension for tax year 2014 and sent in a multimillion-dollar payment. IRS put a hold on the account because of the large payment. About that same time, a fraudster e-filed a 2014 tax return, which IRS accepted, requesting a refund of about $5,200. However, the fraudster did not receive a refund because of the hold. In October 2015, the legitimate taxpayer attempted to e-file the 2014 return. However, IRS rejected the return because it had already received one for that taxpayer. In early November, the legitimate taxpayer filed a paper 2014 return and requested the refund be applied as an estimated tax payment for 2016. In November 2015, IRS opened a case because it had received two tax returns for the same taxpayer. Between December 2015 and January 2016, IRS reassigned the case through multiple managers and assistors. It closed the case in January, after 52 days, and sent the taxpayer a letter confirming he/she was an IDT victim. However, IRS quality review staff reopened the case in late January 2016, in part because the assistor who closed the case did not properly review the taxpayer’s foreign income credit. In March 2016, IRS reassigned the reopened case to the international unit, where an assistor determined that the amount of foreign income did not meet IRS’s criteria for further review. It sent the case back to an assistor who had been involved in the case in January. Between March and June 2016, IRS reassigned the reopened case to multiple assistors, one of whom requested in early May that documents be retrieved and scanned. Despite an additional request for scanned documents, none were received. At the end of June 2016, IRS closed the case again.
For both the initial case and the subsequent review, IRS took a total of 216 days to close the case, which officials primarily attributed to resource constraints. Because IRS closed this case in 2016, it did not send the taxpayer an IP PIN to use for filing his/her 2015 tax return.

In March 2015, a fraudster e-filed a 2014 tax return, which IRS accepted, requesting a refund of about $7,200. Later that month, IRS held the refund because its IDT filters identified potential fraud. IRS sent a letter to the taxpayer stating IRS would be reviewing the return. The legitimate taxpayer received the letter because the fraudster used the legitimate taxpayer’s address. In April 2015, the taxpayer called IRS and stated he/she had not yet filed a return. IRS put another hold on the refund and wrote to the taxpayer again, this time confirming it would hold the refund. This same month, the legitimate taxpayer filed for an extension. In July 2015, the legitimate taxpayer’s power of attorney called IRS to say that the taxpayer would be filing a return. A week later, IRS received an IDT affidavit from the legitimate taxpayer’s spouse. In October 2015, the taxpayer sent a paper 2014 tax return to IRS requesting a refund of about $3,500. IRS opened a correspondence case because the taxpayer included with the return several pieces of written correspondence that he/she had previously received from IRS. IRS took 5 months to process the return. Because the taxpayer included a copy of the letter IRS sent to him/her in March, IRS initially treated the return as correspondence rather than a tax return. During this time, IRS reassigned the case through various managers, team leads, and assistors. In March 2016, IRS closed the case and sent the taxpayer a letter confirming he/she was an IDT victim. About 3 weeks later, IRS processed a refund of about $3,400. This represented the full refund minus a balance of less than $100 from a prior year, plus about $100 in interest.
IRS took 144 days to close the case, in part due to misclassification of the case as correspondence and its reassignment through multiple assistors. Officials primarily attributed these delays to resource constraints. Because IRS closed this case in 2016, it did not send the taxpayer an IP PIN to use for filing his/her 2015 tax return.

In February 2014, a fraudster e-filed a 2013 tax return, which IRS accepted, requesting a refund of about $5,500. Over the next 6 weeks, IRS put multiple holds on the refund because its IDT filters identified potential fraud. As a result, the fraudster did not receive a refund. In April 2014, the legitimate taxpayer filed for an extension for tax year 2013. Later that month, IRS sent the taxpayer a letter stating the return was under review. In October 2015, IRS received a paper 2013 tax return from the legitimate taxpayer showing a balance owed. However, IRS was unable to post return information to the taxpayer’s account until February 2016 because it had to correct a taxpayer error before it could process the return. IRS then opened an IDT case and assigned it to an assistor. The assistor did not take action for more than 2 months. IRS reassigned the case to a different assistor, who determined that the legitimate taxpayer was an IDT victim, that he/she had a balance due on the 2013 return, and that a penalty and interest should be assessed because of the late filing. IRS then reassigned the case several times to identify an assistor with availability and the training to calculate the applicable penalties. When IRS closed this case in August 2016, the taxpayer owed about $6,500 in tax; IRS also assessed penalties and interest of more than $3,500. IRS took 193 days to close the case, in part due to taxpayer errors it had to correct as well as reassignment of the case to multiple assistors. IRS attributed these delays primarily to resource constraints.
Because IRS closed this case in 2016, it did not send the taxpayer an IP PIN to use for filing his/her 2015 tax return.

In January 2015, a fraudster e-filed a 2014 tax return that IRS accepted. The following week, the fraudster received a refund of about $1,900 via direct deposit. At the end of January 2016, 1 year later, the taxpayer called IRS and was told that the 2014 refund had been issued the previous year via direct deposit. However, the taxpayer stated he/she had not yet filed a 2014 return. The assistor advised the taxpayer to submit an IDT affidavit. IRS received the affidavit in early February 2016, along with the taxpayer’s paper 2014 tax return. In March 2016, IRS opened an IDT case and sent the taxpayer a letter stating it had received the affidavit. In mid-April 2016, IRS closed the case, after 71 days, and sent the taxpayer a letter stating he/she had been an IDT victim. In mid-May, IRS sent the legitimate taxpayer the refund plus about $60 in interest. Because IRS closed this case in 2016, it did not send the taxpayer an IP PIN to use for filing his/her 2015 tax return.

In addition to the contact named above, Joanna Stamatiades, Assistant Director; Erin Saunders-Rath, Analyst-in-Charge; Jehan Chase; James Cook; Robert Gebhart; Kirsten B. Lauber; Kimberly Madsen; and Robert Robinson made key contributions to this report.
GAO was asked to review IRS's 2016 filing season. This report assesses, among other things, how well IRS provided service to taxpayers compared to its performance in prior years, and its efforts to improve service for IDT victims, including selected internal control processes. GAO analyzed IRS documents and data for fiscal years 2011 through 2016 and reviewed 16 randomly selected IDT cases open or closed during a 10-month period in 2015 and 2016. GAO also conducted 5 discussion groups with 15 IRS assistors and 13 managers who handle IDT cases, and interviewed IRS officials and external stakeholders, such as representatives from the tax preparation industry. The results of the case studies and discussion groups are not generalizable. GAO compared IRS actions to federal standards for evaluating performance and internal control.

The Internal Revenue Service (IRS) provided better telephone service to callers during the 2016 filing season—generally between January and mid-April—compared to 2015. However, its performance during the full fiscal year remained low. IRS does not make this or other types of customer service information easily available to taxpayers, such as in an online dashboard. Without easily accessible information, taxpayers are not well informed on what to expect when requesting services from IRS. IRS has improved aspects of service for victims of identity theft (IDT) refund fraud. However, inefficiencies contribute to delays, and potentially weak internal controls may lead to the release of fraudulent refunds. In turn, this limits IRS's ability to serve taxpayers and protect federal dollars. While IRS has reduced its backlog of IDT cases and formed a team to improve its handling of these cases, GAO has identified areas for potential improvement. Specifically, file retrieval and scanning processes contributed to delays and unnecessary requests for documents.
For example, in 2 of 16 cases, resolution was delayed by at least 1 month while an assistor waited for another unit to retrieve and scan documents into IRS's system. In one of those cases, and in one other, the document request was unnecessary because the assistor closed the case without the document. Inefficient processes and unnecessary requests to retrieve and scan documents can delay case resolution and refunds to the legitimate taxpayer. Potential weaknesses in IRS's internal control processes could lead to IRS paying refunds to fraudsters. In discussion groups with GAO, IRS assistors and managers said some assistors may release refunds even if indicators on the account show that the tax return is under review for IDT, or two returns have been filed for that taxpayer. Some participants said assistors answering telephone calls can release these holds because they do not understand the codes on the taxpayer's account. IRS officials said that these errors are not widespread and provided data to support their position. However, GAO identified weaknesses in those data, which IRS officials acknowledged. In response to this report, in January 2017 officials provided another analysis of IRS data that they said showed this type of error does occur but may not be as widespread as staff and managers suggested. GAO will continue to work with IRS to determine if these additional data are sufficient to address its recommendation. IRS does not notify taxpayers when a dependent's identity appears on a fraudulent return. According to IRS officials, the agency does not consider a dependent to be a victim if his or her Social Security number was used as a dependent on a fraudulent return. However, IRS has previously provided guidance to taxpayers when a dependent was a victim of identity theft. After one data breach in 2015, IRS notified taxpayers and provided information on actions that parents could take to protect a minor's identity when their dependents were also victims.
By not notifying taxpayers that their dependents' information may have been used to commit fraud, IRS is limiting taxpayers' ability to take action to protect their dependents' identity. GAO recommends IRS display customer service standards and performance online; review its retrieval and scanning processes; improve existing data or collect new data to monitor how and why assistors release refunds before closing an IDT or duplicate return case; and revise its notices to IDT victims. IRS disagreed with GAO's recommendation to improve data for monitoring refund releases, stating that the problem is not widespread and current processes are sufficient. GAO maintains that the data IRS uses are not sufficient to make such a determination. IRS agreed with the remaining three recommendations.
The Congress established the REACH program to minimize the health and safety risks that result from high energy burdens; prevent homelessness as a result of inability to pay energy bills; increase the efficiency of energy usage by low-income families; and target energy assistance to individuals who are most in need. The REACH legislation requires that project plans provide a variety of services and benefits, which may include energy efficiency education, residential energy demand management services, counseling related to energy budget management and payment plans, and negotiations with home energy suppliers on behalf of eligible households. The legislation further requires each state’s plan to describe performance goals for its projects and the indicators the state will use to measure whether each project has achieved its performance goals. OCS’ stated purpose for the REACH projects is to demonstrate the long-term cost-effectiveness of supplementing energy assistance payments with nonmonetary benefits that can increase the ability of eligible households to meet energy costs and achieve energy self-sufficiency. REACH is part of a much larger program, LIHEAP, which primarily provides financial assistance to low-income households for home heating and cooling. REACH projects may target their services to a portion of the population eligible for LIHEAP, such as a geographic area or type of client (households with elderly people, for example). Since fiscal year 1996, REACH funding has ranged from $5.5 million to $6.8 million annually, while the total funding for LIHEAP has ranged from $1.2 billion to $2.1 billion. Over the course of the REACH program, its funding has averaged about one-half of 1 percent of total LIHEAP funds. The legislation establishing REACH provided for funding it from LIHEAP’s incentive program for leveraging nonfederal resources.
This incentive program provides additional monies to states that, in the previous year, obtained additional assistance for low-income households’ energy needs from such sources as state funds, utility companies, and private charities. The REACH legislation provides that, for each fiscal year, the Secretary of HHS may allocate up to 25 percent of the funding for this incentive program to the REACH program, and HHS has allocated approximately this amount each year since the REACH program was first funded in fiscal year 1996. Through the REACH program, OCS awards grants to states and to tribes, tribal organizations, and insular areas (generally referred to in this report as “tribal organizations”). OCS issues annual requests for grant proposals and then awards grants on a competitive basis. On average, about one-third of the applicants from fiscal year 1996 through fiscal year 2000 received a REACH grant. OCS’ Division of Community Demonstration Programs manages the REACH program. State and tribal organization grants differ in a few ways. OCS has established a 3-year time frame for states to complete their projects, in part to allow time for states to contract with the community-based organizations that carry out the states’ REACH projects. Tribal projects have a 17-month time frame if the tribal organization carries out the project itself, or a 3-year time frame if the tribal organization chooses to use a community-based organization. Unlike states, tribal organizations are not required to conduct evaluations on the effectiveness of their project approaches, but tribal organizations do submit final reports on their projects to OCS. In addition, OCS has established differing maximum grant amounts for state and tribal grants. 
According to the director of OCS’ Division of Community Demonstration Programs, these caps on grant amounts reflect the differing amounts that states and tribal organizations receive in their LIHEAP allotments and resulted from focus meetings with tribal representatives held by OCS’ LIHEAP staff prior to the first REACH request for grant proposals. OCS’ annual requests for grant proposals, referred to as program announcements, provide criteria for program eligibility, proposals, selection of projects to fund, and states’ evaluations of their projects. Only states and tribal organizations that receive LIHEAP grants are eligible to participate in REACH, and the households that may receive REACH project services are those eligible for LIHEAP assistance. In its program announcements for fiscal years 1996 through 2000, OCS required states’ and tribal organizations’ proposals for REACH grants to address the following elements: the organizational experience and capability of the community-based or tribal organization(s) conducting the project; project staff skills and responsibilities; state-level management and organization of the project; project theory and design, including the target population and needs to be addressed, activities, expected outcomes, goals, and work plan; budget appropriateness and justification; expected beneficial impact; “holistic” strategies addressing the economic and social barriers to self-sufficiency, and project innovations (required for state proposals only); community empowerment of areas characterized by severe poverty, high unemployment, or other indicators of socioeconomic distress (required for state proposals only); and evaluation of the project. OCS uses the above proposal elements in its criteria for selecting which projects to fund. OCS convenes panels to review and score the grant proposals against the criteria; state and tribal organization proposals are assessed separately.
Factors in addition to the scoring may be used in making selections, such as geographic distribution and OCS’ past experiences with the applicants. States’ evaluations of their projects are to address how effectively each project was implemented, and whether and why the expected results and goals were or were not achieved. OCS requires states to use third-party evaluators to conduct the evaluations. Such evaluators are individuals or firms that are organizationally distinct from the state agency or community-based organizations involved in the REACH project. Through the end of fiscal year 2000, OCS had awarded $30 million in REACH grants to states and tribal organizations for 54 projects (29 state and 25 tribal projects) to address the home heating and cooling needs of low-income households using a variety of approaches. The most commonly used approaches have been energy efficiency education, home energy audits, home weatherization, and budget counseling, while innovative approaches have included forming consumer cooperatives for energy purchasing and using solar and/or wind power. Several projects have included activities not directly related to home energy—for example, job skill or employment development services and financial assistance toward overdue rent or mortgage payments. After its REACH grant period is over, a state or tribal organization may wish to continue using the approaches tested in the REACH project. Some activities could be replicated in a state’s or tribal organization’s LIHEAP program, if the state or tribal organization chose to do so. OCS has awarded REACH grants for 29 state and 25 tribal projects. Table 1 below shows REACH grant funding and numbers of grants by year and type of grant (state or tribal organization). 
In addition to the grant funding shown in the table, HHS has provided a total of $1.1 million to states for their costs in administering the grants, overseeing the community-based organizations that carry out REACH projects, and contracting for the third-party evaluations. (Tribal organizations have not received such funding because, to date, the tribal organizations have chosen to carry out their REACH projects themselves and are not required to contract for third-party evaluations.) Including both the grant funding and administrative funding for states, HHS provided $31 million in REACH program funds in fiscal years 1996 through 2000. Tables 2 and 3 below list the grant awards made to states and tribal organizations, respectively, in fiscal years 1996 through 2000. Four states—Michigan, Nebraska, Oregon, and Pennsylvania—have received more than one grant, for separate projects. Similarly, five tribal organizations and one insular area have received more than one grant. State grants ranged in amount from $166,667 to $1.6 million and averaged $931,108. Tribal organization grants ranged in amount from $50,000 to $199,276 and averaged $118,051. The activities conducted under the demonstration grants have varied. Almost all of the REACH projects included multiple activities and addressed more than one of the following areas: reducing energy use for home heating and cooling by increasing efficiency; helping clients pay for energy bills through budget planning, consumer education, consumer cooperatives, and other methods; reducing the use of other utilities (water and electric lighting); and providing social services not directly related to home heating and cooling needs. Appendix I provides further details about the types of activities included in REACH project plans and the number of projects conducting each type of activity. The most common project activity has been energy efficiency education, which was included in 45 of 53 projects.
This education has been provided through group workshops for the projects’ clients or through in-home counseling of individual households. Some grantees have also developed energy educational materials, such as pamphlets or videos, some of which were designed for children. Other activities aimed at reducing energy use have included home energy audits and weatherization. Home energy audits were used by 25 REACH projects, often in conjunction with weatherization of the home or education in how to reduce energy use. In a home energy audit, a detailed inspection of the home is carried out to identify repair and weatherization needs and practices of the household that result in inefficient energy use. For example, doors and windows are inspected for drafts, attics for insulation needs, and furnaces for maintenance needs. Families may be advised of energy-saving practices such as turning down their thermostats at night or using fans to reduce the need for air conditioning. Twenty REACH projects have provided home weatherization work by construction contractors. While the Department of Energy’s (DOE) Weatherization Assistance Program helps many low-income families, some homes need repairs that exceed the per-house dollar limit of the DOE program. The REACH projects that included weatherization typically went beyond the services provided by the DOE program or served clients who had not been addressed by the DOE program. Some REACH projects worked in conjunction with DOE’s program. In addition, 20 REACH projects provided clients with do-it-yourself weatherization kits and training in how to install the weatherization measures. The kits included items such as caulk, weather-stripping, and plastic coating for windows. Budget counseling was provided in 30 of the REACH grant projects. 
Such counseling helps clients to plan ahead for paying for their essential needs, including energy bills; to identify areas where they could cut costs; and to become better informed about credit practices. Budget planning can help low-income households avoid utility cutoffs due to unpaid bills, which are followed by the need to pay reconnection fees and past-due bills. Utility cutoffs are not uncommon for low-income households. According to OCS data, during the 1992-93 heating season, 1 million LIHEAP-eligible households (3.3 percent of such households) reported that they were unable to use their main source of heat for 2 hours or more because they were unable to pay for their main heating fuel. Sixteen REACH projects also negotiated with energy vendors on behalf of their clients to obtain payment plans or forgiveness of past-due bills, and four projects provided funds to help pay past-due bills and/or reconnection fees. Seven projects provided consumer education on utility deregulation and the consumer choices provided by deregulation. Some REACH projects addressed the use of utilities for purposes other than heating and cooling. Thirteen projects provided energy-efficient light bulbs. Energy-efficient lighting was the only focus of the two grants received by American Samoa because the Samoan government determined that this was the one measure most likely to reduce the high cost of electricity for their low-income households. Six projects addressed water conservation in the home by providing devices such as low-flow showerheads or repairing plumbing. Such plumbing measures also reduce energy usage for hot water heaters. Some REACH projects included social services not directly related to home energy needs. Seventeen of the REACH projects provided social services through case management. 
For example, Nebraska’s fiscal year 1996 grant project included caseworker visits to help families identify and address their concerns in areas such as improving job prospects, eating nutritional food, and obtaining health services. Only Indiana’s project provided funds to help clients pay past-due rent or mortgage payments. Six projects included job skill or employment development services. For example, Indiana’s fiscal year 1997 grant included the use of REACH funds for job skill training, transportation to work, and day care; $258,000 of Indiana’s planned project budget addressed such services to help clients move from welfare to work. In many states, services that support the transition of welfare recipients to the workforce—such as job training, transportation, and child care—are provided by the much larger Temporary Assistance for Needy Families program ($22.6 billion in federal and state funding in fiscal year 1999).

“OCS is interested in having Applicants approach the energy needs of low-income families within a holistic context of the economic, social, physical, and environmental barriers to self-sufficiency. Thus applicants should include in their REACH Plan an explanation of how the proposed project(s) will be integrated with and support other anti-poverty or development strategies within the target community or communities.… Thus REACH initiatives are expected to be closely coordinated with other public and private sector programs involved with community revitalization, housing rehabilitation and weatherization, and family development.”

Similar language has been included in each program announcement requesting REACH grant proposals. Furthermore, the criteria used to select grant proposals to fund have included holistic program strategies and project innovations. REACH grant proposals are scored on a number of criteria—such as the organization’s experience and the project strategy and design—and each criterion has a maximum number of possible points.
Out of a total of 100 possible points, state proposals for fiscal years 1996 through 2000 could receive up to 10 points in the selection process for the criterion “holistic program strategies and project innovations.” When asked about this criterion and the non-energy-related activities, REACH program officials said that the criterion might need to be clarified and that they also had some concerns about the emphasis on nonenergy activities in a few projects. However, they also noted that the legislation sets broad purposes for REACH, including preventing homelessness and minimizing health and safety risks resulting from high energy burdens on low-income households. Because energy burden is defined as home energy expenditures divided by household income, the officials stated that some grantees had chosen to address increasing clients’ incomes. Some REACH projects have involved innovative activities. Examples of innovative activities include the installation of solar and/or wind power for low-income households in two tribal projects: the Grand Traverse Band of Ottawa and Chippewa Indians (fiscal year 2000 grant) and the Cherokee Nation (fiscal year 1997 grant). Another innovative activity used in three state projects was the formation of consumer cooperatives to reduce the cost of energy to low-income households. Vermont’s fiscal year 1997 project formed a cooperative and purchased a home heating oil company. The other two state projects—New York’s and Connecticut’s—are forming cooperatives to purchase energy from deregulated utilities, allowing low-income households to aggregate their purchasing power and negotiate lower prices. Pennsylvania’s fiscal year 2000 project is addressing cooling needs in urban areas, following a number of heat-related deaths among elderly residents, particularly in urban row houses.
The project is both introducing an innovative use of heat-reflective coatings on roofs of inner city row houses and providing fans and safety devices that permit windows to be locked in both open and closed positions. REACH grantees have also developed innovative ways of working with other organizations and delivering services. The project of the Central Council of the Tlingit and Haida tribes in Alaska trained AmeriCorps personnel to provide energy efficiency education. Because these personnel remain in the Alaskan villages, the project proposal stated that the impact of the services would continue beyond the completion of the grant period. Nevada’s project for its fiscal year 2000 grant is using the concept of individual development accounts to provide clients with matching funds for their savings. The REACH clients’ savings accounts are to be used for replacing the furnace or appliances, changing to a lower cost fuel, or paying higher winter utility bills. Kentucky’s project used volunteers from local organizations to install weatherization kit materials in the homes of disabled and elderly clients who could not do the work themselves. After REACH projects are completed, some project activities may be replicated in a state’s or tribal organization’s LIHEAP program, if the state or tribal organization chooses to do so. (The state or tribal organization may also choose to use its own funding to continue the activities.) The legislation authorizing LIHEAP allows states to spend up to 5 percent of their allotted LIHEAP funds for services that encourage and enable households to reduce their home energy needs and therefore their need for energy assistance. According to LIHEAP program officials, such services may include energy efficiency education, budget counseling, and assistance in obtaining discounts or payment plans from energy vendors. 
According to states’ fiscal year 2001 plans, 25 of 50 states and the District of Columbia planned to use a portion of their LIHEAP funds for energy use reduction services, and their planned services included energy education, energy needs assessment, liaison with energy vendors, budget counseling, and case worker services for clients. The legislation also allows states to spend up to 15 percent of their allotted LIHEAP funds for low-cost weatherization or other energy-related home repair for low-income households. States may also apply to HHS for waivers to use up to 25 percent of their allotment for this purpose. Forty-four of 50 states and the District of Columbia planned to provide weatherization in fiscal year 2001. Several types of activities used in REACH projects may not be covered by these provisions of LIHEAP, such as forming or expanding energy consumer cooperatives, installing solar and wind power units, providing efficient light bulbs, and providing matching funds for clients’ savings to be used for energy purposes. It is uncertain whether states may use LIHEAP funds if they wish to continue or expand such activities after the conclusion of their REACH project. LIHEAP officials noted that because LIHEAP is a block grant program, states decide how to design their programs and interpret the statutory provisions regarding energy use reduction services and weatherization. Furthermore, LIHEAP’s funding caps in these areas, as well as the need among low-income households for direct assistance with energy bills, would determine the extent to which the activities tested in a REACH project could be provided more broadly to a state’s or tribal organization’s LIHEAP clients. The legislation authorizing REACH identifies performance goals to be used by individual REACH projects. For the program as a whole, however, HHS has not developed performance goals and measurable indicators that define the results that it expects to achieve. 
Furthermore, HHS’ performance plans do not address how the REACH program relates to the larger LIHEAP program of which it is a part. The legislation authorizing the REACH program requires that REACH project plans describe performance goals for each project, which are to include (1) a reduction in the energy costs of participating households over 1 or more years; (2) an increase in the regularity of home energy bill payments by eligible households; and (3) an increase in energy vendors’ (such as utility companies’) contributions toward reducing the energy burdens of eligible households. The legislation further requires that project plans include a description of the indicators that each state will use to measure whether the performance goals have been achieved. For example, in addressing the performance goal of more regular energy bill payments, Oregon’s proposal for its fiscal year 1999 grant specified the following indicator: 50 percent of households with past-due energy bills will reduce their arrearages by 40 percent after 12 months of participation. In addition, OCS requires states to address how results will be measured in their plans for evaluating their projects. Despite the program’s use of project-level performance goals and indicators, the REACH program as a whole lacks performance goals and measurable indicators. According to a REACH official, the Administration for Children and Families decided not to address the REACH program in its performance plan, which was developed to meet the requirements of the Government Performance and Results Act of 1993. While agencies’ performance plans do not have to include the complete array of goals and measures used in managing individual programs, the development and use of performance goals and indicators can be beneficial to any federal program.
Specifically, performance goals and measurable indicators can help a program to (1) focus its efforts on achieving results and on those activities most closely linked to program goals, (2) provide a clearer basis for selecting projects to fund, (3) provide a basis for determining how well the program is performing and what has been achieved in return for the resources invested in it, and (4) facilitate reporting to the Congress and the public on the program’s performance. For the REACH program in particular, performance goals and measurable indicators could address two needs: They could both provide a basis for grant selection and enhance effectiveness in carrying out federal roles. First, as described above, OCS has selected several grant proposals that used a substantial portion of their grant funding for activities that were not directly related to home energy needs, such as job skill or employment development services. The use of performance goals could help OCS to better target grant selection to projects that are closely linked to program goals. Second, the federal roles in the REACH program are not fully addressed by the project-level goals. For example, the federal role in the REACH program currently includes selecting grantees, providing program guidance, helping to strengthen project design and states’ evaluations of their projects, and providing information to grantees and others involved in providing energy assistance. The effectiveness of federal efforts in the program could be enhanced by results-oriented performance measures addressing these roles. Furthermore, the performance plan for the Administration for Children and Families (which includes OCS) does not address the relationship between REACH and LIHEAP, or between REACH and other related federal programs, such as DOE’s Weatherization Assistance Program.
Performance plans can be useful tools for identifying the need for coordination among programs, and for ensuring that the goals of related programs are congruent and that crosscutting efforts are mutually reinforcing. In the case of REACH, HHS could use this performance plan to articulate REACH’s purpose in demonstrating and providing incentives for LIHEAP grantees to try new approaches. Without HHS’ addressing the relationship between REACH and LIHEAP, in particular, it is unclear whether HHS expects the REACH program to have a broader impact on the activities of LIHEAP grantees, beyond a particular REACH grant project and its limited time frame. The six states that received funding in fiscal year 1996 have prepared evaluation reports: California, Maryland, Massachusetts, Michigan, Nebraska, and Oregon. However, only Nebraska’s evaluation report, with some qualification, fairly reflects the REACH project’s effects on energy use. The remaining five evaluation reports have substantial design and implementation shortcomings that compromise the validity of the reports’ findings. In addition, all six evaluation reports have other shortcomings that preclude an overall assessment of the projects’ effectiveness. (Appendixes II through VII summarize each state’s evaluation report, including its goals and measures, project assumptions, evaluation design, services provided, participant and control group selection, evaluation findings, reported limitations, and GAO’s observations on analytical problems and other shortcomings.) OCS is aware of the shortcomings in these initial evaluations and has been taking several steps to improve future project evaluations. The design and implementation of Nebraska’s fiscal year 1996 project allowed statistically valid conclusions to be made in the state’s evaluation report about the effects of project services on participant energy use. 
The project, which was carried out by a community-based organization, focused on changing the behavior of low-income households to achieve economic self-sufficiency through decreased energy use. This approach was based on the assumption that most of the target population were renters who frequently moved and that the housing available to them was generally substandard. It was assumed that the only way to reduce energy use was to change household behavior through teaching energy-efficient practices. The evaluation report noted that REACH participants significantly decreased their use of natural gas from preproject to postproject compared with a control group of nonparticipants, whose consumption showed no statistical change. For electricity, REACH participants showed, on average, no change in use, while nonparticipants increased their use significantly over the course of the project. The evaluation report attributed the successful implementation of the project design and the collection of sufficient data to the community-based organization’s experience, expertise, and knowledge of the target population, as well as to the early involvement of the evaluator in the project’s design. Cash incentives provided to the test and control groups also appeared to help limit participant attrition (leaving the project before its completion). In addition, administrative controls designed to ensure access to utility bills seem to have played a role in obtaining sufficient data for analysis. The other five states’ REACH projects experienced design and implementation constraints that decreased the confidence that can be placed in the findings contained in their evaluation reports. For example, Massachusetts and Michigan did not use control groups as a means of assessing whether project participants fared differently from similar households that did not receive project benefits.
California and Maryland experienced difficulties in collecting complete income or utility data and experienced client attrition at rates that call into question the likelihood that project effects could be assessed. To some extent, these state REACH projects faced common challenges: (1) forming an adequate control/comparison group against which to compare project outcomes; (2) maintaining client participation in project activities; (3) collecting complete, accurate, and reliable data (such as client income and utility bills); and (4) adjusting (“normalizing”) energy consumption data for changes in weather. A properly formed control group, composed of people who did not receive REACH project services but who are similar in other respects, allows a comparison of what might have happened in the absence of the project. Project designs that lack a control group can demonstrate that changes occur in such factors as energy use, but the changes cannot be attributed to the project since other factors might be responsible for the changes. Any preexisting differences between a test group and a control group could also be the cause of any observed project effects and, therefore, should be addressed in the project’s design. Attrition of participants can reduce the amount of data available for analysis and introduce biased results. Attrition bias can occur if those remaining in the project systematically differ from those who dropped out in ways that are likely to affect the outcome of the project. Lack of complete, reliable, and accurate data will result in imprecise or biased results. Data that are not collected for an entire heating or cooling season, for example, can lead to faulty assessments of typical energy use. Failure to normalize energy use data makes it difficult to determine the extent to which changes in energy use result from project activities or from changes in weather that affect the need to heat or cool a residence. 
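One common way to make such a weather adjustment is to scale metered use by heating degree days (HDD), converting each season’s consumption to what it would have been in a long-term “normal” season. The sketch below is illustrative only; the simple proportional method and all figures are assumptions, not drawn from any REACH evaluation:

```python
def normalize_usage(usage, actual_hdd, normal_hdd):
    """Scale metered energy use from the actual season's heating degree days
    (HDD) to a long-term 'normal' season, so that two periods with different
    weather can be compared on an equivalent basis."""
    return usage * (normal_hdd / actual_hdd)

# Hypothetical household: raw use fell only slightly (800 -> 780 therms),
# but the postproject winter was much colder (5,200 HDD vs. 4,500).
pre = normalize_usage(800, 4500, 5000)   # mild preproject winter
post = normalize_usage(780, 5200, 5000)  # cold postproject winter
print(round(pre, 1), round(post, 1))     # -> 888.9 750.0
```

Once both seasons are expressed on the same 5,000-HDD basis, the comparison shows a real reduction in use that the raw figures understate; conversely, without such an adjustment, a mild postproject winter could masquerade as project savings.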
For example, if the postproject data on energy use reflect a warmer winter than the preproject data, a valid comparison of energy use for these two periods should adjust the data to help determine if perceived energy savings were due to warmer winter temperatures or to project activities. In addition, other shortcomings precluded an overall assessment of the projects’ effectiveness. None of the evaluation reports addressed one of the three project-level performance goals stated in REACH legislation and program announcements: increasing energy suppliers’ contributions to reduce the energy burdens of eligible households. In addition, although the REACH program announcements have stated that an objective of every state project plan should be to measure whether its activities are more cost-effective in the long term than energy assistance payments alone, none of the project evaluation reports provided such an analysis. Finally, most of the evaluation reports did not report lessons learned or best practices that could be valuable to other projects. For example, a discussion of the strategies a project used to collect energy use data successfully could have been useful to other projects, since a lack of such data was often cited in evaluation reports as a principal reason for not being able to measure project effectiveness. The REACH program is aware of the challenges noted above and has been taking several steps to improve future project evaluations, including providing guidance and technical assistance. First, OCS has developed guidance for project planning and evaluation in its demonstration programs, which include the REACH program. According to REACH officials, this guidance was first given to REACH grantees in 1997 or 1998.
While this was too late to help the fiscal year 1996 grantees plan their evaluations and related data needs, the guidance addresses many of the evaluation limitations noted above and, if followed by state grantees, could help to improve future evaluations. For example, the guidance discusses (1) selecting an evaluator, including indicators of a good evaluator and questions to ask in an interview; (2) using a logic model (described below) to design and evaluate a project; (3) addressing challenges to gathering data on project participants, such as their reluctance to provide income data; (4) using comparison groups that are as similar as possible to the participant group and over-recruiting for comparison groups to allow for attrition; and (5) developing evaluation reports that are clear and complete. The guidance was revised in 2000 to increase its emphasis on the logic model, according to REACH program officials. A logic model identifies underlying assumptions of the project, project activities (such as energy efficiency education), immediate and intermediate outcomes expected from the activities (such as improved understanding of behaviors that can affect energy use in the home), and final project goals (such as a reduction in energy use). The REACH program encourages grant applicants to use a logic model in their applications and project planning. Second, the REACH program provides technical assistance to grantees in their project and evaluation planning. States that receive grants are required to submit evaluation plans, which are reviewed by REACH program officials and the consulting firm that is providing technical assistance to the program. The REACH officials and consultant then discuss with the project team any improvements that are needed, and the grantee submits a revised evaluation plan. Once an acceptable evaluation plan is completed, the REACH program sends a letter to indicate approval of the plan.
According to officials of the REACH program’s consulting firm, they emphasize improving the logic model to strengthen the projects’ designs and their ability to measure whether expected outcomes and goals are achieved by project activities. REACH officials also review draft evaluation reports and may ask states to revise and improve them. Third, guidance and assistance with evaluations have also been provided through program conferences. The REACH program held conferences for evaluators in July 2000 and July 2001. At the July 2000 evaluators’ conference, the participants discussed logic models, common issues in gathering and analyzing data, evaluation methods, and lessons learned. Developing quality evaluations has also been discussed at the annual REACH program conferences, and evaluators as well as representatives of states, tribal organizations, and community-based organizations may participate in the conference. For example, the REACH conference held in January 2001 included a presentation and discussion on developing logic models, indicators, and evaluation plans. In addition, as the state grantees reported on their completed fiscal year 1997 projects, the conference participants discussed many specific evaluation issues. While the REACH program and its consultants can assist grantees in planning their evaluations, conducting the evaluation is a responsibility shared among the states, the community-based organizations with whom the states contract, and third-party evaluators. State officials monitor and oversee the projects and the evaluation reports. Staff of the community-based organizations are responsible for collecting the data needed to complete the evaluation. The staff of the community-based organizations may need training to understand the importance of data gathering and to carry out this task effectively. Third-party evaluators help to design the evaluation, analyze data, and prepare the evaluation report.
Some of the problems we noted in the six completed evaluations related to reporting, such as not clearly explaining methods of analysis or data limitations. For future REACH evaluations to improve, the states, community-based organizations, and evaluators—as well as the REACH program and its technical assistance consultants—must carry out their functions well. OCS has not yet planned how it can best communicate information to state officials and others about the results of REACH projects, such as what approaches prove to be the most successful in meeting the home energy needs of low-income households. As the REACH program proceeds and state grantees complete additional project evaluations, more information will become available about which approaches demonstrated in REACH projects were successful and which were less successful. In addition to evaluating the results of their projects, grantees report on the processes and procedures they used to carry them out. Completed REACH projects will provide information and tools such as pamphlets, videos, and other materials for energy efficiency education; forms for collecting data from clients; and experiences and lessons learned in such areas as how to leverage contributions from energy vendors and how to coordinate among various organizations involved in energy assistance. A comprehensive communications plan would help ensure that this information is put to good use by identifying what information to communicate, to what audiences, and by what methods. A communications plan would also estimate the amount of funding needed for communications. REACH program officials realize that publications such as summaries of successful approaches could provide information in a form more readily accessible than the individual evaluation reports. 
Other OCS programs have developed summary publications of best practices and have found them to be frequently requested, according to the director of OCS’ Division of Community Demonstration Programs. For instance, one publication provided lessons learned from 8 years of OCS demonstration programs (not including the REACH program, which had not yet begun at the time). As the REACH program matures and more information becomes available, OCS will have a better basis for identifying and summarizing best practices. For instance, information will become available from six project evaluations that are due during the fall of 2001 (from the state grants awarded in fiscal year 1997), and by the end of 2002, a total of 19 state projects will probably have completed their evaluations. To date, the REACH program’s communications efforts have included developing a Web site and providing information through conferences. According to program officials, the REACH Web site, located within OCS’ Web site for its demonstration programs, is expected to become available in the summer of 2001. Currently, a pilot version of the REACH Web site is available at the Web site of the REACH program’s consulting firm. According to REACH program officials, the OCS Web site will provide the same types of information as the pilot Web site. The pilot REACH Web site includes the current OCS program announcement requesting proposals for REACH grants, summaries of past and ongoing REACH projects, summaries of the REACH program conferences, and listings of contact points for grants and evaluations. Conferences are also used to communicate project results and lessons learned. The REACH program hosts an annual conference for certain people involved with REACH projects. 
According to the REACH program announcements, state grantees are expected to fund travel by the state project directors, community-based organization project directors, and chief evaluators to the annual conference in each of the 3 years of the project. Tribal organizations, which have a shorter project term, are to fund travel to one conference. At the conferences, grantees that are finishing their projects make presentations about their project approaches and results. REACH program officials also provide information geared to new grantees about program and reporting requirements. The conference has also included topical presentations; for example, the conference in January 2001 included presentations on the characteristics of good project evaluation methods, solar power, and designs for low-energy-usage homes. According to REACH officials, they and their consulting firm have also arranged for presentations on REACH projects at the annual LIHEAP conference and other energy-related conferences. Grantees also have responsibilities for communicating about their REACH projects. OCS requires that states’ REACH project plans address disseminating results of the individual projects among LIHEAP grantees, utility companies, and others interested in increasing the self-sufficiency of the poor. States are allowed to budget up to $5,000 of each grant for dissemination purposes. Tribal organizations are allowed to budget up to $1,000 for dissemination purposes. Such state and tribal efforts are important, but they address only individual projects, not the broader compilation of learning from a number of projects over a period of years. Because the REACH program lacks performance goals and measurable indicators, HHS cannot assess the program’s overall effectiveness; in addition, HHS has not defined the relationship between REACH and its parent LIHEAP program.
Considering the recent rise in home heating and cooling costs, the REACH program’s role in testing approaches to help low-income families to meet their home energy needs is an important one and should be clearly articulated. We believe that the development of performance goals and measurable indicators could provide the Congress with better information about what has been accomplished for the resources expended. Furthermore, performance goals could provide HHS’ Office of Community Services with a clearer basis for selecting grant proposals to fund. In addition, by addressing the relationship between LIHEAP and REACH in its performance plan, HHS’ Administration for Children and Families could clarify the role of the REACH program and whether it expects REACH to have an effect on the activities of LIHEAP grantees, beyond a particular REACH grant and its limited time frame. Some of the REACH projects funded to date have included activities other than addressing clients’ home heating, home cooling, and energy payment needs, such as job skill or employment development services. The REACH program’s requests for proposals with “holistic approaches” may have been misconstrued by grant applicants and proposal reviewers as allowing projects to use REACH funds for non-energy-related activities. However, with only about $6 million in funding annually, we question whether non-energy-related activities—including some activities typically addressed through other, much larger social service programs—are an effective use of limited REACH program funds. While the evaluations conducted on the first year of state grants have many shortcomings that limit their usefulness in assessing project effectiveness, the REACH program recognizes the problems and has been taking steps to help improve future project evaluations. It is too soon to gauge the effectiveness of these efforts, but we noted some improvements in states’ plans for the next set of evaluations.
We encourage the REACH program to continue its efforts to improve the design and methodology of evaluations because valid evaluations are vital to realizing the potential of the REACH program in testing new approaches. With only six project evaluations completed, there is currently not enough information to reach a conclusion about the effectiveness of the REACH program. By the end of 2002, a total of 19 state projects will probably have completed their evaluations, and OCS will have awarded a total of about 80 grants, roughly half of them to states, which will eventually report on project results. While the legislation authorizing the REACH program requires this GAO review, it does not require HHS to report to the Congress as more information on project results becomes available. Developing program performance goals and measurable indicators and obtaining better data on the REACH program’s effectiveness would enable HHS to provide the Congress the information that it needs to assess the program. Furthermore, the legislation did not specify an ending date for the REACH program, as is sometimes the case for demonstration programs. It may be appropriate to reassess the REACH program during 2003 and consider whether and how long it should continue. Finally, the lack of a comprehensive plan for communicating the results of REACH projects and fostering the further use of effective approaches could limit the impact of the REACH program. Without well-planned and adequately funded communications, the results of REACH projects may fail to have an impact on LIHEAP and other programs that provide energy assistance to low-income families. Summaries reporting on best practices could be more effective as communication tools than the state evaluation reports themselves for reaching the state, federal, and community officials involved in addressing the home energy needs of low-income households. 
To the extent that OCS shares such information among the state, tribal, and community-based organizations that provide energy assistance, the organizations can make better-informed decisions about replicating successful approaches and avoiding problematic ones. Furthermore, well-informed organizations can avoid “reinventing the wheel” by, for instance, not investing in developing energy education materials that may already be available. In light of additional project evaluations and other information that will become available over the next several years, OCS needs to plan for its communications efforts. To better target the use of the limited resources of the REACH program and provide for reporting on program performance, we recommend that the Secretary of HHS direct the Administration for Children and Families and its Office of Community Services to (1) develop program performance goals for REACH that are objective, measurable, and quantifiable; (2) address the relationship between the REACH and LIHEAP programs in its performance plan; and (3) ensure that REACH funds are used for activities directly related to the home energy (heating and cooling) needs of low-income households. To ensure that the results of REACH projects are effectively communicated to the government agencies and private organizations involved in addressing low-income households’ energy needs, we recommend that the Secretary direct the Office of Community Services to develop a communication plan for the REACH program describing intended audiences, types of information to be communicated, communication methods appropriate to the intended audiences, and the funding needed. With HHS developing information for performance reporting and obtaining additional project evaluations, the Congress may want to consider requiring HHS to report on REACH program effectiveness and project results in several years, after the projects funded in the first 3 years of the program have completed their evaluations by the end of 2002.
Once HHS has reported, the Congress may also wish to consider whether the REACH program should continue indefinitely or whether the program should have an end date after a sufficient number of demonstration projects. The Department of Health and Human Services provided written comments on a draft of this report. These comments are reprinted in appendix VIII, along with our responses. HHS generally agreed with our recommendations. However, while HHS agreed with our third recommendation that it ensure that REACH funds are used for activities directly related to the home energy needs of low-income households, it made two comments related to this recommendation. First, HHS disagreed with our assessment that language in its program announcements on holistic program strategies may have been misconstrued to encourage non-energy-related activities. We continue to believe that HHS should review the language of its program announcement, as it plans to do, to ensure that the REACH program does not fund activities that are not directly related to the home energy needs of low-income households. The language in the program announcement is used as criteria for reviewing and selecting grant proposals, as well as for providing guidance to applicants for grants. Our recommendation concerns only REACH program funds; grantees would not be precluded from using other sources of funds for such activities or from coordinating with other social service efforts. Second, HHS stated that the term “residential energy” is understood to include all household energy use, not just home heating and cooling. However, we note that the authorizing legislation does not define the term “residential energy.” Therefore, we believe that HHS should apply the definition of home energy in the authorizing legislation—namely, a source of heating or cooling in residential dwellings. We have changed the wording of our recommendation to make this more clear. 
HHS suggested that our matters for congressional consideration include the possibility of expanding the REACH program, as well as the options of continuing it or setting an end date for the program. We have not incorporated this comment because the effectiveness of the REACH program has not yet been determined and because we believe that HHS should be required to report to the Congress in 2003 on the REACH program's effectiveness. HHS also made a number of technical comments, which we have incorporated as appropriate. We reviewed all of the REACH grants that have been awarded since the program was first funded in fiscal year 1996. However, in reviewing project evaluations, we focused only on states' projects, because tribal organizations are not required to do project evaluations. We also reviewed documents about the LIHEAP program and interviewed LIHEAP officials in order to better understand the context and purpose of the REACH program. Specific actions that we took to accomplish each of our objectives are listed as follows:
- To obtain information about grant amounts, recipients, and project activities, we reviewed REACH program summary documents and grant proposals; interviewed state officials responsible for most of the state grants awarded in the first 2 years of the program; and attended the REACH program's annual conference, where we heard presentations by grantees.
- To obtain information about REACH program goals, objectives, and performance measures, we reviewed the authorizing legislation, HHS performance plans, requirements and guidance related to the Government Performance and Results Act of 1993, and REACH program announcements that request grant proposals. We also interviewed program officials.
- To analyze the results of state evaluations, we reviewed the project design and implementation of the six completed evaluations. We assessed the adequacy of key aspects of project design and implementation since these factors determine the confidence that can be placed in an evaluation's findings. By determining whether the evaluation reports contained critical methodological flaws, we ascertained whether the reported findings were so qualified as to preclude their use in assessing a REACH project's effectiveness. In addition, we summarized these evaluations, contacted the state officials responsible for these six projects to discuss the evaluations, and discussed efforts to improve evaluations with REACH program officials and the program's consultants.
- To identify the REACH program's communications efforts, we interviewed program officials and reviewed the material made available on the REACH program's pilot Web site and through REACH program conferences. We did not assess the communications efforts of REACH grantees.
We conducted our review from December 2000 through July 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of HHS, the Director of the Office of Management and Budget, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Other key contributors to this report are listed in appendix IX. Table 4 summarizes the activities planned for the 53 Residential Energy Assistance Challenge Option (REACH) projects that have been funded since the beginning of the program: 28 state and 25 tribal organization projects. (The fiscal year 1999 North Carolina grant is not included in the table; it was being redesigned at the time of our review because the state had not deregulated utilities as had been expected when the project was originally designed.) Most project plans included multiple activities.
Several activities used by only one or a few projects are not included below. The primary goal of the California REACH project was to help low-income households reduce their energy use by providing energy conservation and other services. Specific project goals follow:
- Dwellings would become more energy-efficient.
- Health and safety risks would be mitigated.
- Families would become knowledgeable about energy use and conservation.
- Families would make and keep energy conservation goals.
- Families would reach a stable level of energy consumption.
- Families would reduce their demand for Low-Income Home Energy Assistance Program (LIHEAP) payment assistance.
- Eligible recipients would enroll in utility-supported rate discount programs.
- Families that needed additional help in other areas would receive assistance.
- Families would become stable and develop problem-solving skills and coping abilities.
The evaluation report did not clearly identify major project assumptions, other than recognizing an expected participant attrition rate of 20 to 25 percent. To compensate for expected attrition, the project design reflected plans for increasing the participant base. The project was designed to determine which combinations of services (described below) were the most effective in reducing energy use of low-income households. The project was designed to use test group and control group comparisons. It was to use six test groups (one for each of six combinations of services), with corresponding subgroups located at three separate locations. Services were to be provided to REACH participants by four different community-based organizations. A seventh group—the control group—was not to receive services other than energy assistance payments. Utility bill data and other information were to be collected for all seven groups before and after project implementation. The results of the REACH groups were to be compared and evaluated.
The evaluation report identified five services to be provided by community-based organizations through case management: (1) basic and (2) enhanced energy conservation education, (3) basic and (4) enhanced home weatherizing, and (5) family intervention (including referral to other agencies when needed). The control group was to receive only energy assistance payments. In addition to the above services that would be evaluated, the community-based organizations were to provide a variety of other services, such as outreach, eligibility determinations, residential assessments, and family assessments. Eligible project participants were households that received energy assistance payments, had lived at the same residence for the past year, and were not expected to move in the current year. In addition, households were required to have metered energy—electricity, gas, or propane—for their individual residence and could not have received previous home weatherizing services. Project participation was further restricted to households that spoke limited English, had a high energy burden, received public assistance for children, and/or were unable to pay current utility bills. The evaluation report stated that participant eligibility was verified using the standard process for determining eligibility for energy assistance payments. The evaluation report indicated that households were to have been randomly selected and assigned to the various test and control groups on the basis of the services they were to receive. According to the evaluation report, there was no statistically significant reduction in the use or cost of energy among the test groups. The report found that the average monthly household energy use and cost actually increased slightly during the project. Overall, average monthly electrical and natural gas use was also found to have increased. The results of the individual test groups, however, were reported to be mixed.
For example, 6 groups reduced their electrical use, 7 groups reduced their natural gas use, and 10 groups reduced their total energy cost. The evaluation report also concluded that data were insufficient to determine whether the project had helped increase the regularity of utility bill payments. In addition, the evaluation report noted that according to the results of an energy awareness survey, participant knowledge of energy conservation measures had increased slightly. The report did not specifically assess the project's impact on the other eight project goals. The evaluation report identified data collection problems as a major issue that could affect conclusions about the project's outcomes. None of the community-based organizations provided all of the data needed for the evaluation. One community-based organization's work was not included in the evaluation because it did not collect any follow-up data. The information provided by the other three community-based organizations was incomplete because they either lost or did not completely collect the data. The evaluation report also noted that the project did not collect data on the regularity of utility bill payments, and the report provided no reason for this. The evaluation report attributed these data collection problems to a lack of controls over the data collection activities of the service providers—noting that procedures for data collection were limited, and the service providers had no incentives to collect follow-up data. The report also said that community-based organization staff turnover and the lack of training in data collection requirements contributed to the problem and recommended that in the future final award payments be withheld until all the data are provided.
In addition, the evaluation report noted that energy use data provided by the community-based organizations were not adjusted for changes in weather and concluded that this could have significantly limited the evaluation's findings and conclusions. The effectiveness of the California REACH project cannot be determined on the basis of the information presented in the project evaluation report because of limitations in the evaluation's implementation and analysis. The report also provided insufficient information to determine the validity of the findings and did not provide other information needed to assess the project's overall effectiveness. Several key issues compromised this REACH evaluation effort. For example, the evaluation did not compare the test groups with the control group as called for in the project design. Without a control group there is no assurance that changes found in the test groups did not result from factors other than the combination of services provided. Moreover, while the report noted that the project's impact on utility bill payments could not be assessed because of insufficient data, it did not identify or explain the effect of attrition and the loss of data on the statistical validity of its findings. The substantial household attrition that occurred and the resulting loss of data could have further compromised the validity of the project results because households that completed the project may differ from those that dropped out in ways likely to affect the outcome. In addition, as noted in the evaluation report, energy use data were not adjusted for weather conditions, thereby further compromising the statistical validity of the findings on energy use. For example, if temperatures were significantly colder in winter or warmer in summer, a corresponding increase in energy use would also be expected.
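The weather-adjustment problem described above can be illustrated with a small sketch. This is not the evaluator's method (the report performed no such adjustment); it shows one common approach, normalizing use by heating degree days, with invented figures:

```python
# Illustrative sketch only: the evaluation did not perform this adjustment,
# and all figures below are invented. Degree-day normalization is one common
# way to keep a colder winter from masking conservation savings.

def heating_degree_days(daily_mean_temps, base=65.0):
    """Sum of how far each day's mean temperature falls below the base (65 F)."""
    return sum(max(base - t, 0.0) for t in daily_mean_temps)

def use_per_degree_day(kwh, daily_mean_temps):
    """Energy use per heating degree day, comparable across periods."""
    hdd = heating_degree_days(daily_mean_temps)
    return kwh / hdd if hdd else float("nan")

# Hypothetical pre- and post-service periods: raw use rises (1,200 to
# 1,300 kWh) because the second period is colder, yet use per degree
# day falls.
pre = use_per_degree_day(1200, [30, 35, 40, 45])
post = use_per_degree_day(1300, [20, 25, 30, 35])
print(round(pre, 2), round(post, 2))  # prints: 10.91 8.67
```

Without such an adjustment, a mild winter can masquerade as a program effect, and a cold one can hide it.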
If this occurred during the period after energy conservation services were provided, these services could have been responsible for stabilizing household energy use or reducing the amount of energy that might otherwise have been consumed. Such a finding could not have been reported because the effects of changes in weather had not been taken into account when measuring results. In addition to not including all of the information necessary to assess the validity of its findings, the evaluation report did not provide other important information needed to assess the overall performance of the project. For example, the report did not address the project's performance regarding one of the project performance goals identified in the authorizing legislation and the Department of Health and Human Services (HHS) program announcement—increasing the contributions energy suppliers make to reducing the energy burden of eligible households. Finally, the evaluation report did not provide a complete discussion of the project design, including information on project assumptions, hypotheses to be tested in terms of measurable objectives, or lessons learned. The goals of the Maryland REACH project were to help participants (1) reduce their electric bills, (2) improve their budgeting skills, and (3) increase their education or work opportunities with the ultimate goal of preparing them for self-sufficiency once the project was completed. The evaluation report did not clearly identify major project assumptions. The project was designed to use two groups of low-income households—a test group and a control group—to assess the effect of the services provided. Data on participants in the test group were to be collected (1) from participant surveys—before and after the project—and (2) from utility statements for the year before the project, when available, and for the year during project implementation.
Participant data before project implementation were to be compared with participant data collected during project implementation so that the effect of the services could be evaluated. Utility bill statements were also to be obtained for the control group receiving financial assistance for energy bills, so that a comparison could be made between the test group and the control group. The evaluation report also noted that energy use data were to be compared by “seasons” to control for the effect of weather. Services were to be provided by a community-based organization using a case management approach. Each household was to receive four caseworker contacts. During the first contact the caseworker would perform an initial assessment of household energy use, income, education, and occupation. During the second contact, the family assessment was to be updated. The caseworker would also obtain an application for energy assistance from the household and provide energy conservation counseling. During additional contacts, the family assessment would continue to be updated and energy conservation counseling provided. Referrals to public assistance programs for help with rent or mortgage payments, utility bills, home weatherizing, or job training would be made when necessary. Other services would include a newsletter and workshops on energy conservation. (The newsletter was eventually cancelled because it was not being read, and the workshops were cancelled because of low attendance.) In addition, program participants were often referred to other agencies providing various kinds of assistance, and more than one referral could be made for each participant. There were 152 referrals made for a variety of social assistance services, including employment and education. Other referrals were made for home weatherization assistance (86), such as instruction on energy conservation; energy assistance payments (73); and mortgage or rent assistance (20). 
The local energy assistance office initially identified eligible households, and later participants were recruited from energy assistance applicants and referrals from other agencies. A questionnaire was used to establish eligibility. Over a 25-month period, a total of 124 participants entered the program and 92 participants were dropped from the program, primarily because they could not be contacted. The control group was selected from households receiving energy assistance payments. Its members did not receive the REACH project services noted above. The evaluation report noted that the size of the control group was adjusted monthly to compensate for changes in the number of project participants. The evaluation report compared participant pretest and posttest data on energy use and found that while households reduced their use of electricity, the overall reduction was not statistically significant. However, the report noted that a statistically significant reduction was seen in winter. The average monthly energy use was reported to have decreased by 61 kilowatt hours, from 1,275 kilowatt hours to 1,214 kilowatt hours. The average cost of household monthly utility bills was reported to have decreased by $16, from $137 to $121. The evaluation report also found that the use of electricity by the control group had increased by 55 kilowatt hours, and that the difference between the two groups was statistically significant. As a result, the evaluation report concluded that participating households were able to stabilize their use of electricity while the control group households were not. The report did not assign a dollar cost savings to the difference between the groups. Utility bills were also used to assess the impact of program services on arrearages. According to the evaluation report, the number of late payments and termination notices decreased for households participating in the project, but neither decrease was statistically significant. 
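The test-versus-control comparison of electricity use reported above can be framed as a simple difference-in-differences calculation. The sketch below uses only the average figures quoted in the evaluation report; the report does not state that it computed this quantity, and a proper significance test would require the household-level data, which the report does not reproduce.

```python
# Sketch of a difference-in-differences reading of the reported averages.
# The kWh figures come from the evaluation report as quoted above; framing
# them this way is our illustration, not the report's own calculation.

test_change = 1214 - 1275  # participating households: -61 kWh per month
control_change = 55        # control households: +55 kWh per month

# The gap between the two changes is the estimated effect of project
# services relative to receiving energy assistance payments alone.
effect = test_change - control_change
print(effect)  # prints: -116
```

This framing makes explicit why the control group matters: the raw 61 kWh reduction understates the contrast with households that received payments alone.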
Although the number of termination notices significantly increased for the control group, the number of actual terminations decreased, while the number of actual terminations increased for participating households. According to the evaluation report, this difference was statistically significant and suggested that the project goal of improving budgeting skills was not accomplished. The evaluation report also analyzed monthly utility bill payments and balances carried over from month to month to assess the ability of households to manage their personal finances. The evaluation report found a slight increase in electricity bill payments that was not statistically significant but found a reduction in monthly balances that was statistically significant. Neither difference was found to be significant when compared with the control group. On the basis of these findings, the evaluation report concluded that the project achieved the following:
- Households received $60,000 in assistance for energy use and rent or mortgage assistance.
- A total of 363 referrals were made to agencies providing assistance.
- There were indications that households reduced or stabilized their energy use.
- Households received fewer termination notices.
- Households reported less difficulty in paying utility bills and in conserving energy.
The report, however, also concluded that the project did not meet some expectations:
- Over half of the households entering the project were dropped because they could not be contacted later in the project.
- Newsletters and workshops were discontinued because of a lack of interest.
- Participants did not decrease the amount of arrearages in their utility bills.
- Terminations of utility service did not decrease.
According to the evaluation report, the attrition rate of households resulted in difficulty obtaining adequate information for the project evaluation.
In addition, survey responses were obtained in two ways: 29 households completed surveys, and 10 additional responses were obtained through telephone interviews. The effectiveness of the Maryland REACH project cannot be determined on the basis of information presented in the project evaluation report. The assessment has limitations that affect the validity of conclusions that can be made regarding reducing energy use and other goals. For example, the report does not present information on the comparability of the test and control groups. In addition, there was a lack of information on the number of months of utility data obtained for REACH and control group members. The report did not address the possible effects of participant attrition on the comparability of these groups. Moreover, energy use data were not adjusted for the effects of weather. The report also had other shortcomings that precluded an overall assessment of the project’s effectiveness. The report did not provide sufficient information on the test and control groups to allow an assessment of their comparability. Discussion of factors that could affect comparability of the test and control group was incomplete in several respects. For example, the report did not provide sufficient information on the number of months of utility bills available for both groups. Estimates based on data for a few months can be more easily influenced by factors unrelated to the project services than those based on data for an entire year. Although the report stated there was a statistically significant difference in energy use between the groups for the year, it did not discuss the possible reasons a statistically significant difference was not observed in winter when the control group also showed a decrease in energy use. 
The validity of the conclusions regarding energy use could also be compromised by attrition in the test and control groups because households completing the project may not be representative of the target population as a whole. The report, however, did not identify the effect of attrition and the resulting loss of data on the validity of its findings. In addition, energy use data were not adjusted for possible changes in the weather from preproject implementation to postproject completion. The decrease in winter energy use noted in the report could have occurred because of a relatively milder winter during the project. The seasonal comparison of the test and control groups actually showed no significant statistical difference between the groups during the project. The evaluation report did not provide other important information needed to assess the overall effectiveness of the project. For example, the report did not address the project's performance in terms of the performance goal stated in the authorizing legislation and the HHS program announcement—increasing energy supplier contributions to reducing the energy burden of eligible households. The evaluation report also did not provide a complete discussion of the project design, including information on the project's assumptions, hypotheses to be tested in terms of measurable objectives, or lessons learned. The overall goal of the Massachusetts REACH project was to reduce homelessness by fostering self-sufficiency. In that context, the grant application identified three main project goals: reducing household energy costs, increasing the regularity of household energy bill payments, and increasing contributions by energy suppliers toward reducing households' energy burden. The evaluation report also identified five objectives for the REACH project participants:
- Increase awareness of energy consumption.
- Increase awareness of the ability to shop for the least expensive energy supplier.
- Learn budgeting.
- Make regular payments to energy suppliers.
- Negotiate arrearage forgiveness with energy service providers.
The evaluation report did not—except in the calculation of energy burden—clearly identify major project assumptions. Assumptions related to the calculation of household energy burden were discussed in some detail (see below). The project was designed as a single group of REACH participants receiving a variety of services. The study stated that financial constraints prevented using a control group. Participant data were to be collected both before and after the project using a standard data collection instrument, questionnaires, or both. Utility bills were to be obtained from energy suppliers. The information collected would then be used to assess changes in household self-sufficiency, knowledge of energy use, the effect of specific interventions, and changes in energy burden by comparing pre-REACH data with post-REACH data. The project focused on assisting households through case management services. Caseworkers were to provide financial counseling and energy education and make referrals to other public assistance agencies for services dealing with home energy conservation, employment, training, day care, and improving language skills. Energy assistance payments were to be the only immediate financial assistance provided—although 148 of 164 households (over 90 percent) reported receiving other public assistance. Caseworkers also were to facilitate household access to an arrearage forgiveness program offered by a major utility company. The evaluation report noted that, during implementation, caseworkers provided intake to over 460 households that were either homeless or at risk of being homeless, and maintained contact with over 350 of these households. Project participants were selected from low-income households eligible to receive energy assistance payments, and caseworkers recruited clients from assistance programs for the homeless.
The report did not describe the selection procedure. As noted above, the project did not use a control group. According to the evaluation report, project results were difficult to quantify, and the true measure of success was the range of services provided in response to participants' needs. However, information gathered by case managers showed no change in self-sufficiency by project participants. The evaluation report addressed project performance in terms of the three main project goals. The evaluation report calculated the energy burden to households using a formula developed to measure energy costs, or debt, as a portion of income. On the basis of the results of the calculation, the evaluation report found that the energy burden—the proportion of household income represented by energy-related debt—decreased during the project from $345 (105 percent) to $252 (64 percent). However, the evaluation report noted that complete data—both pre- and postproject data for the same households—were available for only five project participants. The report also stressed that the calculation was based on the unlikely assumption that income would remain the same over time and on unreliable energy bill data. In addition, the evaluation report stated that, although the goal of reducing energy costs appeared to have been achieved, the lack of posttest follow-up data prevented a determination of whether the objectives of increased awareness of energy use and energy supplier options were accomplished.

Increasing the regularity of household energy bill payments
The evaluation report assumed that participant utility bills would be available to provide a record of payments. However, only electric bills were available, limiting payment analysis to the 22 households using electricity to heat their homes. Of these households, follow-up data existed for only two, according to the evaluation report, making assessment of the outcome impossible.
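The energy-burden formula discussed above (energy-related debt as a share of household income over the same period) can be sketched as follows. The debt figures are those quoted in the report; the income figures are approximations backed out from the reported percentages, not values from the evaluation itself.

```python
# Hypothetical sketch of the energy-burden calculation: energy-related debt
# expressed as a share of household income for the same period. Debt figures
# are those quoted in the report; the incomes are approximations implied by
# the reported percentages, not data from the evaluation.

def energy_burden(energy_debt, income):
    """Energy-related debt as a fraction of income."""
    return energy_debt / income

pre = energy_burden(345.0, 330.0)   # roughly 105 percent of income
post = energy_burden(252.0, 394.0)  # roughly 64 percent of income
print(f"{pre:.0%} -> {post:.0%}")   # prints: 105% -> 64%
```

As the report itself cautions, the ratio is only as good as its inputs: with income assumed constant and bill data incomplete, the computed change overstates what can actually be concluded.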
Increasing contributions by energy providers toward reducing the households' energy burden
The evaluation report included information provided by the project staff on the arrearage forgiveness program offered by a major public utility company to households receiving energy assistance payments and indicated that caseworkers processed 45 applications from project participants for this program. The evaluation report also identified lessons learned from experience with the project and from interviews with case managers. Lessons learned included the need for additional guidance from HHS on the conduct of evaluations, for involving evaluators in project design, for obtaining access to utility bills from utility companies, and for thorough training of project staff involved in data collection. The evaluation report noted that in many cases data were either unreliable or missing, making an assessment of project outcomes impossible. The report attributed this data problem to several factors, including the transient nature of participating households, lack of training for caseworkers, and failure to involve the evaluator early in the program design. The report also cited shortcomings in the assumptions used to calculate and analyze household energy burden that adversely affected the assessment of project performance. In addition, the report stated that data records were sometimes constructed by case managers according to their memory of events occurring weeks or even months earlier. The evaluation report also noted that a scale used to measure self-sufficiency was never tested for validity or reliability. The effectiveness of the Massachusetts REACH project cannot be determined using the information presented in the project evaluation report. Performance could not be assessed and reported because of the lack of data. Data were missing primarily because of attrition—participants leaving the project before completion.
In addition, the project used a pretest/posttest design that attempted to measure the impact of delivering a variety of services but did not include a control group. A comparison group is necessary to help ensure that the effects observed resulted from the services provided and not from other variables, and to evaluate the effectiveness of the services compared with energy assistance payments alone. The report had additional shortcomings that preclude an overall assessment of the project's effectiveness. To its credit, the evaluation report discussed project performance in terms of the performance goals set out in the authorizing legislation and the REACH program announcement. The report also discussed the difficulty encountered in measuring project performance because of data collection problems and described the limits on statistical analysis due to insufficient data. The report also identified issues relating to the calculation of energy burden that could adversely affect the analysis. In addition, the report provided a description of lessons learned that could benefit the design of future projects. Statistical estimates of project performance could not be made because of the lack of data. Problems with households leaving the project before completion, accompanied by the inability to obtain utility bills from all energy providers, resulted in insufficient information to assess the effect of services provided on either household energy use or the regularity of utility payments. Attrition raises the potential for biased results if those remaining in the project systematically differ from those who drop out in ways that are likely to affect the outcome. Concerns regarding data quality also arise from the construction of records by case managers on the basis of memory months after events occurred.
However, even if sufficient data had been available for statistical analysis, failure to include a control group in the project design would have limited conclusions regarding the effect of project services on household energy use and changes in the regularity of utility bill payments. When a control group is used, the credibility of the identified cause of observed outcomes is generally greater because external factors have been taken into account. Additional analytical problems identified in the evaluation report include assumptions implicit in the energy burden calculation and the use of a scale to measure changes in self-sufficiency that was not tested for either reliability or validity. These shortcomings further limit the findings or conclusions that can be drawn about the project. While the evaluation report clearly identified certain problems encountered in assessing performance and the resulting limitations of its findings, it did not provide other information needed to assess overall project performance. For example, the report did not address the project's performance in terms of the performance goal stated in the authorizing legislation and the HHS program announcement—increasing energy suppliers' contributions to reducing the energy burden of eligible households. The report noted that the information needed to assess this goal would be provided by the project staff in a separate report. The Michigan REACH project comprised two subprojects with different goals. One subproject's primary goal was the reduction of household energy use. The other subproject's primary goal was to educate households on energy deregulation, provide consumer advocacy, and explore the feasibility of bulk energy purchases.
The evaluation report identified eight performance measures that were to be used to evaluate the impact of services on households: reducing utility bills, increasing the regularity of utility bill payments, increasing earned income, reducing reliance on energy assistance programs, increasing knowledge of energy conservation methods, increasing understanding of utility bills, reducing energy-related safety problems, and maintaining or increasing the availability of affordable housing. The evaluation report did not clearly identify major project assumptions. The project was divided into two subprojects with different goals for different providers of education services. One subproject was to focus on improving energy conservation and developing life skills, such as budgeting. The other subproject was to focus on increasing knowledge about energy deregulation. Both subprojects, however, would use a pre- and posttest evaluation design—collecting data before and after project implementation—to evaluate the effectiveness of the services provided. A control group that did not receive services was to be created from eligible households to permit comparison with households that received services. In the energy conservation and life skills education subproject, household energy use data would be collected by seven service providers—located in different areas—for the year before and the year after services were provided. The evaluation report indicated that these data were to come from utility bills obtained from energy suppliers. In addition, participants would also be mailed a questionnaire after project completion to obtain their views on the project. In the energy deregulation education subproject, households would complete a questionnaire before and after project implementation to determine changes in their knowledge about energy deregulation. 
The energy conservation and life skills education subproject was designed to help low-income households develop plans identifying changes directed at achieving an immediate reduction in energy use. Education would be provided through workshops. Other workshops would also be provided that addressed broader issues, such as preparing household budgets and finding employment. This subproject initially targeted 1,100 households at various locations throughout the state. The other subproject was designed to prepare low-income households for the effect of utility deregulation through education and to serve as a consumer advocate. This subproject was to consist of placing informational articles on energy deregulation in newsletters and in pamphlets that were delivered to participating households. The evaluation report stated that the subprojects had a target population of households at or below 150 percent of the federal poverty level. The report also stated that households in the deregulation education subproject were participants in the Head Start Program. The report did not further explain the selection criteria or process other than to note that the control group would be selected from eligible households. According to the evaluation report, there was no significant difference before and after the project in natural gas use by participants at the three locations for which data were available. However, on the basis of the results of two locations reporting pre- and postproject data, the report found the project effect on electricity use mixed: a significant reduction was reported at one location but not at the other. On the basis of participant responses to the postproject survey, the evaluation assessed the results, or outcomes, of four performance measures in qualitative terms: Increased regularity of utility bill payments—most participants described this workshop education component as helpful. 
Reduced reliance on energy assistance programs—about one-third of the participants indicated that their use of emergency energy assistance had decreased. Increased knowledge of energy conservation methods—the majority of the participants felt that the energy workshop was helpful. Increased understanding of utility bills—most participants found the educational information helpful. Also, on the basis of participant responses to the surveys, before and after the project, the evaluation report concluded that there was no change in household understanding of utility deregulation. The evaluation report noted that the data results cited on natural gas use should be used with caution because of the small number of households for which data were available. Three locations provided data on natural gas use, while two locations provided data on electricity use. Data on gas use were adjusted for weather, but data on electricity use were not adjusted. The report also noted that the control group was not valid because some households may have also received weatherization services. In addition, the evaluation report stated that changes were made to the project in the second year, but no postproject data were available to include in the final evaluation report. Regarding the survey of participants, the evaluation report noted that the assessment of perceptions of energy conservation and life skills workshops was based on 299 surveys returned out of 740 surveys mailed—a response rate of 40 percent. According to the evaluation report, it was not possible to measure the effect of the project on earned income, reductions in housing safety problems, and availability of affordable housing. The effectiveness of the Michigan REACH project cannot be determined because of limitations in design and implementation.
For example, the lack of data on participant energy use, along with other analytical problems, such as the lack of a control group and failure to adjust energy use data for weather conditions, compromised the validity of the findings on energy use. As a result, a valid assessment of the effect of project activities on energy use or regularity of participant utility bill payments cannot be made. In addition, the pre- and postintervention estimates were based on limited data, covering only 2 to 4 months. Moreover, the indication of a decrease in electricity use for one location was based on data that were not adjusted for the effects of changes in weather. In addition, the report had other shortcomings that precluded an assessment of the project’s overall effectiveness. The first Michigan subproject focused on changing participant behavior through education. The inability to obtain quality data on participants’ energy use from the service providers and the local energy supplier—especially data on postintervention energy use—meant that conclusions were based on relatively few records. For example, postintervention data were not available on gas use for four of the seven locations and not available for electricity use at five locations. Moreover, the postintervention data did not cover an entire year—data were available for only 2 to 4 months. Overall attrition was also an issue. The evaluation report states that 1,028 participants were served by seven locations from October 1997 to September 1999 but that pre- or postintervention fuel consumption data were usable for 570 participants. Matched pre- and postintervention fuel data were available for 93 participants using natural gas and 116 participants using electricity. These data come from three of the seven locations. These limited data restrict inferences that could be drawn from the originally intended locations.
Additionally, attrition occurred within the locations for which pre- and postintervention data were available. One of the three locations served 200 participants and obtained 34 usable natural gas records and 43 electric records. The second location served 156 participants and obtained 40 usable natural gas records and 73 electric records. The third location served 150 participants and obtained 19 usable natural gas records and no usable electricity data. This level of attrition raises the issue of bias because participants who remain in the project may differ from those dropping out in ways that are likely to affect the outcome. The high attrition rate also reduces the precision of any statistical estimate of change in fuel usage. Even if sufficient data had been available for statistical analysis, not having a valid control group for comparison greatly limited the ability to make a valid assessment of project effects. Similarly, failure to adjust energy use data for weather conditions prevented a valid assessment of participant energy use. In addition to not including all of the information needed to assess the statistical analysis and validity of its findings, the evaluation report did not provide other important information needed to assess the project’s overall effectiveness. For example, the report did not address the project’s performance in terms of the project performance goal provided in the authorizing legislation and the HHS program announcement: increasing energy suppliers’ contributions to reducing the energy burden of eligible households. The report also did not provide a complete discussion of the project design, including critical project assumptions, hypotheses to be tested in terms of measurable objectives, or lessons learned from implementing the design. The primary goal of the Nebraska REACH project was to increase the economic self-sufficiency of low-income families by increasing household energy efficiency.
The project sought to decrease participants’ household utility bills by reducing energy use. A second project goal was to increase participants’ knowledge of energy conservation practices and personal finance. Additional project goals included increasing the environmental comfort and health and safety of the participating households. The Nebraska REACH project was to focus on changing the behavior of the target population—low-income households—as the best way of achieving project goals. This approach was based on the assumptions that low-income households were most often renters who frequently moved and that the housing available to them was generally substandard. Given these premises, it was assumed that the only way to reduce energy use in this population would be to change household behavior through teaching energy-efficient practices. It was also assumed that it would have been too expensive to weatherize substandard housing, and upgrading the structure would have benefited the household only temporarily, until its next move. According to the evaluation report, the Nebraska REACH project provided a unique opportunity to evaluate a social services project using a “true experimental design.” The project was designed to use two groups of low-income households—a test group and a control group—to evaluate the primary hypothesis that changing behavior could reduce energy use. Data collected on the two groups, both before and after project implementation, would be analyzed using a standard computer program—the Princeton Scorekeeping Method (PRISM) software—to determine the statistical significance of the results. The PRISM software uses data from monthly utility bills to produce a weather-adjusted index of energy use for both the test and control groups. The program also compares the differences in energy use among households in the test group and between households in the test group and control group.
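As a rough illustration of the kind of weather normalization PRISM performs, the sketch below fits daily fuel use to heating degree days and projects a normalized annual consumption. It is a simplified stand-in rather than the actual PRISM algorithm (which, among other refinements, searches for the best-fit reference temperature), and all billing figures are invented:

```python
# Simplified sketch of weather-normalized consumption, in the spirit of
# PRISM.  Billing data and the normal-year degree-day total are invented
# for the example; this is not the actual PRISM implementation.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x, returning (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical billing periods: heating degree days per day, and metered
# gas use per day (therms) over the same periods.
hdd_per_day = [0.0, 2.0, 10.0, 25.0, 30.0, 12.0]
use_per_day = [1.0, 1.2, 2.0, 3.5, 4.0, 2.2]

# Intercept is the weather-independent base load; slope is use added
# per heating degree day.
base_load, heating_slope = fit_line(hdd_per_day, use_per_day)

# Normalized annual consumption: base load over a full year plus the
# heating slope applied to a long-run "normal" year of degree days.
normal_annual_hdd = 6000.0
nac = 365.0 * base_load + heating_slope * normal_annual_hdd
print(f"normalized annual consumption: {nac:.0f} therms")
```

Comparing such normalized figures for the test and control groups, before and after the project year, is what allows weather-driven changes in fuel use to be separated from changes attributable to services.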
Members of both the test group and the control group would be required to sign forms authorizing the release of their utility bills to the project. In addition, knowledge of energy conservation practices and personal financial management obtained from the workshops would be measured using pre- and postknowledge tests and a third test given 6 months after project completion. Households were to receive four related services: (1) a home energy audit, (2) instruction in energy conservation practices and personal financial management, (3) case management, and (4) provision of basic home weatherizing materials. Home energy audits would include an interview and inspection by a certified energy specialist to assess the household’s energy efficiency and form the basis of the action plan. Workshops providing instruction on energy conservation and personal finance would be available to participants each month, and training manuals would be distributed providing detailed instruction on these issues. Case management would include developing a household action plan, preparing family assessments, and conducting monthly progress reviews during home visits. Materials such as weather stripping, along with instruction on installation, would also be provided. In addition to these services, project participants would receive between $200 and $350 in vouchers to pay utility bills. Control group members would receive $50 in vouchers for utility bills. Potential applicants were recruited through advertisements and social service agency referrals. Case managers determined project eligibility using four criteria: participants had to (1) have income at or below the poverty level, (2) have resided at their current address for at least 1 year, (3) not be planning to move, and (4) need assistance in paying utility bills. The control group was randomly selected from every third eligible applicant. 
There were no significant differences between the two groups in either education or other demographic characteristics. During implementation, the project provided services and cash assistance to 439 participants and cash assistance and goods to 202 control group members. Separate analyses were done for natural gas and electricity use using the PRISM computer program. PRISM provided normalized annual consumption data (pre- and postproject) and normalized annual savings separately for both gas and electricity. The PRISM analysis indicated a statistically significant decreased use of natural gas by REACH project participants compared with those in the control group. Program participants had normalized annual savings of $48.12 per household more than the control group—or a savings of 122 hundred cubic feet. The result of the PRISM analysis for electricity use revealed a different use pattern. Although the analysis showed no change in electricity use by project participants, the analysis revealed an annual increase in electricity use of 2,716 kilowatt hours by the control group. The evaluation report stated that this effect was consistent with the hypothesis that electricity use would be lower for project participants—an annual savings of $149.38 per household. On the basis of these findings, the evaluation report concluded that project participants had achieved greater economic self-sufficiency through a reduction in utility costs—an average of $197.50 each year—compared with the nonparticipant control group. However, the report also noted that an actual reduction in utility use for project participants was confirmed only for natural gas and not for electricity. In addition, given the results of the knowledge tests administered to participants, the report concluded that the instruction provided during the project resulted in a significant increase in knowledge about energy conservation practices and personal financial management. 
The evaluation report attributed the project’s ability to implement the project design as planned and to obtain sufficient data to perform the statistical analysis to the community-based organization’s experience, expertise, and knowledge of the target population and to the early involvement of the evaluator in the project’s design. Cash incentives provided to the test and control groups also appeared to have reduced participant attrition. In addition, administrative controls designed to ensure access to utility bills seemed to have played a role in obtaining sufficient data for analysis. The evaluation report noted several factors that greatly reduced the number of households whose energy use data could be used in the PRISM analysis. For example, frequent changes in residence eliminated many households from the analysis. In addition, frequent changes in household size and composition, along with utility shutoffs, made much of the data too unreliable for use in the PRISM analyses; only a small percentage of the database was considered appropriate for analysis. According to the evaluation report, analysis was limited to determining the overall effect of the project, not the effect of specific services or combination of services (i.e., identifying which services were most effective in achieving a reduction in energy use.) The effectiveness of the Nebraska REACH project can be assessed with some qualification, given the information presented in the evaluation report. The project’s design and implementation allowed statistically valid conclusions to be made about the effect of project services on participant energy use. The report did not address the two other performance goals of reducing utility bill arrearages and increasing contributions by energy suppliers and did not provide other information needed to fully assess the project’s overall effectiveness. 
The evaluation report did not provide an assessment of potential bias due to attrition, which may result if those remaining in the project systematically differ from those who dropped out, and its possible effect on the analyses and conclusions. Although the evaluation report provided a statistical analysis regarding the project’s main hypothesis, the report did not address two of the project performance goals identified in the authorizing legislation and the HHS program announcement. Specifically, the report did not address the performance goal of increasing the regularity of utility bill payments or the performance goal of increasing contributions by energy suppliers. Finally, the evaluation report also did not state the hypotheses in terms of measurable objectives. The report did not provide a discussion of lessons learned or best practices that could be valuable to other projects. For example, if the report had discussed the strategies used to successfully collect energy use data, this information could have been useful to other projects. This lack of information is often cited as a principal reason for not being able to measure project performance. The primary goal of the Oregon REACH project was to help low-income households manage their energy costs more effectively. The evaluation report identifies four related project goals: reducing household energy use; reducing household energy cost; increasing the regularity of utility payments, reducing arrearages in utility bills, or both; and eliminating health and safety risks related to energy use. The long-term objectives of the project were described as sustained reduced energy use, overall improvement in economic self-sufficiency, and eventual elimination of reliance on energy assistance payments.
Three primary measures were used in the evaluation report to assess the impact of the services provided by the project: Reducing household energy use—75 percent of participating households will reduce energy use by 15 percent. Reducing household energy burden—no operational objective stated. Reducing arrearages in utility bills—75 percent of participating households will reduce arrearages in utility bills, and 50 percent of households will not incur new arrearages for 6 months. Other measures used in the evaluation report addressed program activities, such as completing action plans and enrolling participants in social service programs. The Oregon REACH project was based on the premise that providing services to low-income households—such as information on energy conservation practices and personal financial management—would result in changes in their behavior that would reduce their energy costs and utility bill arrearages. The evaluation report identified three general assumptions that guided development of the project: Coordinating services provided by organizations within the community would be more effective than uncoordinated assistance. Household knowledge of energy conservation, as well as assistance in weatherizing homes, is necessary to reduce reliance on energy assistance payments. Households that participated in the project would be willing and able to make changes in their behavior and personal financial management that could affect their energy costs and the regularity of their utility payments. According to the evaluation report, the Oregon REACH project was conceived as a “quasi-experimental” design. The project was designed as an experiment to use two test groups and a control group to determine the effect of the services provided. The report noted that the inclusion of a control group was one of the distinctive features of the design. A different combination of services would be provided to the two test groups. 
One test group would receive a complete set of services— including home weatherizing and heating system repairs—as needed. The second test group would receive all of the project services, as needed, with the exception of home weatherizing and heating system repairs. Households within each group would receive different combinations of services on the basis of need and availability of the service in the community. The control group would not receive any of the project services. Data on energy use would be collected for all three groups, when possible, for the year preceding the project, the year of project implementation, and the year after project completion. Members of each group would be required to sign forms authorizing the release of their utility bills to the project. These data would be used to evaluate energy use, utility arrearages, and energy burden by performing a statistical analysis of the differences among groups. Both participants and caseworkers would be surveyed after project implementation to obtain their views on the project’s implementation and usefulness. An incentive payment of $20 would be given to households to complete the survey. Services were to be provided to participating households by 13 community-based organizations across the state. Services would be provided through case management that focused on working with households to develop an action plan for reducing their energy use according to an assessment of their needs. Case workers would also provide instruction to households in energy conservation practices and personal financial management, facilitate negotiation with energy suppliers to develop payment plans for reducing arrearages, and make referrals to other organizations providing social services. Other services to be provided would include financial assistance to help pay for utility bills and reduce arrearages, home energy audits, and home weatherizing assistance. 
Project funds were also to be used to replace water heaters, furnaces, and thermostats and to provide carbon monoxide detectors and heating repairs, as needed. An emergency payment of $200 was to be allowed for especially needy households. Households were selected from those already receiving energy assistance. Households also had to have utility bill arrearages equal to one-half the energy assistance payment, an energy burden greater than 15 percent of income, and an energy-related health or safety risk. In addition, participants were selected on the basis of their motivation and the priorities of local communities. According to the report, the control group was selected from energy assistance recipients at each participating community-based organization. The sampling procedure was not discussed. According to the evaluation report, the Oregon REACH project was largely successful in achieving two of its primary goals: reducing energy use and reducing arrearages in household energy bills while increasing the regularity of payments to energy providers. The report noted that the project strongly supported the assumption that coordinated services effectively reduce energy use, energy costs, and energy burden for low-income households. In addition, the report stated that its analysis confirmed that the services helped households reduce arrearages and increase the regularity of utility bill payments. The evaluation report assessed the effect of services provided to households completing the project. The evaluation report stated that participant attrition was about 8 percent in the last half of the first year and 16 percent in the second year of the project. The evaluation report found that both test groups reduced their electricity use by 11 percent for the year after project implementation. Of the 173 households receiving services, 58 (or 33.5 percent) used less energy, and 40 (or 23 percent) achieved a reduction of at least 15 percent.
This was 52 percentage points short of the project goal of 75 percent of households achieving the 15-percent reduction. In addition, households were found to have reduced their energy burden by 2.5 percent. The evaluation report also found that both test groups reduced arrearages in their utility bills as a result of participation in the project and that this difference was statistically significant. According to the report, the project goal that 50 percent of households not incur new arrearages for 6 months was met, and the number of households with arrearages decreased from 59 percent (102 participants) to 36 percent (63 participants). Moreover, the evaluation report found that households receiving the additional services of weatherizing and repairs achieved a slightly larger reduction in their energy bill arrearages than the test group not receiving those services—$77 compared with $55—but noted that the difference was not statistically significant. In the exit surveys, most households indicated that the project had helped make their homes healthier, safer, and more comfortable and energy-efficient. Similarly, in the staff survey, case workers indicated satisfaction with the project. As a result of all these findings, the evaluation report concluded that the Oregon REACH project greatly assisted low-income households in achieving a greater degree of energy self-sufficiency. The evaluation report did not specifically identify limitations other than to note that the difference in the energy use among the groups before project implementation was statistically significant, suggesting that the households may not have been assigned to the test and control groups in a random fashion. The control group had the lowest level of energy use both before and after project implementation.
The evaluation report also noted that not all households received the full benefits because they did not complete the project and that it was difficult to contact some participants because they did not keep appointments or did not have telephone service. Two hundred twenty-four households were listed as members of the test groups, but fewer households were used in the various analyses. Data were not available for participants entering the project in the second year. The effectiveness of the Oregon REACH project cannot be determined on the basis of the information presented in the project evaluation report. Although the study did have a control group, the omission of a comparison of the test groups with the control group is a shortcoming in the analyses: Only the energy consumption analysis compared the control group directly with the REACH groups. The assessments of both energy burden and change in arrearages used two REACH test groups but did not compare changes between the two groups. Instead, they assessed changes from pre-REACH to post-REACH project points within each of the groups separately. The study did not assess the impact of the project by comparing the change in energy use of the two REACH groups to the change in energy use of the control group. The report also noted that participants might not have been assigned to groups in a random manner, leaving the possibility that the changes observed could be the result of factors other than the services provided. Finally, the report had other shortcomings that precluded an overall assessment of the project’s effectiveness. The evaluation report did not identify important limitations of the analysis or their effect on the conclusions. As noted, failure to compare the test groups and the control groups for assessment of changes in energy use and arrearages limits the usefulness of the findings.
The report also did not address the effect of the criteria and procedure for selecting both the test and control groups, as well as the attrition rates from these groups. These factors also adversely affect the report’s analysis and conclusions. For example, assignment to the test and control groups appears not to have been on a random basis. Test group selection resulted in a group of households with the highest energy use, whereas control group selection resulted in a group of households with the lowest energy use. Such assignment issues weaken the conclusions that can be made about the effect of REACH services on energy use. In addition, the report did not state whether energy use data were adjusted for weather. In addition to not including all of the information necessary to assess the statistical analysis and validity of the project’s results, the evaluation report did not provide other important information needed to assess the project’s overall effectiveness. For example, the report did not address one of the project performance goals stated in the authorizing legislation and the HHS program announcement—increasing energy suppliers’ contributions to reducing the energy burden of eligible households. In addition, the report did not provide information on the dollar amount of the expected reduction in energy use or the amount of financial assistance given to households. Finally, the evaluation report did not provide a discussion of lessons learned. A discussion of best practices—the techniques, procedures, and controls—used to ensure sufficient data for analysis might have been useful to future projects. The following are GAO’s comments on the Department of Health and Human Services’ letter dated July 23, 2001. 1. The technical comments have been incorporated as appropriate. Several more substantive technical comments are addressed individually below. 2. 
The authorizing legislation defines home energy as “a source of heating or cooling in residential dwellings.” While HHS argues that residential energy commonly refers to all household energy use, for the purposes of this review, we used the narrower definition stated in the legislation. We have also changed the wording of our third recommendation, concerning the use of REACH program funds, to clarify that REACH program funds should be used for activities directly related to the home heating and cooling needs of low-income households. We do not dispute that a broad range of services is authorized in the legislation, but these services should relate to home heating and cooling. 3. We are not including the possibility of expanding the program in our matters for congressional consideration because the effectiveness of the REACH program has not yet been determined and because we believe that HHS should be required to report to the Congress in 2003 on the REACH program’s effectiveness. 4. While coordination among programs is addressed in a general way in the REACH program announcements, we believe that HHS should more specifically articulate its view of the REACH program’s purpose and relationship to LIHEAP in the Administration for Children and Families’ performance plan. We also note that a performance plan is more readily accessible to the Congress, the public, and other agencies than the program announcements intended for potential grant applicants. 5. Our draft report did not state that the use of randomized control groups is the only acceptable design, so we have not made any wording changes. Although there are legitimate questions concerning using randomized control group designs in social service programs, such designs have sometimes been effectively used. For instance, when resources are not sufficient to provide services to all who are eligible, random assignment to control groups can be feasible and ethical and provide convincing results. 
When randomized control groups are not feasible, evaluations can be designed to use nonequivalent control groups, and, with suitable caveats, such designs may yield defensible results. Regardless of the design used, evaluations of project results must be able to distinguish between results that are most likely due to the project services and results that may be due to other, external factors. 6. We have added language indicating that the Office of Community Services is aware of the evaluation challenges that we cite. 7. Pages 14 and 15 of our report describe the legislative authority and restrictions on the use of LIHEAP funds, as well as the need to use much of LIHEAP’s funding for direct assistance with energy bills. We also report that, as allowed by law, 25 states plan to use a portion of their fiscal year 2001 LIHEAP funds for activities such as energy education and budget counseling and 44 states plan to provide weatherization services. As HHS’ comments note, opportunity exists to continue some of the activities tested under REACH grants through states’ LIHEAP funds. We have made minor wording changes to more clearly recognize that not all REACH activities may fall within the legislative and funding constraints of the LIHEAP program. 8. Coordinating with other programs is not the same as spending REACH grant funds for non-energy-related activities. We believe that HHS’ planned efforts to clarify language in its program announcement should help ensure that REACH funds are used only for activities directly related to the home energy needs of low-income households and should provide a clearer basis for reviewing and selecting proposals. In addition, Julie Gerkens, Kathleen Gilhooly, Curtis Groves, Rachel Hesselink, Judy Pagano, and Don Pless made key contributions to this report.
Rising prices for natural gas, electricity, and other fuels have made it even harder for low-income families to pay their utility bills. By the end of fiscal year 2000, the Office of Community Services had awarded $30 million in Residential Energy Assistance Challenge Option (REACH) program grants to 24 states and 12 tribal organizations to fund 54 separate projects to help meet the home energy (heating and cooling) needs of low-income households. These grants ranged from $50,000 to $1.6 million. Most of the 54 REACH projects have educated low-income clients about home energy efficiency through group workshops or individual home visits. Many REACH projects have involved energy-related home repairs and budget counseling, and three state REACH projects are developing consumer cooperatives to purchase electricity or bulk fuels, such as heating oil. However, some REACH projects have included social services not directly related to meeting home energy needs. The legislation authorizing REACH identifies three performance goals for individual REACH projects: (1) reduce the energy costs of participating households, (2) increase the regularity of home energy bill payments, and (3) increase energy suppliers' contributions to reducing eligible households' energy burden. Of the six project evaluations completed by states as of May 2001, only one project's design and implementation allowed statistically valid conclusions about the effect of project services on participants' energy use. The other five state project evaluations had analytical problems and other shortcomings that limited their usefulness in assessing project results. The Department of Health and Human Services has not yet developed a comprehensive plan for communicating summary information on best practices and lessons learned from the REACH program.
The Trans-Alaska Pipeline System (TAPS) is the primary transportation link for 20 percent of the nation’s domestically produced oil. For nearly 20 years, TAPS, which was built between 1974 and 1977 to meet specific environmental and technical requirements for arctic conditions, has transported more than 10 billion barrels of crude oil without a major spill. Because of its importance to ensuring the continuity of the domestic oil supply, TAPS and the federal and state agencies responsible for monitoring it have received attention from the Congress throughout the pipeline’s years of construction and operation. While the pipeline was under construction, we reviewed the status of construction and the effectiveness of federal and state monitoring efforts. These reviews and subsequent reports, as well as congressional hearings, publicized recurring problems with the condition of the pipeline, the quality assurance program of its operator, and the effectiveness of government monitoring efforts. More recently, congressional hearings in 1993 highlighted numerous potential deviations from federal and state standards. A 1993 study of TAPS, commissioned by the Department of the Interior’s Bureau of Land Management (BLM), concluded that the pipeline had deficiencies that, if left uncorrected, could pose serious safety risks for workers and potentially cause a pipeline failure. These findings, together with those from other reviews of TAPS, have focused even more attention on the pipeline’s condition. TAPS carries almost 1.6 million barrels of oil per day, down from 2 million barrels a day in 1990, across some of the most rugged terrain in the world. The 48-inch-diameter pipeline transports oil 800 miles from Prudhoe Bay, north of the Arctic Circle, to the ice-free port of Valdez on Prince William Sound. The pipeline crosses 3 mountain ranges, more than 800 rivers and streams, 3 known seismic faults, and hundreds of miles of permafrost (permanently frozen soil).
The Alyeska Pipeline Service Company (Alyeska) operates the pipeline for the seven companies that own it and is responsible for meeting the various regulatory requirements for TAPS. The owner companies fund Alyeska’s budget, which they approve, and Alyeska has its own permanent staff, although a significant number of its upper-level managers are on loan for limited time periods from the owner companies. The laws, requirements, and regulations intended to ensure TAPS’ operational safety, oil spill response, and environmental protection call for monitoring and enforcement by a number of federal and state agencies. The federal government has administrative responsibility for 401 miles of the pipeline’s right-of-way, while the state administers 353 miles, including the Valdez terminal, where oil is loaded on tanker ships for transport to refineries. Of the remaining 46 miles of pipeline, 26 miles are administered jointly by federal and state authorities, and 20 miles are owned by private landholders. Specific operating requirements are contained in federal grant and state right-of-way lease agreements and in additional federal and state regulations and laws. Six federal and six state agencies have significant jurisdiction over some aspect of the pipeline’s operation or the land on which it is located (see table 1.1 for a list of agencies and the nature of their jurisdiction).
The five with primary authority are the Department of the Interior’s Bureau of Land Management, which is charged with enforcing the federal right-of-way agreement on federal lands; the Alaska Department of Natural Resources (ADNR), which enforces the state’s right-of-way agreement on state-owned lands and the federal agreement on certain state-owned lands; the Department of Transportation’s Office of Pipeline Safety, which is responsible for overseeing the operational safety of the entire pipeline under the Hazardous Liquid Pipeline Safety Act; and the Environmental Protection Agency (EPA) and the Alaska Department of Environmental Conservation, which are responsible for enforcing environmental regulations along the pipeline and at the terminal. EPA is also the federal On-Scene Coordinator for responding to on-shore oil spills. Interior’s responsibilities and authorities are the most comprehensive and broadest in scope of any of TAPS’ regulators—covering operational safety and environmental protection issues. In 1990, BLM and ADNR established the Joint Pipeline Office (JPO) to better coordinate federal and state regulatory efforts. This office has since become the focal point for overseeing TAPS. Begun with a small staff from the two agencies, JPO had grown to an authorized staff of 84 in April 1995 with staff assigned or on loan from 8 of the 12 agencies with significant oversight responsibility for TAPS. BLM and ADNR are jointly responsible for JPO’s operations. However, in July 1993, the then-director of BLM testified, in response to whistleblowers’ complaints and other investigations that reported lax regulation practices for pipeline workers’ health and safety, that “Whenever and wherever needed, BLM, as lead agency, will assume the responsibility of ensuring that the mandate of the JPO is carried out fully.” Subsequently, the Executive Council was formed and it has taken the lead in providing focused policy guidance to JPO. 
JPO is organized into two branches, Operations and Administration; the Operations Branch is responsible for ensuring that TAPS is operated in compliance with requirements. Since about 1990, TAPS’ operations have been the subject of many separate audits and studies. Most have focused on a single facility or one operational segment, but several have taken a more systemwide approach. The range of problems they identified was broad. Some deficiencies were considered serious in that they have the potential for causing severe safety and environmental impacts. Other deficiencies were of a less serious nature. For example, the studies
• criticized Alyeska for being reactive and not focused on building in quality;
• identified systemic hardware problems that raise questions about the integrity of the TAPS electrical system; and
• identified hundreds of specific items, such as not having developed procedures for the qualification of inspection personnel.
In response to concerns raised by whistleblowers, safety issues identified by congressional staff, and concerns about how JPO was regulating TAPS, the Subcommittee on Oversight and Investigations, House Committee on Energy and Commerce, held hearings in July 1993. The hearings highlighted a number of potential problems with TAPS. At these hearings, the Director of BLM acknowledged the problems and told the Subcommittee that BLM, which has primary authority for administering the right-of-way agreement on federal lands, was going to take charge and make sure that the problems were corrected. Subsequently, BLM began a program designed to identify and resolve such problems. As part of that effort, BLM in August 1993 contracted with Quality Technology Company (QTC), an independent consulting firm, to investigate the physical condition of TAPS and the management of operations provided by Alyeska and its contractors. QTC conducted a 6-week on-site review that included visits to the Valdez terminal and three of the pipeline’s pump stations.
QTC’s final report, issued in November 1993, was highly critical of Alyeska’s management of the pipeline and pointed out glaring deficiencies in Alyeska’s management and in the condition of TAPS’ equipment. QTC identified 22 broadly scoped deficiencies, which were grouped into three classes according to their potential threat to the safe operation of the pipeline or to the safety of the public and the environment:
• Six deficiencies were considered most threatening because of their potential for causing severe impacts, including death or an oil spill. These deficiencies included a lack of management focus on anticipating and correcting potential problems, a “dysfunctional” quality management program, and massive electrical code violations.
• Nine deficiencies presented moderate threats because of their potential for causing impacts, including severe injury or an oil spill. Examples included the lack of accurate drawings describing the pipeline’s safety system and an inadequate safety inspection program at the Valdez terminal.
• Seven deficiencies fell in the lowest class of threats because their potential impacts were limited to such effects as loss of work time due to injuries or loss of oil. An example was the lack of a maintenance program that develops trends for predicting untimely equipment failures.
While the QTC study addressed conditions on a broad, systemwide basis, many other studies have addressed narrower aspects of TAPS’ operations, such as corrosion of pipeline welds, leak detection, or solid waste management. Since 1990, Alyeska and its regulators have conducted or contracted for more than 40 such studies. Together, they have identified about 500 action items. On September 9, 1993, the TAPS owners contracted with Arthur D. Little, Inc. (ADL), an independent consulting firm, to provide a comprehensive independent assessment of TAPS’ operations.
Unlike the studies described so far, this one involved a detailed, facility-by-facility review of the entire pipeline and its attendant systems. The assessments were conducted by teams led by ADL personnel and composed of experts from ADL and from five of the companies that own TAPS. The assessments focused on compliance with requirements and on management systems relating to operational integrity. The result of the 9-month review was a list of more than 4,200 site-specific deficiencies, issued in two reports (December 1993 and July 1994). The following are examples of the kinds of deficiencies the study identified:
• At pump station 4, the fire alarm system was not in full working order. It did not provide an immediate sitewide alarm that was audible and visible in all areas of the pump station.
• At the main equipment maintenance facility in Fairbanks, Alyeska and contractor employees working with hazardous materials lacked specific hazard training, and the chemical inventory lists were out of date.
• Alyeska’s quality assurance and inspection process did not have a management system defining responsibilities sufficiently to avoid duplication or omission of critical tasks.
In 1991, we reported that federal and state monitoring agencies had not effectively overseen TAPS’ operations. BLM officials told us at that time that JPO was not a regulator; instead, the agencies relied on Alyeska to police itself. We noted, for example, that the regulators did not systematically or independently assess Alyeska’s corrosion or leak detection systems, nor did they require that Alyeska demonstrate that it could respond adequately to a large-scale pipeline oil spill. We concluded that absent effective monitoring, the regulators could not ensure the safe operation of TAPS. We also reported that regulatory efforts had been hampered by a lack of coordination among the various agencies.
We concluded that the recent establishment of JPO was a positive step but that its success could be hindered unless leadership, firm commitments from all regulatory agencies, and secure funding sources were in place. In 1994, a study by Booz-Allen & Hamilton, an independent consulting firm, concluded that weaknesses in regulatory activity were still present. The study found that JPO was not effectively addressing the prevention of pipeline hazards. More effective oversight, the study concluded, could have precluded many of the problems that QTC had found in its review of Alyeska’s operations. Specifically, the study recommended that JPO increase its monitoring of Alyeska’s quality, operations, and maintenance programs—areas of concern that we had reported on since 1976. Alyeska was confronted with the tasks of continuing to operate and maintain the pipeline while at the same time correcting thousands of deficiencies identified in audits conducted for it, its owners, and various government agencies. During 1994, Alyeska continued to transport almost 1.6 million barrels of oil per day through the pipeline, conduct normal maintenance, and carry out numerous projects to upgrade the pipeline system. Alyeska estimates that in 1994 it spent about $81 million on upgrades in three broad areas. About $23.7 million was devoted to programs aimed at ensuring that Alyeska’s operations did not adversely affect the environment through spills or air emissions. About $34.6 million was devoted to improving the protection of the pipeline’s integrity through enhanced corrosion prevention and detection. About $20.2 million was devoted to improving Alyeska’s ability to respond to emergencies related to tanker transport.
During 1994, Alyeska also reorganized the company from a centralized, functionally structured organization to one in which more of the responsibilities are decentralized to “business units.” The purpose of the reorganization was to give the business units increased control over the resources they need to operate and to provide greater accountability for operations. The four business units are the Northern Business Unit, comprising pump stations 1 through 4; the Southern Business Unit, comprising pump stations 5 through 12; the Valdez Terminal Business Unit; and the Ship Escort Vessel System Response Business Unit. On February 23, 1994, the former Chairman, Subcommittee on Oversight and Investigations, House Committee on Energy and Commerce, asked us to review Alyeska’s progress in addressing problems that QTC had identified with TAPS. On March 28, 1995, the current Chairman, House Committee on Resources, which now has oversight jurisdiction for TAPS, became a joint requester of this review. Specifically, we
• assessed Alyeska’s progress in resolving deficiencies identified by the QTC audit;
• determined whether Alyeska’s planned actions for three areas of deficiency—electrical integrity, quality, and maintenance—will address these deficiencies;
• determined whether regulators are taking action to improve regulatory oversight of the pipeline; and
• identified the root causes of the deficiencies.
To address the first objective, we reviewed Alyeska’s periodic reports, through the end of April 1995, on the status of actions taken to correct the QTC-identified deficiencies. Because Alyeska and its regulators incorporated the results of a number of other reviews besides QTC’s into the data base of action items, we expanded our review to report Alyeska’s progress in correcting deficiencies identified by these studies as well.
To assess the reliability of Alyeska’s reports, we (1) reviewed the procedures that Alyeska’s quality assurance staff uses to monitor corrective actions and the documents certifying completion of various steps in the process, (2) reviewed JPO’s procedures for verifying corrective actions and the documents certifying completion of various steps in the process, (3) accompanied JPO inspectors on field visits to observe inspections as they were being made, and (4) performed on-site reviews of a number of the reported corrections. However, because the number of action items was so extensive and because many of the actions taken were still under way, we did not systematically verify the accuracy of Alyeska’s entire list of corrections. Chapter 2 contains our findings on Alyeska’s progress in resolving identified deficiencies. To address the second objective, we interviewed regulators, Alyeska personnel, consultants, and QTC’s lead auditor; reviewed Alyeska’s documentation of actions completed, under way, and planned; and traveled to various sites along the pipeline to observe conditions for ourselves. We conducted on-site work at the Valdez terminal, two pump stations, and several field locations and observed from the air about 100 miles of the pipeline’s 800-mile length. In addition, specifically in regard to the deficiency area of electrical integrity, a GAO electrical engineer accompanied us on a detailed tour of the Valdez terminal. We received briefings on the electrical problems at the terminal and on the steps being taken to correct them and reviewed selected electrical studies and discussed their methodologies and results with contractor and Alyeska staff. Chapter 3 contains our findings on Alyeska’s actions in three areas of deficiency identified by the QTC study. To address the third objective, we reviewed prior GAO reports, the 1994 Booz-Allen study of JPO, and actions JPO and its member agencies were taking in response. 
We met with JPO managers and staff and with representatives of consulting firms employed by JPO or its member agencies to supplement its oversight work. We reviewed examples of JPO’s actions in overseeing the resolution of action items. We reviewed JPO’s plans, procedures, and other documents. Chapter 4 contains our findings on this objective. To address the fourth objective, we reviewed past studies of TAPS to determine the root causes of problems that these studies had identified. We also interviewed regulators, Alyeska officials, and owner company officials to obtain their opinions about root causes. We then reviewed the actions that Alyeska and its regulators had taken or were taking to address root-cause issues. Our work included interviews with Alyeska and JPO managers as well as with field staff to determine whether corrective actions were being carried out. Chapter 5 contains our findings. Besides our on-site field work at Valdez and along the pipeline, we conducted work at state and federal agencies in Anchorage and Alyeska’s offices in Anchorage and Fairbanks. We conducted our field work between March 1994 and April 1995 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to Alyeska and JPO. We met with the President of Alyeska and officials of JPO, including BLM’s Authorized Officer and Alaska’s State Pipeline Coordinator. These officials agreed with GAO’s assessment of their efforts to correct audit deficiencies and improve regulatory oversight. The President of Alyeska and the Chairman of the TAPS Owners Committee commented that the draft report was an objective, professional assessment of the work by the TAPS owners, Alyeska, and JPO to respond to various audit findings. 
The President added that while the draft report accurately described the organizational structure for Alyeska’s quality program at the time of our work, Alyeska is in the process of making some additional organizational changes. We have revised our draft report to describe Alyeska’s planned changes to its quality program. Alyeska also provided detailed comments to clarify the draft, and where appropriate, we made changes to the report. In addition, Alyeska provided written comments. (See app. III.) The JPO officials stated that the draft was fair and impartial and accurately captured both the successes achieved and the challenges remaining for both Alyeska and JPO. They fully concurred that secure funding for JPO and Alyeska is vital to ensuring the continued safe operation of the pipeline. While they believe that Alyeska has made many positive changes thus far, they believe the work ahead in implementing the plans will be much more difficult. Consequently, they believe that periodic, comprehensive oversight from an independent source is critical to ensure that JPO and Alyeska continue their improvement efforts. The officials also provided suggestions to clarify the draft report, and where appropriate, we incorporated their suggestions into the report. Alyeska has made substantial progress toward resolving the deficiencies. However, during this period, Alyeska’s target for correcting all of the deficiencies slipped from December 1994 to 1996; a small number of items will extend beyond 1996. The completion dates slipped for a variety of reasons, including a larger than expected number of deficiencies, the complexity of many of the corrections, and Alyeska’s overly optimistic estimation of the time needed to make corrections. Alyeska is taking actions to ensure that the remaining deficiencies are corrected on a priority basis and that JPO can track progress. 
To determine what work needed to be done to correct the audit deficiencies, Alyeska reviewed the results of more than 40 audits and studies of the various TAPS components. It translated the deficiencies identified in these audits and studies into a total of 4,920 action items. Alyeska established a data base for tracking all of these items and a system for planning, conducting, and approving the work. By April 1994, Alyeska had identified about 1,700 action items stemming from deficiencies identified in the various TAPS audits and studies. These action items came from three sources—the first phase of the ADL study, which had been completed in December 1993; the QTC audit; and previous audits done primarily for Alyeska or its regulators. For the action items identified by April 1994, the first-phase interim report from ADL produced the most items—1,128 (subsequently expanded to 1,132). Alyeska translated the 22 overall deficiencies identified in the QTC study into 187 (subsequently expanded to 208) action items, and the findings of the various other audits and studies identified about 380 items (subsequently expanded to about 500). The second phase of the ADL study, completed in July 1994, led to an additional 3,100 action items. With these and with additional findings from other audits, the action items reached a total of 4,920. In January 1994, to keep track of the action items, Alyeska and JPO developed the Audit Compliance Tracking (ACT) data base and procedure, which was essentially in place in March 1994. In developing this data base, Alyeska and JPO also agreed to a process for identifying and resolving the action items. This process can be summarized in three main steps: identifying and setting priorities for the action items, preparing and approving corrective action plans, and preparing, reviewing, and verifying the closure packages for the work done to correct the deficiency. 
Reports generated from this data base provide JPO with updated information on Alyeska’s progress in correcting the deficiencies, and JPO summarizes this information in its annual report to congressional oversight committees. Under the process agreed to by Alyeska and JPO, Alyeska’s Integrity and Compliance Division was responsible for reviewing all internal and external audits and assessment reports to identify the action items, assigning responsibility for the corrective actions, and entering the action items into the data base. In doing so, the division also set priorities for the action items on the basis of their potential impact on the pipeline’s integrity. The priority system contains four levels, as shown in table 2.1. Alyeska’s quality assurance office and JPO reviewed and approved the priority level for each action item. The action item process called for the Alyeska unit responsible for each action item to prepare a corrective action plan (CAP) describing how a deficiency would be fixed if the item was a priority level-1 or level-2 item or a priority level-3 or level-4 item requiring 40 or more hours of labor. Before corrective action can begin on level-1 and level-2 items, the CAPs go to Alyeska’s quality assurance staff and JPO for review and approval. After November 1994, Alyeska and JPO agreed that level-3 and level-4 CAPs do not need JPO review. When the Alyeska unit responsible for the action item has corrected the deficiency, it prepares a closure package containing the applicable procedures and drawings documenting how the item was corrected. Each closure package is reviewed and verified by Alyeska, JPO, or both. Alyeska’s quality assurance unit verifies closure packages for all level-1, level-2, and level-3 items, and Alyeska’s contract compliance unit or the unit responsible for making the correction verifies the closure packages for level-4 items.
JPO also verifies all level-1 closure packages and a minimum 20-percent sample of level-2 packages. By the end of April 1995, Alyeska reported that it had completed work on 3,030 of the 4,920 action items—about 62 percent (see table 2.2). It had also developed a CAP for a number of other action items—primarily level-1 and level-2 items—that had not yet been closed. In all, Alyeska had approved 2,242—about 97 percent—of the 2,320 CAPs delivered for review. JPO had approved 2,126 of those. As table 2.2 shows, Alyeska had closed a higher percentage of items at priority levels 3 and 4 than at priority levels 1 and 2. Alyeska officials told us that because they initially anticipated closing all action items by December 1994, they did not use the priority levels as a basis for determining which work should be done first. Some priority level-1 items have been closed, such as the possible problem of natural gas liquids being mixed in with the crude oil in the pipeline—a situation that could lead to a safety problem at pump station 1—and the redesign of a control system that used fuses to protect against electrical current surges (a design restricted under the National Electric Code). Many others, however, remain open. For example, the ADL study found that Alyeska had no risk management system in place at the terminal to (1) identify hazards at key equipment and facilities, (2) assess the consequences and probabilities of occurrence, and (3) evaluate possible prevention and mitigation measures. According to Alyeska officials, the TAPS owners have approved an overall policy for such a risk management system, and it will be tested in pilot programs. Full implementation is scheduled for November 1995; training is to be completed in the first part of 1996. Of the 208 QTC items that we focused on, as of the end of April 1995, Alyeska had resolved 95 items, and CAPs were approved for 166 of the 180 items requiring CAPs.
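The CAP and closure-package rules described above amount to a small decision procedure keyed on an item's priority level. The sketch below is illustrative only: the function names and data layout are ours, not Alyeska's or JPO's; only the thresholds and verification responsibilities come from the report.

```python
# Illustrative sketch of the ACT review rules described in the text.
# The names and data layout are hypothetical; the rules (CAP thresholds
# and verification responsibilities) are taken from the report.

def cap_required(priority: int, labor_hours: float) -> bool:
    """A corrective action plan (CAP) is required for all level-1 and
    level-2 items, and for level-3 and level-4 items requiring 40 or
    more hours of labor."""
    return priority in (1, 2) or labor_hours >= 40

def jpo_must_review_cap(priority: int) -> bool:
    """After November 1994, JPO review of CAPs applied only to
    level-1 and level-2 items."""
    return priority in (1, 2)

def closure_verifiers(priority: int) -> list:
    """Alyeska quality assurance verifies closure packages for levels
    1 through 3; level-4 packages are verified by contract compliance
    or the unit that made the correction. JPO verifies all level-1
    packages and at least a 20-percent sample of level-2 packages."""
    if priority in (1, 2, 3):
        verifiers = ["Alyeska QA"]
    else:
        verifiers = ["Alyeska contract compliance or responsible unit"]
    if priority == 1:
        verifiers.append("JPO (all packages)")
    elif priority == 2:
        verifiers.append("JPO (>=20% sample)")
    return verifiers

print(cap_required(3, 40))   # level-3 item at the 40-hour threshold -> True
print(closure_verifiers(2))
```

One design point worth noting: because CAP review and closure verification depend only on priority level (plus the labor-hour threshold), the priority assigned when an item first enters the data base determines the oversight it receives for the rest of its life, which is why both Alyeska's quality assurance office and JPO approve each item's priority level.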
Examples of closed level-1 and level-2 items include better monitoring of emissions volumes from tanker vents during filling at the Valdez terminal and improved maintenance procedures for a diesel engine that was not being properly maintained. Most level-1 items remain open. For example, a contractor is producing drawings of the current configuration of various facilities in a multiphase project. Approximately 40 percent of the drawings to be produced in the initial phase have been provided to Alyeska; the remainder are to be received by the end of July 1995. In the spring of 1994, Alyeska anticipated having to close about 3,000 action items. On that basis, it projected that it would complete action on and close all items by December 1994. The final total of action items, however, was considerably higher than expected. In January 1995, Alyeska revised the planned completion date. Alyeska’s plan, as of February 1995, calls for closing 85 to 90 percent of the 4,920 items by December 1995 and closing the remaining items by the end of 1996, except for a very small number of items generally associated with the Vapor Recovery Project at the terminal (a program to recover hazardous vapors from the oil tankers) and the Tank Cathodic Protection program (a corrosion prevention program for oil storage tanks). Completion of these will extend beyond 1996. The two most expensive projects are those involving correcting electrical deficiencies, known as the AKOSH/NEC Safety Compliance Program (ANSC) project, and efforts to update the drawings to match the equipment in place, known as the As-Built project. These two projects, which account for 70 percent of the projected costs to resolve the deficiencies, are near completion. Alyeska spent almost $133 million on the ANSC project in 1993 and 1994 and plans to spend an additional $41 million to complete it by August 1995.
Alyeska also spent over $22 million on the multiphase As-Built project in 1994 and plans to spend an additional $15 million to complete the current phase by June 1995. The next most costly project authorized for 1994 and 1995 was related to correcting problems with the trays carrying electrical cables. Correcting these problems is expected to cost $5 million at the pump stations; additional expenditures will be necessary at the Valdez Marine Terminal. In total, Alyeska reported that it spent about $222 million on corrective actions in 1994 and expects to spend an additional $72.5 million in 1995. Alyeska’s Vice President responsible for the corrective action process estimated that an additional $5 million to $7 million will be spent in 1996 on corrective actions. He also said that beginning in 1996, the costs of corrective actions to address major items will be included in the pipeline’s operating budget and not identified separately. One problem that affected Alyeska’s ability to meet the initial goal of closing all action items by December 1994 was the unexpected number of items added to the data base after the goal was set. The additions occurred because the number of action items identified in the second phase of the ADL study was more than double what Alyeska had expected. Phase two of the study identified 3,100 items—about 63 percent of the entire ACT data base. Alyeska received the phase-two report identifying these items in July 1994, less than 6 months before its original deadline for completing the corrective actions. Despite these increases, our work indicates that Alyeska closed fewer deficiencies than expected because many high-priority items proved to be more difficult to correct than Alyeska had anticipated and involved lengthy work programs that are being actively pursued. For example, many items in the quality assurance, preventive maintenance, and electrical integrity areas cannot be resolved until a variety of subissues are resolved.
As chapter 3 explains in more detail, successful resolution of the 47 action items related to electrical integrity requires making close to 32,000 specific corrections throughout the entire pipeline system, as well as fixing thousands of electrical housekeeping items and completing a variety of specialized engineering studies assessing additional potential risks. The additional training required to implement some of the corrective actions was greater than anticipated, according to Alyeska managers.

When it became apparent that the December 1994 goal could not be met, Alyeska took several steps to provide a clearer focus on how it was progressing on priority items. Two of these steps are particularly important: the development of a “key items” list and a work scheduling system. The key items list includes
• all level-1 items (the highest-priority items identified by QTC) and
• all 82 level-2 priorities identified by QTC, plus 52 other level-2 priorities that have an estimated cost of $2 million or more to correct.

As of the end of April 1995, Alyeska had completed the corrective actions and its Quality Assurance group had approved those actions for 76 of these key items, or about 33 percent (see table 2.4). Alyeska had developed CAPs for all of the 224 items requiring CAPs, and JPO had approved 179 of these CAPs. Five items did not require CAPs; four of them are closed.

Alyeska has also developed an Operations Impact Plan to select and manage the work that involves field resources. According to Alyeska officials, the primary purposes of this plan are (1) to set priorities for work that requires field technicians’ time and (2) to schedule work according to its priority and the amount of technicians’ time available. This plan represents an important change in approach because it moves away from Alyeska’s earlier approach of attempting to correct all deficiencies concurrently without considering priorities. 
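The prioritize-then-schedule logic described for the Operations Impact Plan can be sketched in a few lines. This is an illustrative sketch only, not Alyeska’s actual system; the item names, priority values, and hour figures below are hypothetical.

```python
# Hypothetical sketch: allocate a fixed budget of field-technician hours
# to work items in priority order (1 = highest priority).

def schedule_work(items, available_hours):
    """Select work items in priority order until technician hours run out."""
    scheduled, remaining = [], available_hours
    for item in sorted(items, key=lambda i: i["priority"]):
        if item["hours"] <= remaining:
            scheduled.append(item["name"])
            remaining -= item["hours"]
    return scheduled, remaining

# Hypothetical work items; names and figures are invented for illustration.
items = [
    {"name": "electrical integrity fix", "priority": 1, "hours": 400},
    {"name": "valve repaint",            "priority": 3, "hours": 120},
    {"name": "maintenance system work",  "priority": 1, "hours": 300},
    {"name": "drawing update",           "priority": 2, "hours": 250},
]
plan, left = schedule_work(items, available_hours=800)
```

Under this sketch, lower-priority work is simply deferred when technician time runs short, which is the change from the earlier attempt to correct all deficiencies concurrently.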
According to Alyeska officials, the five items with the highest priority will be worked on first during 1995: (1) preparing for compliance with title V air quality regulations, (2) developing a maintenance management system, (3) enhancing the local and wide-area communications facilities, (4) resolving electrical integrity problems, and (5) developing a quality assurance program. These items are expected to be completed by December 1995. Further down in the rankings are such matters as developing a technician training and advancement program based on tested performance and an information management system that will provide the operations organization with on-line access to various information, such as equipment drawings.

While Alyeska has resolved the action items more slowly than originally anticipated, the company has made substantial progress. When Alyeska anticipated that everything could be quickly corrected, it essentially tried to do everything at once, without considering the significance of the problems. Now that its schedule has been extended, Alyeska is trying to match priorities with available resources so that higher-priority items are corrected first.

We analyzed three areas in which QTC identified substantial deficiencies: the integrity of electrical systems, the quality program, and Alyeska’s approach to preventive and predictive maintenance. QTC had concluded that problems in these areas presented potential threats to the safety of the public and the environment. Our objective was to examine Alyeska’s actions in these areas to determine whether the planned actions will address the problems QTC identified. Although the implementation of corrective measures in all three areas is not yet complete, Alyeska is making progress in correcting these deficiencies. The actions taken and planned, if fully carried out, appear adequate to address the problems that were identified. 
QTC reported that the pipeline’s electrical systems constituted “the greatest hardware threat to the health and safety of the public and the environment/ecosystem.” As evidence, QTC pointed to the numerous electric code violations, such as improper grounding, already identified in other inspections. Other violations raised questions about whether the supports for the cable trays that carry cables to various locations around the terminal, and the pipeline itself, could withstand earthquakes. Alyeska had begun an inspection to identify and correct electrical problems, but QTC found that Alyeska’s inspection program was not adequate to ensure that all electrical problems on the pipeline would be identified and adequately resolved. QTC concluded that a more broadly scoped effort was needed. In response, Alyeska developed a two-part process to assess the electrical systems of the pipeline: a detailed inspection and a series of studies of broad-based issues.

Alyeska revised the inspection process and inspected the entire pipeline for electrical safety problems. It developed the ANSC project to ensure that inspection criteria were established, inspections were conducted in an organized fashion, nonconformances were documented, corrective actions were approved in advance, corrective actions were taken, and the completed work was checked. Alyeska folded this new program into an inspection process that had already started at the Valdez terminal and pump stations before QTC began its review. The resulting inspection covered the entire pipeline system, including the terminal, the pump stations, and ancillary facilities. The inspection was completed in December 1994. The ANSC inspection identified about 32,000 individual items that did not conform to the project’s inspection criteria. 
To keep track of these nonconforming items, Alyeska created an extensive document control procedure and a data base system that is separate from the ACT data base. Like the ACT data base, this system tracks the items and classifies them according to priority. About 4 percent of the items were top priorities; that is, they were considered critical to the workers’ safety or the pipeline’s integrity and were not backed up by another system. Like the ACT data base, this system also breaks the deficiency identification and correction process into a series of steps so that progress in completing work can be tracked. Once identified, the deficiencies are validated by engineers. Progress is then tracked through such steps as the development of corrective action plans, review and approval of those plans by JPO, implementation of corrective action, and approval as necessary by Alyeska’s quality control inspectors and JPO’s inspectors.

In addition to the almost 32,000 nonconforming items, Alyeska’s systemwide approach also identified about 17,000 electrical-related “housekeeping” items that could largely be fixed on the spot, like replacing missing screws in cover plates or tightening grounding connections. Some of these items were identified and fixed by teams of electricians in advance of the inspection; others were fixed by electricians who accompanied the inspectors. Alyeska also developed a tracking system to ensure that these items were fixed.

Early in the inspection process, Alyeska estimated that it would be able to correct all of the action items by December 1994. However, the inspections themselves took until December to complete. As of the end of April 1995, Alyeska reported having corrected 19,182 items on the pipeline and 6,940 at the terminal, or about 82 percent of the total. Alyeska also reported that as of January 1995, all of the 17,000 housekeeping items had been fixed. 
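The step-by-step status tracking just described can be illustrated with a minimal sketch. The stage names paraphrase the report’s description of the process; the class, item identifiers, and priority scale are hypothetical and are not Alyeska’s actual data base design.

```python
# Minimal sketch of step-by-step status tracking for nonconforming items.
# Stage names paraphrase the report; everything else is invented.

STAGES = [
    "identified",
    "validated by engineers",
    "corrective action plan developed",
    "plan approved by JPO",
    "corrective action implemented",
    "approved by quality control/JPO inspectors",
]

class ActionItem:
    def __init__(self, item_id, priority):
        self.item_id = item_id
        self.priority = priority  # e.g. 1 = critical to worker safety or pipeline integrity
        self.stage = 0            # index into STAGES

    def advance(self):
        """Move the item one step forward; the final stage closes it."""
        if self.stage < len(STAGES) - 1:
            self.stage += 1
        return STAGES[self.stage]

    @property
    def closed(self):
        return self.stage == len(STAGES) - 1

def percent_closed(items):
    """Share of items that have passed every tracking step."""
    return round(100 * sum(i.closed for i in items) / len(items))
```

A status report then reduces to counting items at each stage, which is how headline figures such as “about 82 percent corrected” can be produced from such a data base.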
Alyeska’s president said the company’s initial estimate for completing the action items in the ACT data base, including ANSC, had been too optimistic. In October 1994, Alyeska revised the target for completing all items to December 1995. However, in March 1995 the company estimated that, weather conditions permitting, it would complete the ANSC project by August 1, 1995. In addition to the inspections, Alyeska is conducting 20 special engineering studies related primarily to electrical issues. Alyeska initiated the studies as part of the ANSC project to determine the best engineering solution to major issues. The need for special studies is one indication of the complexity of many of the electrical problems. (These studies are listed in app. I.) Eleven of these studies have been completed, and completion is imminent for most of the remaining studies. While completing the studies will close some items, in other cases the studies may identify the need for additional actions, and completing those actions may take some time. For example, the study of the cable trays’ structural integrity will likely be completed in May 1995, but the draft identified the need for modifications at both pump stations and the terminal. The schedules for the completion of all related construction work are not yet available. We reviewed three of the studies related to grounding, inspection of motor control centers, and power switching systems to determine whether the studies accurately assessed the problems and whether the recommended actions will address the problems. We believe that the studies accurately assessed the problems and that the actions in progress and planned should correct the problems identified. (These studies and our conclusions are discussed in app. II.) The right-of-way agreement requires that Alyeska have a comprehensive quality program to protect the safety of workers, the public, and the environment. 
Alyeska’s quality program has been the subject of criticism at various times since the pipeline’s initial construction. In its November 1993 study, QTC reported that Alyeska’s quality program was dysfunctional and was thus incapable of ensuring that TAPS had been constructed and could operate efficiently and safely. In January 1994, QTC provided recommendations on how Alyeska should revise its quality program. Alyeska is revising its program to correct the deficiencies QTC identified. However, for a small number of items, JPO has agreed that Alyeska can take a different approach than the one recommended in QTC’s January 1994 report. Completing the corrective actions will take longer than planned.

Alyeska’s problems with its quality program have been long-standing. During the early phases of TAPS’ construction, we reported a variety of problems with how Alyeska was implementing its quality program. For example, in 1976 we reported that TAPS’ construction was about 22 percent completed before Alyeska obtained final approval for its quality program. During this phase of construction, Alyeska’s quality program was not consistently correcting violations of the stipulations to which Alyeska had agreed. Federal and state monitors, rather than Alyeska’s quality program staff, were requiring the correction of nonconforming work. Although improvements were made in July 1975 to correct the problems we identified, we found similar problems in the 1976 construction season.

After construction was completed in 1977, Alyeska continued to have problems with its quality program. QTC described the program, as it existed from about 1980 to 1990, as woefully inadequate. After the Exxon Valdez oil spill in 1989 and other problems, Alyeska began to upgrade portions of its quality program, but these efforts again proved insufficient. Staffing was increased from 11 in 1990 to about 34 in 1993, and Alyeska began revising the documents directing its quality program. 
Alyeska issued a revised quality program manual in October 1992 and a quality standards manual in September 1993. Despite these steps, the implementation of a quality program was still fragmented, and QTC reported that Alyeska’s quality program was dysfunctional. Specifically, according to QTC, Alyeska’s management had a reactive mindset and did not support its quality program. In addition, QTC concluded that the program lacked the organizational authority and independence to protect public health and safety, could not show that Alyeska met basic commitments to the regulatory requirements set out and agreed to in its quality program manual, and lacked the key components needed for a quality program to function.

Alyeska has since taken or is in the process of taking a number of steps to change the quality program from top to bottom. These steps have included ways to clearly establish management’s support for an effective quality program; reorganize the quality program to increase its authority, independence, and resources; provide a system for documenting compliance with regulatory requirements; develop essential components of a quality program; and put procedures in place to make the program work. Alyeska’s quality policy states:

“A comprehensive quality program is crucial to assure management and the public that the Alyeska Pipeline Service Company is operating with integrity (i.e. in a manner that is safe, environmentally sound, and reliable) and in compliance with all regulatory, legal and Company requirements.”

The quality element includes four expectations:
• A comprehensive, documented quality program is understood and complied with by employees.
• The effectiveness of the quality program is periodically and objectively assessed and the program is continuously improved.
• Corrective and preventive actions are identified, documented, implemented, and tracked to completion.
• Systems are established to identify, evaluate, and resolve the quality concerns of employees and contractors. 
The second component provides a defined process for periodic evaluations of the extent to which the expectations are being met. The process provides for three levels of assessment: self-assessments, at least annually, by the local organization to ensure regulatory, legal, and company policy compliance; functional assessments, at 2- or 3-year intervals, by qualified company personnel to assess key areas of AIMS, especially relating to compliance; and independent assessments by skilled company personnel or outside experts to assess compliance with AIMS. Independent assessments will begin in 1996 and will cover the entire company every 3 years, one-third at a time.

The first round of self-assessments was completed in November 1994. The AIMS Coordination Leader told us that in the first round of assessments, the various units averaged a score of about 1.5 out of a possible 4. He added that as a result of the assessments, each of the 23 units assessed developed an improvement plan to address the most significant action items identified in the assessments. In total, the plans cover about 500 items. The plans call for completing action on these items by the end of 1995. In turn, the employee incentive program ties employees’ compensation to completing these plans in 1995.

QTC reported that Alyeska’s quality assurance group, which conducted audits and surveillance, reported to the Vice President of Administration, who had no prior experience in any phase of a quality assurance program. In addition, the Quality Services group, which provided inspection services for pipeline and terminal operations, reported to the Vice President of Engineering and Projects and thus, according to QTC, lacked the independence and the required freedom to document conditions adverse to quality. 
Nationally and internationally recognized guidance on the development of quality organizations emphasizes the importance of these organizations having the organizational authority, responsibility, and freedom to (1) identify problems affecting quality, (2) report problems and recommend corrective actions, (3) control processing until nonconforming conditions are corrected, and (4) verify corrective actions.

In response to QTC’s finding, in early 1994 Alyeska reorganized its quality program. It combined the audits and surveillance group and the inspections services group into a single organization, the Quality Department, headed by the Quality Department Manager. Alyeska also relocated the department under a newly created Vice President for Quality, Environment, and Safety, who, organizationally, is on the same level as the Vice President for Operations. In June 1995, about 31 staff were in the Quality Department: about 14 in Audits and Surveillance, 11 in Quality Services, and 6 in Management and Administrative Support. In addition, 18 other staff perform quality functions, including nine quality generalists assigned to the business units. The 1995 quality staffing level of 49 represents an increase of 15 from the 1993 staffing level of 34. The staff resources devoted to the quality program are temporarily augmented by about 37 staff who are dedicated to short-term projects and will be phased out in 1995 as projects wrap up.

After we had completed our field work, on June 1, 1995, the President of Alyeska advised us that Alyeska plans to further revise the organization of its quality program. The program’s reorganization will take place in two stages. First, beginning in July 1995, the position of Vice President for Quality, Environment, and Safety will be abolished. The environment and safety functions will be assigned to another Vice President. 
The quality program, with the exception of audit and surveillance, will be assigned to a newly created Operations System Integrity Department under the Vice President for Operations. The audit and surveillance function will be transferred to the Vice President for Business Practices, who is also responsible for Alyeska’s audit function and the Employee Concerns Program. Alyeska officials believe that placing the audit and surveillance function in a separate group from Operations will enable it to retain its independence to report on conditions that may be adverse to quality. The inspection function will be reassigned from Quality, Environment, and Safety to the Operations System Integrity Department within the Operations group and eventually reassigned to the Maintenance and Modification Department within Operations and the Business Units during the second stage of reorganization. Although this reassignment will once again have the inspection function under the persons responsible for transporting oil and maintaining the pipeline—the Vice President for Operations and the Business Unit Leaders—Alyeska officials believe that the quality program will be better received and evolve into a continuous improvement mode more quickly if the personnel responsible for operating the pipeline take ownership of the quality program rather than have a separate unit outside of Operations attempt to instill quality in the way Operations personnel do their work. According to Alyeska officials, steps are being taken to ensure that the inspection function will continue to be effective. In the proposed reorganization, the inspection function and the project management/facility operations functions will remain on separate reporting paths within Operations. In addition, the Operations System Integrity Manager is establishing quality councils, and inspectors will be invited to participate in the councils along with Alyeska employees. 
These councils are being established to give front-line workers a forum for suggesting improvements to the quality program or raising quality-related issues and problems. In addition, the officials told us that the Ombudsman Program and the soon-to-be-implemented Employee Concerns Program, which are located outside of Operations, will provide a relief valve in the event that quality-related issues are not appropriately handled by line organizations. Alyeska plans to review and benchmark these changes against other companies and industries late in 1995 to ensure that this is the most effective approach. In our opinion, the effectiveness of these changes will become clearer over time.

QTC also found that the TAPS project failed to ensure compliance with agreements, codes, standards, and government regulations because Alyeska failed to fully identify its regulatory requirements and incorporate those requirements into operating and maintenance implementing procedures. QTC noted that this failure by Alyeska to implement its own policy of regulatory compliance dates back to the original issuance of the Quality Assurance Manual, Revision 0, dated June 7, 1977.

In response to QTC’s finding, Alyeska is establishing the Alyeska Regulatory Compliance System (ARCS) to help ensure that commitments, such as the requirement to comply with the federal and state right-of-way agreements, and affected documents, such as the procedures for implementing the agreement, are identified and updated in a timely fashion. The system will contain each requirement, such as a law or regulation; interpret its specific relevance to Alyeska; link it to a principal implementing procedure; identify the organization responsible for implementing the procedure; identify implementing documents such as maintenance procedures; and specify any training requirements. 
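The kind of record ARCS is described as maintaining, linking each legal requirement to its implementing procedure, responsible organization, implementing documents, and training needs, can be sketched as a simple data structure. The field names and the sample entry below are hypothetical illustrations, not ARCS’s actual design.

```python
# Hypothetical sketch of an ARCS-style compliance record.
# Field names and the sample entry are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComplianceRequirement:
    requirement: str              # the law, regulation, or agreement clause
    relevance: str                # its specific relevance to the company
    implementing_procedure: str   # principal implementing procedure
    responsible_org: str          # organization that owns the procedure
    implementing_docs: List[str] = field(default_factory=list)
    training_required: bool = False

req = ComplianceRequirement(
    requirement="Federal right-of-way agreement, quality stipulation (hypothetical)",
    relevance="Requires a comprehensive quality program for pipeline operations",
    implementing_procedure="Quality Program Manual (hypothetical reference)",
    responsible_org="Quality Department",
    implementing_docs=["maintenance procedure MP-123 (hypothetical)"],
    training_required=True,
)
```

With each requirement held in one record, a change to a law or regulation can be traced directly to every procedure, document, and training obligation it affects.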
In October 1994, Alyeska created the Information Management Service Unit to implement this tracking system and several related programs. The requirements were divided into eight subject areas, including environment as well as fire safety and industrial hygiene. The process of identifying the regulatory requirements has been completed for six of the eight subject areas in the tracking system. The Service Unit plans to partially implement ARCS in the fourth quarter of 1995 for these six areas. Alyeska plans to fully implement the tracking system around December 1996. At that time, it is expected that the required data will have been developed for the remaining two areas (Oil Spill Contingency Planning and Codes and Standards) and that safe maintenance procedures will have been completed.

QTC reported that program components key to an effective quality program were either not functioning or were missing altogether. The document control process had broken down to the extent that no assurance could be made that approved drawings accurately reflected the equipment in place or its operation. Nor was there a master list of structures, systems, and components that should be included in a quality program or documentation indicating the importance of the equipment to the pipeline’s integrity. In addition, cause and corrective action programs were not in place to learn from malfunctions and maintenance histories. Alyeska is correcting these deficiencies. 
It is
• developing a master equipment list to identify the structures, systems, and components to be included in the TAPS quality program and developing a procedure for documenting and controlling the list;
• developing a document establishing the importance of various equipment to ensure the integrity of TAPS and thus the extent to which elements of the quality program apply to the equipment;
• developing a risk-based cause and corrective action program that will use maintenance histories to improve future reliability; and
• updating the “as-built” documentation to ensure that drawings of all TAPS’ structures, systems, and components reflect current configurations, performing a limited functional check to ensure that the selected equipment operates as provided in specifications, and developing implementing procedures to ensure that the documentation and conditions of TAPS’ equipment and facilities remain current and consistent.

QTC reported that Alyeska’s quality program, as described in various quality manuals, has been inadequate as a total approach to quality and reported that the manuals, as defined, have not been implemented. QTC’s Phase II report identified actions for Alyeska to consider in developing its revised quality program. Alyeska considered and incorporated almost all of these actions, and in May 1995, JPO conditionally approved Alyeska’s revised program. The Quality Program Manual establishes Alyeska’s overall quality program and policies. The implementing procedures address various areas, including ones that QTC identified as lacking: the Regulatory Compliance Matrix, Master Equipment List, Trend Analysis, and Causal Factor (root cause) Analysis. After a period of orientation and training, the revised quality program will go into effect on June 15, 1995, for all new work.

As with other areas, the actions required to improve the quality program have proven to be more difficult than Alyeska originally expected. 
Thus, the quality program will not be fully implemented until at least December 1996, although key components are in place now and others are expected to be put into place during the latter half of 1995. Alyeska’s response to QTC’s recommendation for a regulatory compliance system is one example in which progress is slower than anticipated. Although Alyeska’s 1994 plans called for implementing the Alyeska Regulatory Compliance System in the first quarter of 1995, the system will instead be implemented in stages. The system will be partially implemented in the fourth quarter of 1995, when time is available at the terminal and pump stations to provide needed training and when the communications upgrade, called the wide-area network, which will enhance computer communications between field operations and Anchorage, is completed. Full implementation of ARCS is scheduled to be completed in December 1996, when the two remaining subject areas (Oil Spill Contingency Planning and Codes and Standards) have developed needed information and when the maintenance organization completes its program for developing the required procedures for maintaining equipment to required standards.

Maintenance designed to keep plant and equipment in good operating condition is generally achieved by identifying all of the structures, systems, and components requiring maintenance (a master equipment list) and developing schedules and criteria for when maintenance is to be performed. QTC found that Alyeska’s program for maintaining the pipeline’s components (such as the pipe, pumps, valves, and electrical equipment) lacked a comprehensive approach for analyzing and “trending” the condition of this equipment or for using such information as a means of establishing a maintenance program that is predictive in nature. Alyeska had no master equipment list and no implementing procedures for a comprehensive maintenance program. 
QTC found that Alyeska’s individual maintenance procedures lacked clarity, specificity, and technical validity. For example, the procedures did not specifically call for the types of parts, materials, and tools to be used in a procedure; called for incorrect parts, materials, or tools to be used; or called for incomplete, inadequate, or inaccurate steps to perform preventive maintenance.

Alyeska has taken and plans to take a number of steps, such as developing the master equipment list discussed under the quality program, to correct the maintenance program deficiencies identified by QTC. It has also begun developing a revised maintenance program that will incorporate the results of its corrective actions. Together, these actions, when completed, should provide a basis for improving maintenance and for creating a predictive maintenance program that can better focus maintenance resources where they are (1) most needed to ensure safety and pipeline integrity and (2) most cost-effective. Completion of all necessary steps is not likely until mid-1996 at the earliest.

Alyeska is developing a master equipment list to identify equipment needing maintenance and an integrity list that will relate the importance of this equipment to the integrity of the pipeline. The quality program requires greater focus on the equipment that is more critical to the safety and integrity of the pipeline. The equipment list is being developed as part of the as-built project and functional-check processes described in the earlier section on quality. The integrity list for the level-1 items was completed in November 1994, and the list is scheduled to be completed for the level-2, level-3, and nonintegrity items in the fourth quarter of 1995. The initial as-built project for the 12,000 to 14,000 most critical drawings is scheduled to be completed in June 1995; a supplemental project for 5,000 to 6,000 less critical drawings is scheduled for completion in June 1996. 
The functional-check project is associated with the as-built project and is also a two-phase project. Each phase will be completed before the corresponding phase of the as-built project. The master equipment list is scheduled to be completed about the end of 1995.

Alyeska is developing an Integrated Maintenance Management System (IMMS) to enable it to track and learn from the maintenance histories of key equipment throughout the pipeline. The information derived from maintenance histories can provide a basis for improved reliability and, possibly, reduced maintenance costs. A basic element of the system is a software system (called PassPort) that will allow Alyeska to collect and analyze maintenance histories on key equipment. The first stage of this system, the automated work order system, began testing at a pump station in spring 1995 and will come on line during the third quarter of 1995. Alyeska is also upgrading its wide-area network communications link between the pipeline’s facilities to allow the system to acquire and track maintenance histories from the equipment at the terminal and the pump stations. The computer-supported maintenance system and the related communications upgrade will provide a basis for tracking the histories of all integrity-related equipment on the pipeline. Alyeska’s plans call for completing the upgraded communications system in November 1995.

Alyeska describes the maintenance system it is developing as a risk-based maintenance program, which provides for (1) learning from maintenance experience that is collected and tracked in the PassPort data base and (2) using predictive maintenance procedures to improve reliability and reduce costs. Without such a program, resources could be inefficiently used to maintain equipment whose failure will have little impact on operations or for which preventive maintenance is not economical; instead, it would be more cost-effective to operate this equipment until it fails and then replace it. 
On the other hand, inadequate maintenance could be performed on equipment where the likelihood of failure and/or the consequence of failure warrant more extensive maintenance, according to Alyeska maintenance officials. In a risk-based maintenance program, maintenance is performed on a schedule determined by both the consequences of failure and the likelihood of failure. The risk assessment element is scheduled to be implemented in late 1995 and early 1996 as training is provided. Predictive maintenance requires (1) the determination of conditions, such as increasing vibration, temperature, or wear, that will indicate when maintenance is needed in time to prevent equipment failure and (2) a monitoring program to identify those predetermined conditions. The PassPort system will help identify the conditions that call for maintenance, and the risk analysis will identify the equipment important enough to make monitoring worth the cost. Alyeska is developing maintenance procedures, called safe operating and safe maintenance procedures, describing how to prepare equipment for maintenance and how to perform maintenance on pipeline equipment. The completion of this program has stretched into 1996 because Alyeska is developing the criteria for identifying which equipment needs to have maintenance procedures developed. The contractor had developed over 600 procedures at a pump station and the terminal before the project was put on hold. The contractor, as directed, was developing procedures for items at equipment locations that are identified by tag number. While the tag numbers are unique, the equipment with the tag numbers is not. Thus, this method resulted in many duplicate procedures being written for the same equipment. A different system, based on component identification and a judgmental determination of importance, is being developed. The new approach will reduce the number of procedures that have to be developed and updated as equipment changes are made over time. 
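The risk-based scheduling idea described above, in which maintenance frequency is driven by both the likelihood and the consequence of failure, can be sketched as follows. The scales, thresholds, and policy labels are hypothetical illustrations, not Alyeska’s actual criteria.

```python
# Illustrative sketch of risk-based maintenance scheduling.
# The 1-5 scales and the thresholds below are invented for illustration.

def risk_score(likelihood, consequence):
    """Both inputs on a 1-5 scale; a higher product means higher risk."""
    return likelihood * consequence

def maintenance_policy(likelihood, consequence):
    """Map a risk score to a maintenance regime (hypothetical cutoffs)."""
    score = risk_score(likelihood, consequence)
    if score >= 15:
        return "monthly preventive/predictive maintenance"
    if score >= 6:
        return "annual preventive maintenance"
    # For low-risk equipment, preventive maintenance is not economical.
    return "run to failure, then replace"

policy = maintenance_policy(likelihood=4, consequence=5)  # a high-risk item
```

Under such a rule, a mainline pump whose failure would halt oil movement lands in the most intensive regime, while a low-consequence, easily replaced component is simply run to failure, which is the resource-allocation logic the report describes.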
Completion of this procedures-development effort is now scheduled for 1996.

Alyeska is taking steps that, when completed and fully implemented, should correct the problems QTC identified with electrical integrity, quality, and maintenance. However, the process for all three is taking longer than planned. Alyeska's efforts in these areas have been affected by the complexity and breadth of the work to be done. Considerable time will be needed before the success of these efforts can be fully assessed, particularly for the quality program, which is undergoing continuous reorganization. In addition, once the corrective measures are addressed, implementing them over the long term will require a continuing commitment of resources, as discussed in the next chapter.

Effective oversight is a key component of ensuring safe pipeline operations. Although federal and state regulators made substantial attempts after 1990 to better coordinate their efforts, significant problems with regulatory effectiveness were still being pointed out by outside reviews as recently as 1994. The Joint Pipeline Office is addressing these problems. For example, it has strengthened its regulatory staff and is reorganizing its monitoring program to address prior limitations. These developments are encouraging signs that the regulatory program is continuing to improve.

In a 1991 review of TAPS oversight, we concluded that the existing form of oversight did not provide for effective monitoring of TAPS' operations. The five principal federal and state regulatory agencies did not have a systematic, disciplined, and coordinated approach for regulating TAPS. In fact, BLM officials told us they were not regulators. Instead, they largely relied on Alyeska to police itself.
We also found that the Exxon Valdez oil spill and the discovery of corrosion in the pipeline in 1989 had been an impetus for the regulators to reevaluate their roles. This reexamination led to a 1990 decision to develop JPO. We concluded that the establishment of JPO was a positive step toward better regulation. During the next several years, the regulatory agencies gradually increased their participation in JPO. When we issued our 1991 report, 6 of the 12 agencies with significant jurisdiction over TAPS’ operations had agreed to participate in JPO. By 1994, 11 of the 12 agencies had signed an agreement to support JPO and to work cooperatively to protect public safety, the environment, and the integrity of TAPS. Similarly, they increased the staffing committed to JPO from a skeletal staff to 57 employees by 1993. Hearings held in July 1993 by the Subcommittee on Oversight and Investigations, House Committee on Energy and Commerce, provided indications to BLM that JPO’s efforts to regulate TAPS to date were not adequate and that further action was needed to improve JPO’s regulatory oversight of TAPS. In response to these hearings, the Director of BLM clarified BLM’s authority in relation to the other TAPS regulators. He testified that BLM not only would exercise its authority over federal lands but, as lead agency of JPO, would invoke its authority consistent with the TAPS Authorization Act to carry out thorough pipeline oversight. One of BLM’s first actions was to contract with QTC. The 1993 QTC report provided a stark picture demonstrating that Alyeska and its regulators still had a considerable distance to go in ensuring the integrity of the pipeline’s operations. Although the QTC report did not directly address how effectively regulators were doing their jobs, QTC’s findings demonstrated that JPO’s efforts to date had not been sufficient to identify major problems and ensure their correction. 
In response, JPO, in early 1994, selected Booz-Allen & Hamilton, an independent consulting firm, to assess its monitoring and inspection program. In its June 1994 final report on a comprehensive monitoring program for JPO, Booz-Allen concluded that JPO was not effectively addressing the prevention of pipeline hazards. The report stated that closely monitoring Alyeska's maintenance, quality assurance, and configuration management could have precluded most of the findings in QTC's audit. Booz-Allen concluded that for JPO to be successful in meeting its responsibility for TAPS oversight, it needed a new model for monitoring TAPS. This model would place more emphasis on identifying potential hazards and addressing them rather than waiting to detect and mitigate hazards that had already occurred. (In placing greater emphasis on prevention, however, regulatory activities would still address the monitoring of compliance and emergency response.) Booz-Allen found that JPO needed to make several changes to shift to such a model:
• Monitoring risk management in nine major TAPS' process areas—quality assurance, safety, configuration management, operations, maintenance, risk determination, environmental protection, project design, and project performance. JPO officials said that in the past, they had focused only on the latter three areas.
• Performing the monitoring work in a multidisciplinary team organized under a single director.
• Collecting far more information than in the past, structuring it for management decision-making and action, and making it available for outside audits, interests, and inquiries.

Our most recent work indicates that JPO is making an effort to improve its oversight. Since our earlier work, JPO has changed and now recognizes its regulatory function.
In addition, JPO has
• expanded its staff, supplemented by contractors, to handle oversight;
• established a project group to monitor Alyeska's response to the QTC; and
• begun to reorganize and carry out other steps needed to implement the Booz-Allen model for comprehensive monitoring.

Funding levels for JPO's operations increased from about $3.5 million in 1993 to more than $5 million for fiscal year 1995. Under the agreements authorizing the pipeline, Alyeska is obligated to pay BLM's costs for oversight activities related to TAPS. In 1995, BLM estimates its portion of these costs will be $3.5 million. (Although JPO's operations are primarily focused on TAPS, it does monitor other pipelines in Alaska and conduct other related activities, such as reviewing and issuing permits for pipelines being considered for construction.) In addition, from February 1994 through March 1995, Alyeska paid $9.2 million for TAPS-related activities by JPO consultants and other associated contract costs; by June 1995, Alyeska's payments for these costs will reach $12 million.

Alyeska also agreed in September 1990 to pay a portion of ADNR's costs for monitoring TAPS. In 1995, Alyeska will contribute up to $800,000 of the expected $1 million for monitoring TAPS. JPO officials advised us that the state sets a ceiling on how much ADNR can spend, provided ADNR raises the money through agreements, such as the one it has with Alyeska. ADNR's remaining funds come from other agreements: for example, it also receives money from rents on rights-of-way from owners of common carrier pipelines and from sales of gravel from the rights-of-way. It expects to raise $335,000 in rents and $100,000 from gravel sales in 1995. ADNR's authorized ceiling for 1995 is $1.7 million, but it will raise only about $1.3 million through its various agreements. Thus, its budgeted spending for JPO activities in 1995 will be about $1.3 million.
Under these increased funding levels, overall staffing at JPO has grown from 57 positions in 1993 to 84 positions as of April 1995. Although JPO officials told us the staffing level was not adequate, the additional support JPO needs is being provided by contractors, such as Stone & Webster Engineering Corporation, an independent engineering consulting firm. JPO officials said that because Alyeska has not established all of its programs, such as maintenance, JPO did not know whether its noncontractor staffing level would be sufficient to address its regulatory responsibilities in the future. JPO will assign five Stone & Webster employees to its Operations Branch for audit item resolution through December 1995.

Consistent with its more active monitoring role, JPO in 1994 established a project group to oversee Alyeska's correction of action items. These staff members perform such functions as approving priorities for action items, coordinating the review effort, reviewing special studies, and approving corrective action plans. To supplement this staff, JPO is working with Stone & Webster. JPO used about 45 Stone & Webster staff for such tasks as reviewing corrective action plans, verifying corrective action on the ground, maintaining a computer data base for tracking audit action items, and performing special investigations. JPO has also hired another engineering consultant to monitor how Alyeska closes the electrical deficiencies in the ANSC project. While the former staff of the project group still spend the majority of their time on audit items, JPO has integrated them into its new organization described below.

Shortly after receiving Booz-Allen's recommendations for a new monitoring model for TAPS, JPO began to reorganize to put the model into effect. The Booz-Allen study called for establishing a centralized monitoring office with four oversight groups: quality assurance, pipeline surveillance, engineering and projects, and right-of-way administration.
Each of the four groups is in the process of developing detailed monitoring programs that are based on the consultant's recommendations. Table 4.1 shows each office's size, primary role, and activities to date. Because much of this effort is still far from complete, it is too early to determine whether it will be successful. However, JPO is currently conducting assessments and surveillance activities under the Comprehensive Monitoring Program (CMP). Significant program reviews, which aggregate observations from JPO's assessments and surveillance and factor in input from employees' concerns, audit items' progress, and Alyeska's own quality reviews, will be completed through 1996; the initial emphasis will be on quality, operations, and maintenance. Configuration management and safety, two additional CMP focus areas, are currently undergoing review by JPO; reports are due by the end of 1995. JPO expects program reviews of significant depth to be completed under CMP by the end of 1996.

Besides the 31 positions in the operations branch, JPO has 29 other staff positions whose occupants are primarily involved in monitoring other activities, such as other pipelines, but also assist in monitoring TAPS. Of these, 26 are with the Alaska Department of Environmental Conservation, 1 is with DOT's Office of Pipeline Safety, and 1 is with EPA. These three agencies, while locating their staff at JPO, have elected to retain final responsibility for carrying out their regulatory functions. The one remaining agency is the Alaska Office of Management and Budget, Division of Governmental Coordination, which coordinates coastal consistency reviews; it has one staff member at JPO.

Like Alyeska, JPO is in the process of changing its approach to ensuring the safe operation of TAPS. At this point, it is difficult to provide an assessment of how successful JPO has been.
Taken together, however, the efforts set in motion over the past 2 years demonstrate that JPO is making a concerted effort to improve. JPO's ultimate success, like Alyeska's, depends partly on ensuring that its changes are fundamental enough not only to resolve existing problems with TAPS, but also to keep them from recurring. In the following chapter, we address the challenges that JPO and Alyeska face in this area.

Audits and studies of TAPS have pointed to a common underlying cause for past problems: Both Alyeska and JPO had an operating philosophy based heavily on reacting to problems rather than on ensuring quality and minimizing the chance that problems would occur. The QTC study called Alyeska management's mindset "the greatest non-hardware-related imminent threat" to the pipeline, and the Booz-Allen study found that JPO needed to substantially transform its mindset in connection with oversight. Without fundamentally changing their approach to quality and prevention, which is the key to correcting past problems, Alyeska and JPO cannot ensure that problems will not happen again. Alyeska and JPO have developed policies that reflect this change, and both organizations have taken steps to incorporate these changes into their day-to-day work. For Alyeska, the success of this effort may depend on its ability to establish a new mindset throughout the entire organization. For JPO, the main challenge may be maintaining a stable resource base—funding and staff—over the long term for its redefined operations.

Alyeska and JPO are partway through an ambitious attempt to resolve problems with the operation and oversight of TAPS. Their progress shows reason for cautious optimism on the basis of the substantial amount of work completed. However, tackling some tasks is proving to be more complex, time-consuming, and difficult than initially expected, and the real key to improved operation will be the implementation of many of these actions over the long term.
“not only failed to prevent or correct these mid-level management failures, but also has failed even to recognize the need to do so. Upper management has demonstrated a tolerance for negative practices, such as harassment and intimidation of quality control inspectors and others, and has failed to take affirmative actions needed to establish the integrity of the operation.” Alyeska does not dispute QTC’s characterization of past practices by some managers and supervisors. In an April 1994 briefing describing the organizational problems outlined in the QTC report, Alyeska’s human resources department concluded that the company’s culture was typified by emphasizing oil transportation above all else. In addition, Alyeska was hiding problems and taking a “shoot-the-messenger” approach when problems were surfaced. It also maintained adversarial relations with regulators, pipeline owners, and contractors. Alyeska is taking steps to change the company mindset, but the changes will take some time to complete and will be difficult to implement. Part of the change in mindset has come as a result of actions taken by Alyeska’s seven owner companies. In the past, according to owner company executives with whom we spoke, Alyeska’s accountability was somewhat blurred by the working relationship between Alyeska and the owner companies. The Owners Committee, which oversaw Alyeska’s operations through quarterly meetings, was supplemented with 11 subcommittees covering such matters as law, budget, audit, accounting, and tax. These subcommittees were often heavily involved in management decisions. As a result, the executives said, Alyeska’s accountability may have become less clear. Beginning in the fourth quarter of 1993, Alyeska and the owner companies took action to clarify expectations. 
An expectations manual was created, specifying which areas were Alyeska's autonomous responsibility, which authorities required owner notification but were delegated to Alyeska, and which areas the owner companies reserved for themselves. With the exception of the audit subcommittee, the subcommittee structure was dissolved and replaced by an approach in which joint task forces were created to deal with specific issues as they developed. The owners created a performance management contract that specified the actions and standards to which Alyeska management would be held. Among other things, this contract calls for completing action on at least 85 percent of the action items in the ACT data base by the end of 1995. According to three owner company presidents representing the Owners Committee, the committee reviews progress on the contract each quarter and supplements this review with monthly meetings with Alyeska management.

Alyeska's top management has a new policy for corporate behavior that encourages an open and more quality-oriented approach to operations. For example, on October 17, 1994, Alyeska's president wrote a memorandum to all staff that reemphasized the objectives of the new policy. Alyeska revised and supplemented its $2.5 million baseline training program to support the transition to its new organizational culture. It spent an additional $2.6 million in 1994 and plans to spend $2 million more in each of 1995 and 1996 on further training. Alyeska has developed and administered training aimed at eliminating actions that employees perceived as intimidating or as preventing them from expressing their concerns. Alyeska provided training to discourage intimidation and encourage open communication to about 85 percent of its employees.
It also provided training, which is aimed in part at assessing and improving the extent to which supervisors promote teamwork and treat employees' concerns fairly, to about 90 percent of those supervising three or more people. Efforts are also under way to improve an employee concerns program by making it more accessible, more reliable, and more trusted by employees. According to Alyeska officials, these and other actions are intended to build a new culture in which employees feel safe in taking appropriate action, inflexibility or inaction is not accepted, and people take pride in their work.

In addition, Alyeska has surveyed employees to measure their attitudes and degree of satisfaction and plans to conduct follow-on surveys. A survey of 1,225 employees, conducted in March and April 1994 by an outside consulting firm, disclosed that the majority of the Alyeska employees responding felt that they were encouraged to report bad news as well as good news. However, 25 percent believed that bad news would not be received positively and that retribution or no corrective action was likely. Another survey, conducted in June 1994 for Alyeska by a contractor, indicated that some of the 200 contract employees surveyed feared they would be fired if they identified problems. The results of these surveys suggest that a complete changeover in Alyeska's culture and employees' attitudes may take additional time and effort.

Another way in which Alyeska is attempting to change its mindset is to create more stability—and therefore more accountability—in the ranks of upper management. Alyeska's upper-level management positions have traditionally been filled by managers loaned from the owner companies for short periods—usually 2 years. This situation has contributed to frequent turnover in senior positions and an emphasis on short-term production goals, according to JPO officials.
Alyeska’s owner companies have made several commitments to change the loaned-executive policy in the past year. First, they adopted a policy of reducing the number of loaned executives by 50 percent from 1993 levels by the end of 1997. Second, they called for filling positions with the best qualified person whether the person was employed by an owner company, Alyeska itself, or an outside source. Third, in those cases in which positions were to be filled by loaned executives, they called for lengthening the time of the assignment to at least 3 years. At the level of day-to-day operations, the changes are reflected by the new quality and maintenance programs. Alyeska’s senior management believes that these new systems can provide processes and procedures that will outlive management turnover and bring more long-term stability and accountability. As we discussed in chapter 3, Alyeska’s efforts to implement these systems, if carried through to completion, do appear substantive enough to bring about significant improvement. These actions notwithstanding, it will take some time to change Alyeska’s culture. For example, in the summer of 1994 there were at least three instances when Alyeska supervisors or managers tried to hide problems or punish employees for reporting “bad news.” However, in each case, when Alyeska’s top management was made aware of the incident, it took action to resolve the problem identified by the employee and, where appropriate, followed up with counseling and/or disciplinary action for the supervisor. As discussed in chapter 4, past studies have pointed to the need for JPO to change its regulatory role substantially. JPO is attempting to change its philosophy, organization, and monitoring techniques. Its goal is to be a more sophisticated and technically trained regulatory/compliance organization capable of independently reviewing and analyzing TAPS’ plans, design, and systems. 
JPO’s operating philosophy is intended to be one of quality management, which emphasizes preventing rather than reacting to problems through closer study and knowledge of TAPS’ systems and processes.

As discussed throughout the report, as we completed our work, Alyeska and JPO were still in the process of taking action to correct deficiencies and improve performance. We remain encouraged by the level of effort expended so far by Alyeska and JPO to remove the underlying causes of problems with the operation and oversight of TAPS. If the actions under way are completed and fully implemented, we believe they will provide a basis not only for fixing TAPS’ current problems, but also for helping to ensure that they will not recur. However, because much work remains to be accomplished, the full effectiveness of Alyeska’s and JPO’s actions cannot be assessed in the short term and will be largely dependent on the following:
• Resolving the 4,920 action items in the ACT data base. Progress reports generated from the ACT data base provide JPO with updated information on Alyeska’s progress. In turn, JPO has summarized Alyeska’s progress in its annual report. These annual reports are required to be provided to congressional oversight committees. Information from the ACT data base and the annual report can provide those responsible for overseeing TAPS with the data needed to assess what progress is being made.
• Alyeska’s following through on its commitment to implement quality and maintenance programs. Alyeska has the primary responsibility for ensuring that the pipeline operates in a safe, environmentally responsible manner. The actions planned by Alyeska to improve its quality and maintenance programs, if implemented, will help ensure that this improvement occurs. The key to this effort is for Alyeska to create and sustain a commitment to quality throughout its organization.
• Long-term support for JPO’s oversight responsibilities.
Strong, effective oversight of TAPS by JPO is critical for verifying that Alyeska and the owners fulfill their responsibility to resolve all TAPS’ deficiencies as quickly and effectively as possible and, more importantly, for assuring the public over the long term that Alyeska operates the pipeline in a manner that meets the right-of-way requirements for a safe, environmentally responsible operation. JPO’s ability to provide effective regulatory oversight will depend on having adequate funds and staff. Funding from Alyeska provides nearly the entire financial foundation for JPO’s operations. As for JPO’s staffing, BLM provides almost 45 percent of the staff positions; nearly all of the remainder comes from the state. Over the long term, as pipeline throughput decreases, Alyeska is likely to experience increasing pressure to reduce its costs, and BLM officials told us that downsizing at Interior eventually may put pressure on JPO’s staffing levels as well. Such pressures on JPO’s budget and staff could affect JPO’s ability to be an effective regulator.
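The kind of progress reporting the report describes for the ACT data base, counting closed action items against a completion target such as the owners' 85-percent goal, can be sketched as follows. The (item, status) record format and the sample items are hypothetical; the actual ACT schema is not described in this report.

```python
# Illustrative action-item progress report. The record format and
# statuses are invented assumptions, not the real ACT data base.
from collections import Counter


def progress_report(items, target=0.85):
    """Summarize closure progress against a completion target.

    items: iterable of (item_id, status) pairs, status "open" or "closed".
    """
    counts = Counter(status for _, status in items)
    closed = counts.get("closed", 0)
    total = sum(counts.values())
    pct = closed / total if total else 0.0
    return {"total": total, "closed": closed,
            "pct_closed": round(pct, 3), "meets_target": pct >= target}


items = [("E-101", "closed"), ("E-102", "closed"),
         ("Q-201", "open"), ("M-301", "closed")]
print(progress_report(items))
```

On these four sample items the report would show 3 of 4 closed (75 percent), short of an 85-percent target, which is the same comparison the Owners Committee's quarterly reviews would make at full scale.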
Pursuant to a congressional request, GAO provided information on the progress made in correcting deficiencies in the operation and management of the Trans-Alaska Pipeline System (TAPS), focusing on: (1) whether the planned corrective actions will address deficiencies in the pipeline's electrical systems, quality, and preventive maintenance; (2) whether regulators are taking action to improve oversight of the pipeline; and (3) the root causes of pipeline deficiencies. GAO found that: (1) the pipeline contractor has corrected about 62 percent of the almost 5,000 identified deficiencies as of April 1995, but it does not expect to be finished until the end of 1996, 2 years later than it had originally planned; (2) the contractor has corrected most electrical problems, focused management attention on the quality program, and is overhauling its maintenance program; (3) if the contractor completes actions to address these deficiencies, the TAPS problems should be corrected; (4) pipeline regulators are making a concerted effort to increase staff and reorganize to strengthen their focus on monitoring contractor operations; and (5) the root causes of the pipeline's deficiencies include the contractor's philosophy of reacting to problems rather than conducting programs aimed at prevention and early detection and regulators' inadequate oversight of contractor operations.
As shown in figure 1, biosurveillance is a concept that emerged in response to increased concern about biological threats from emerging infectious diseases and bioterrorism. Biosurveillance is carried out by and depends on a wide range of dispersed entities, including state, tribal, local, and insular jurisdictions. As we reported in June 2010, because of the vast array of activities and entities associated with effective biosurveillance, ongoing interagency and intergovernmental collaboration is crucial.

The backbone of biosurveillance is traditional disease-surveillance systems. Traditional disease-surveillance systems are designed to collect information on the health of humans and animals to support a variety of public-welfare and economic goals. These systems support biosurveillance efforts by recording national health and disease trends and providing specific information about the scope and projection of outbreaks to inform response. State and local public-health agencies have the authority and responsibility for carrying out most public-health actions, including disease surveillance and response to public-health emergencies in their jurisdictions. State laws or regulations mandate disease reporting at the state and local level, but state-based systems are coordinated at the national level by a voluntary set of reporting criteria and case definitions. For example, the mainstay of traditional disease surveillance in humans is the National Notifiable Diseases Surveillance System, through which state public-health departments voluntarily report their notifiable disease data to CDC. The National Notifiable Disease List includes those diseases that CDC and state public-health officials have identified as posing a serious public-health risk for which case reports would help inform prevention and control efforts.
Diseases on the nationally notifiable list range from sexually transmitted diseases, such as Human Immunodeficiency Virus and syphilis, to potential bioterrorism agents, such as anthrax and tularemia.

Similarly, to help protect the nation’s agricultural sector, USDA has routine reporting systems and disease-specific surveillance programs, which rely on state-collected data, for domesticated animals and some wildlife that can provide information to support the early detection goal of biosurveillance. Many states have a statutory or regulatory list of diseases that animal-health officials are required to report to the state departments of agriculture. State animal-health officials obtain information on the presence of specific, confirmed clinical diseases in the United States from multiple sources—including veterinary laboratories, public-health laboratories, and veterinarians—and report this information to USDA’s National Animal Health Reporting System (NAHRS). This system is designed to provide data from state animal-health departments on the presence or absence of confirmed World Organization for Animal Health reportable diseases in specific commercial livestock, poultry, and aquaculture species in the United States.

For wildlife, USDA’s Animal and Plant Health Inspection Service’s Wildlife Services division is charged with conducting surveillance of wildlife to detect zoonotic or other diseases that may pose threats to agriculture. The division’s National Wildlife Disease Program is charged with conducting routine surveillance for targeted diseases and responding to mortality and morbidity events, particularly those occurring near humans or livestock. The program has wildlife disease biologists in most states who work to coordinate with state, local, and tribal officials to conduct surveillance and respond to events. In addition, DOI’s U.S.
Geological Survey’s (USGS) National Wildlife Health Center is charged with addressing wildlife disease throughout the United States. This center provides disease diagnosis, field investigation, disease management and research, and training. It also maintains a database on disease findings in wild animals and on wildlife mortality events, although there is currently no national reporting system for wildlife diseases. Recognizing that human and animal diseases are interconnected, several organizations—including the American Medical Association, the American Veterinary Medical Association, USDA, and HHS—have taken steps to support the One Health concept, which is a worldwide strategy for expanding interdisciplinary collaboration and communications in all aspects of health care for humans and animals.

Disease-reporting systems help professionals to recognize unusual disease signals and analyze their meaning, but they generally have inherent limitations that affect the speed with which their results can be determined, communicated, and acted upon. Many surveillance programs incorporate other methods of surveillance that have the potential to augment and enhance the detection and situational-awareness benefits of traditional disease reporting. For example, syndromic surveillance uses health-related data collected before diagnosis to look for signals or clusters of similar illnesses that might indicate an outbreak. An example of syndromic surveillance data is prediagnostic health-related information, such as patients’ chief complaints recorded by hospital emergency room staff. However, we reported in September 2004 and November 2008 that the ability of syndromic surveillance to more rapidly detect emerging diseases or bioterror events has not yet been demonstrated. Another method used in disease surveillance efforts is sentinel surveillance, in which practitioners monitor for specific disease events in a targeted subset rather than an entire population.
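A minimal sketch of the syndromic-surveillance idea just described is to flag a day whose count of a prediagnostic syndrome is unusually high relative to a historical baseline. The three-standard-deviation rule and the sample counts below are illustrative assumptions, not an actual CDC algorithm.

```python
# Toy syndromic-surveillance signal detector: compare today's syndrome
# count against baseline mean + z * standard deviation. Thresholds and
# data are invented for illustration.
from statistics import mean, stdev


def flag_anomaly(baseline_counts, today_count, z_threshold=3.0):
    """Return True if today's count exceeds mean + z_threshold * stdev."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return today_count > mu + z_threshold * sigma


# Hypothetical daily ER chief complaints coded as influenza-like illness
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(flag_anomaly(baseline, 40))  # well above baseline -> True
print(flag_anomaly(baseline, 15))  # within normal range -> False
```

Real systems are more elaborate (seasonal adjustment, multiple syndromes, spatial clustering), but the core trade-off is the one the report notes: such signals can arrive before diagnoses do, at the cost of false alarms and unproven detection benefit.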
Sentinel surveillance can also promote early detection, for example, by monitoring sentinel chicken flocks and testing for the presence of antibodies to arboviruses, such as West Nile virus, which could be spread by mosquitoes to humans.

Numerous federal, state, local, and private-sector entities with responsibility for monitoring animal and human health have roles to play both in supporting traditional surveillance activities and in designing systems to focus specifically on enhancing detection and situational awareness. Conducting biosurveillance is a shared responsibility among multiple local, state, and federal agencies, as well as among professionals across various disciplines in state, tribal, local, and insular jurisdictions. However, there is variation in organization and structure among public-health, animal-health, and wildlife functions at the state, tribal, local, and insular levels. For example, as shown in figure 2, a state’s public-health structure may or may not be centralized. On the other hand, livestock and poultry health is largely centralized within state departments of agriculture, relying on accredited veterinarians across the state for detection. By contrast, wildlife disease surveillance largely lacks formal structure and depends on chance observations of unusual numbers of sick or dead wildlife being reported to state or local wildlife agencies. The exception is USDA’s National Wildlife Disease Program, which coordinates national surveillance and reporting of targeted diseases that may pose threats to human health or agricultural resources. Some of the nonfederal partners with key responsibilities in the biosurveillance enterprise are presented in table 1.

Tribal Jurisdictions. As of October 2010, there were 565 federally recognized tribes—340 in the continental United States and 225 in Alaska.
Federally recognized Indian tribes are Native American groups eligible for the special programs and services provided by the United States to Indians because of their status as Indians. Under the Indian Self-Determination and Education Assistance Act, as amended, federally recognized Indian tribes can enter into self-determination contracts or self-governance compacts with the federal government to take over administration of certain federal programs for Indians previously administered on their behalf by the Department of the Interior or HHS. The Bureau of Indian Affairs, within DOI, and the IHS, within HHS, are the primary agencies that operate Indian programs within those two departments. IHS is charged with providing health care to the approximately 1.9 million American Indians and Alaska Natives who are members or descendants of federally recognized tribes. These services are provided at federally or tribally operated health-care facilities, which receive IHS funding and are located in 12 geographic regions overseen by IHS area offices. These IHS-funded facilities vary in the services that they provide. For example, some facilities offer comprehensive hospital services, while others offer only primary-care services. Although American Indian tribes are sovereign entities, IHS facilities follow disease-reporting regulations and use disease-reporting channels for the state in which tribal patients geographically reside. For example, tribal patients who live within the boundaries of Utah, New Mexico, or Arizona could use the same IHS facility in Shiprock, New Mexico. If a patient whose tribal residence is geographically located in Arizona presents at the Shiprock facility with a disease that the state of Arizona has designated as reportable, IHS would report it to Arizona public health officials. Tribes that manage their own health services use the national notifiable disease reporting system.
Land-based agricultural resources are vital to the economic and social welfare of many tribes. The Intertribal Agriculture Council is an organization of tribal agriculture producers and conducts programs designed to further the goal of improving tribal agriculture by promoting the Indian use of Indian resources through contracts and cooperative agreements with federal agencies. Insular Jurisdictions. The United States has strategic and economic pacts with two jurisdictions in the Atlantic Ocean and six in the Pacific Basin. These jurisdictions are together referred to as insular areas and include the territories of American Samoa, Guam, and the U.S. Virgin Islands; the commonwealths of the Northern Mariana Islands and Puerto Rico; and the freely associated states of the Federated States of Micronesia, the Republic of the Marshall Islands, and the Republic of Palau. The pacts with the insular areas include the provision of federal assistance, which can include, for example, funding to support public-health preparedness efforts, such as building and maintaining basic public-health capabilities. According to CDC, some of the world's most destructive diseases are vector-borne—that is, they are transmitted to humans and animals by vectors such as ticks, mosquitoes, or fleas. CDC also contends the United States is at a greater risk than ever from vector-borne diseases—such as West Nile virus, Lyme disease, dengue fever, chikungunya, and Rocky Mountain spotted fever—due to globalization and climate change. Laboratories in the Pacific insular areas—where there is generally no separate public-health laboratory—play a dual role in providing both clinical and public-health laboratory services in their own jurisdictions. The laboratories in this network have limited testing capabilities, though, and often medical officials must send specimens to Hawaii, the U.S. mainland, or Australia for additional testing.
CDC officials said that the Pacific insular areas present a challenge for detecting and containing global disease spread, because the region has experienced outbreaks of emerging infectious diseases and has lower detection capacity. According to CDC officials, in the age of routine air travel and with the rights granted to foreign nationals of some Pacific insular areas under the Compacts of Free Association, the risk of insular residents traveling to U.S. territories, Hawaii, and the mainland with undiagnosed and potentially dangerous infectious diseases is a concern. Additionally, according to DOI officials, issues surrounding international travel create challenges to ensuring timely response to disease outbreak events in insular areas. USDA operates disease-eradication and investigation activities, export certification, and surveillance actions in most U.S. insular areas. In addition, USDA's National Wildlife Disease Program has an office in Hawaii that supports activities to conduct surveillance for and respond to outbreaks of disease in wildlife that pose threats to human health and agricultural resources. DOI's USGS National Wildlife Health Center, located in Madison, Wisconsin, assists state and federal agencies with wildlife health-related issues and has a Honolulu Field Station, which is staffed by a wildlife disease specialist and three biological technicians. The Honolulu Field Station was established to serve state and federal agencies in Hawaii and the Pacific, including the insular areas. The Honolulu Field Station provides training to biologists regarding response to unusual wildlife mortalities and performs laboratory and field investigations to determine the cause of death in wildlife. About 75 percent of the new diseases that have affected humans over the past 10 years are zoonotic—that is, caused by pathogens originating in animals.
Many of these diseases have the potential to spread through various means over long distances and to become global problems. As shown in figure 3, these emerging and reemerging diseases are transmitted between animals—including livestock and wildlife—and humans. In some cases, disease transmission is direct; in others, animals act as intermediate or accidental hosts; and in still others, transmission occurs via arthropod vectors, such as mosquitoes or ticks. Examples of such emerging and zoonotic diseases include West Nile virus, H1N1, SARS, avian influenza, and rabies. Potential bioterrorism threats also include the use of zoonotic diseases as weapons of mass destruction, such as anthrax, plague, tularemia, and brucellosis. Habitat loss and human encroachment on rural and wildlife environments are bringing populations of humans and animals, both farmed and wild, into closer and more-frequent contact. Increasingly, wildlife are involved in the transmission of diseases to people, pets, and livestock, and managing wildlife vectors is an integral part of efforts to control the spread of zoonotic diseases. Diseases among wildlife can also provide early warnings of environmental damage, bioterrorism, and other risks to human health. DOI's USGS National Wildlife Health Center, which is the only federal laboratory in the United States dedicated to wildlife disease investigation, focuses on developing methods to reduce or eliminate the transmission of diseases among wildlife, domestic animals, and humans. In June 2010, we reported that while some high-level biodefense strategies have been developed, there is no broad, integrated national strategy that encompasses all stakeholders with biosurveillance responsibilities and that can be used to guide the systematic identification of risk, assessment of resources needed to address those risks, and the prioritization and allocation of investment across the entire biosurveillance enterprise.
We found that the decision makers responsible for developing a national biosurveillance capability are spread across multiple agencies and departments, and rely on support from state and local authorities. We noted that, as our prior work shows, complex undertakings like biosurveillance can benefit from strategic oversight mechanisms, such as a focal point and a national strategy, to coordinate and lead efforts across the multiple federal departments with biosurveillance responsibilities. We recommended that the Homeland Security Council, which was established to serve as a mechanism for ensuring coordination of federal homeland security–related activities and development of homeland-security policies, direct the National Security Staff to establish a focal point and charge this focal point with the responsibility for developing a national biosurveillance strategy. Hunting feral swine is a popular sport among hunters, and also serves as a population control method that wildlife agencies support, but there are more than 24 diseases that people can get from feral swine. While most of these diseases are spread by eating undercooked meat, the germs that cause swine brucellosis are spread by swine through birthing fluids and semen. People become exposed to the germs through contact with an infected swine's blood, fluids, or tissues (such as muscles, testicles, liver, or other organs). Domestic swine are also threatened by brucellosis through contact with infected feral swine. In August 2011, the National Security Staff reported that it had created a biosurveillance Sub-Interagency Policy Committee, under the guidance of the Domestic Resilience Group, to serve as a focal point in order to coordinate the development of a National Strategy for Biosurveillance. They said the strategy, and the implementation guidance to it, will define the overall purpose of the U.S.
government biosurveillance effort, and will pay particular attention to the assignment of roles and responsibilities. These efforts are the first steps taken to address the findings in our June 2010 report. In the absence of a national biosurveillance strategy, the federal government has some efforts, including emergency preparedness, disease-specific surveillance, and laboratory enhancement programs, that provide resources and information that state and city officials say are critical to their efforts to build and maintain capabilities. The federal programs and initiatives that officials identified during interviews as useful for supporting their biosurveillance capabilities generally fell into four categories, which respondents to our follow-up questionnaire ranked in descending order of importance as follows: (1) grants and cooperative agreements, (2) nonfinancial technical and material assistance, (3) guidance, and (4) information sharing. As we reported in June 2010 regarding federal biosurveillance activities, without a strategic approach to building and maintaining a national biosurveillance capability, these efforts remain uncoordinated and are not specifically targeted at ensuring the most-effective and efficient biosurveillance capability. Nearly all—26 of 27—of the questionnaire respondents identified grants and cooperative agreements as the most important type of federal assistance they receive. During interviews, state and local officials in multiple agriculture, public-health, and wildlife departments said that they are completely or largely dependent on federal funding for biosurveillance-related activities and that their biosurveillance capabilities would be limited without these federal grants and cooperative agreements.
State and city officials we interviewed noted that grants and cooperative agreements generally serve a dual purpose in that they both provide guidance on federal priorities, goals, and objectives and provide financial support to pursue those priorities. For example, when we asked public-health officials about the federal efforts that support their capabilities, five of nine public-health departments cited the guidance on planning and federal priorities that they receive in conjunction with the Public Health Emergency Preparedness (PHEP) cooperative agreement. At the same time, six of nine public-health departments we interviewed cited PHEP funding as critical for supporting their capability resources, such as additional staff to increase investigation and diagnostic capacity, and for building and maintaining those capabilities identified as priorities. Officials from one public-health department said that the funding they receive for PHEP and another CDC cooperative agreement—Epidemiology and Laboratory Capacity for Infectious Diseases (ELC)—pays the salaries of 70 percent of their communicable-disease staff, including their scientists, researchers, physicians, and data analysts. Moreover, these officials said the federal cooperative agreements enable the department to conduct outbreak investigations that were not possible before PHEP and ELC funding was available. Similarly, laboratory officials in one state we visited said that the cooperative agreements enable the department to pay for additional public-health positions, training, and laboratory testing efforts and equipment, and that without the cooperative agreements, their laboratory testing capacity would be considerably reduced.
A National Biosurveillance Capability: A national biosurveillance capability is the combination of capabilities of all jurisdictions and entities that constitute the biosurveillance enterprise working in concert to achieve the timely detection and situational awareness goals of biosurveillance, particularly for potentially catastrophic biological events. In interviews, agriculture officials in five of seven states said that their departments depend on federal funding to conduct surveillance efforts. For example, officials from three of the states said federal grants and cooperative agreements enable their departments to, among other things, collect and test specimens and purchase equipment for surveillance efforts. Similarly, wildlife officials from four states we interviewed said that their dependence on federal funding dictates priorities for certain surveillance efforts—such as the funding for avian influenza and chronic wasting disease surveillance efforts—and they would likely not conduct active surveillance efforts like these without federal support. In follow-up questionnaires, we asked officials to identify the federal grants and cooperative agreements that were essential to their core biosurveillance capabilities. Table 2 shows the federal grants and cooperative agreements most commonly identified as essential to their core biosurveillance capabilities by the 27 officials who responded to our questionnaire, by group. For more information on questionnaire results, see appendix III. Respondents to our follow-up questionnaire ranked nonfinancial technical and material assistance as the second-most important type of federal support for building and maintaining biosurveillance capabilities. According to state and local officials, the nonfinancial assistance efforts they identified help to, among other things, support biosurveillance capacity by improving state and local capacity to identify and diagnose diseases. 
For example, state public-health, agriculture, and wildlife officials said that training opportunities sponsored by the federal government help enhance and standardize their laboratory testing methods, epidemiological investigations, and specimen-collection procedures, which helps state and local officials develop more efficient and effective disease diagnostic capabilities. In addition, in interviews, officials from both public health and agriculture said that the chance to work together on concrete projects, such as avian influenza planning and surveillance, gave them an ongoing reason to communicate and collaborate. Public-health officials from five of nine public-health departments we visited said, in interviews, that they rely on CDC's subject-matter expertise to either guide their efforts during an event—such as the 2009 H1N1 outbreak—or to answer questions about a specific investigation. Moreover, public-health officials in three of seven states said that without this and other types of nonfinancial assistance, their department would not be able to conduct as many investigations and the efficiency with which they could diagnose a disease would decrease. In addition, public-health officials from one state said the ability to get CDC's help confirming results and to send specimens with unusual characteristics, which are difficult to identify, increases the state's laboratory capacity and improves the efficiency with which the state can diagnose an unusual disease. Similarly, agriculture officials we interviewed in one state said if they did not have the National Veterinary Services Laboratory (NVSL) to provide confirmation for unusual disease samples, they would be less prepared to handle disease outbreaks.
Finally, wildlife officials from one state said working in the field with federal officials to trap animals and collect samples has enhanced their relationships with federal officials, their knowledge of new sampling procedures and surveillance data management, and their ability to work with USDA officials during the grant process. In follow-up questionnaires, we asked officials to identify the federal nonfinancial and technical assistance efforts that were essential to their core biosurveillance capabilities. Table 3 shows the federal nonfinancial and technical assistance efforts most commonly identified as essential to their core biosurveillance capabilities by the 27 officials who responded to our questionnaire, by group. For more information on questionnaire results, see appendix III. The category of federal assistance ranked third overall in importance by state and city questionnaire respondents is guidance. Additionally, during our site visits, the majority of state and city officials we interviewed—16 of 23—said the primary source of federal guidance related to biosurveillance accompanies federal grants and cooperative agreements and serves the purpose of shaping programmatic goals, objectives, and priorities. For example, the public-health epidemiologists and laboratory director for one city said that the detailed capability guidance that accompanied the most recent round of PHEP funding helped the city perform a gap analysis, the results of which will serve as a planning guide over the next 5 years. In addition, four of nine public-health departments we spoke with discussed guidance that supports their efforts to build and maintain biosurveillance capabilities by supporting specific activities that constitute their capabilities, for example, guidance regarding standardized case definitions, disease-reporting requirements, and sampling procedures for unusual or emerging disease agents. 
Public-health officials in one state we visited said that guidance on standardization is essential to ensure states are able to move information to CDC more efficiently, and without standardization it would be difficult to exchange information with their partners. Similarly, agriculture officials in one state we visited said federal sampling standards help interpret information about disease occurrence in other states, because the significance of results is uniform nationwide. These officials said that without this guidance, they would need to develop protocols state-by-state to interpret results, which would lead to a loss of efficiency in animal diagnostic laboratory protocols and interpretation of results. In follow-up questionnaires, we asked respondents to characterize the various types of guidance, which had previously been identified in interviews, as very useful, moderately useful, somewhat useful, or not useful in supporting their biosurveillance capabilities. Table 4 shows the sources of federal guidance the 27 officials who responded to our questionnaire—by group—most commonly identified as very useful for supporting biosurveillance capabilities. For more information on questionnaire results, see appendix III. Information-sharing tools and analytical products was the category ranked fourth in importance by our 27 questionnaire respondents. In interviews, officials said that without the knowledge they gain through these tools and products they would lack critical information about emerging-disease situations in neighboring states and throughout the nation. For example, public-health officials in one state noted that they would lack context about a health situation in their state without the knowledge they gain through these systems and reports about incidents in neighboring states and throughout the nation. 
In addition, they said these tools are useful in helping them to better understand baselines for various diseases they observe in their own jurisdictions. Public-health officials in another state we interviewed noted that without the information provided by PulseNet, their ability to detect foodborne outbreaks would be diminished. Southeastern Cooperative Wildlife Disease Study (SCWDS) The wildlife agencies of 19 states (shaded on map) and Puerto Rico and the U.S. Geological Survey of DOI fund regional wildlife research and service projects through SCWDS, and USDA’s Veterinary Services provides support for national and international surveillance activities where diseases may spread among wildlife and livestock. SCWDS provides wildlife-disease expertise to state and federal agencies responsible for wildlife and domestic livestock resources. SCWDS aims to detect causes of illness and death in wildlife, characterize the effect of diseases and parasites upon wild animal populations, identify disease interrelationships between wildlife and domestic livestock, and determine the role of wildlife in transmission of human diseases. Likewise, agriculture officials we interviewed in one state said without the compiled information that federal agencies share with them—for example, disease data on USDA’s Veterinary Service Laboratory Submissions website—they would be operating blindly and would need to spend time contacting other states to know what is happening outside their borders. They said this information is particularly useful when it comes to animal movement across state lines, so that they are aware of those diseases of concern in different areas of the country. Similarly, wildlife officials from one state said that the information shared by federal agencies provides awareness of disease threats in their state and information about how to respond if they encounter the disease in question. 
They said that the lack of this information could delay the state's detection of a potentially devastating disease, because outbreak signals—like animal die-offs—would have to trigger an investigation in their state before they had any awareness of looming disease threats. In follow-up questionnaires, we asked officials to identify the types of information-sharing tools and analytical products that were essential to their core biosurveillance capabilities. Table 5 shows the types of information-sharing tools and analytical products most commonly identified as essential to their core biosurveillance capabilities by the 27 officials who responded to our questionnaire, by group. For more information on questionnaire results, see appendix III. In June 2010, when we recommended that the National Security Staff lead the development of a national biosurveillance strategy, we noted that an effective national biosurveillance strategy could help identify the resources currently being used to support a biosurveillance capability, additional resources that may be needed, and opportunities for leveraging resources. Although not generalizable to the whole biosurveillance enterprise, our findings suggest that there are existing federal resources that nonfederal officials find essential to their efforts and that could provide a starting point for considering how to leverage nonfederal resources. Because the resources that constitute a national biosurveillance capability are largely owned by nonfederal entities, a national strategy that considers how to leverage existing efforts and resources in federal, state, tribal, local, and insular jurisdictions could improve efforts to build and maintain a national biosurveillance capability. State and city officials we spoke with reported a variety of challenges in building and maintaining biosurveillance capabilities.
These challenges generally fell into three different groups: (1) state policies enacted in response to fiscal constraints, (2) obtaining and maintaining resources to support capabilities, and (3) leadership and planning challenges. In the follow-up questionnaire, we asked respondents how challenges identified in the interviews affect their capabilities and to rank the top three challenges they face. For each challenge respondents identified, we asked them to indicate whether the current combination of resources, leadership, and planning in their jurisdiction was adequate to address that challenge. The challenges reported here are only those that respondents indicated are not currently adequately addressed. For additional information about questionnaire results related to challenges, see appendix IV. One set of challenges that state and city officials described to us had to do with the state and local budget crises and the policies states have put in place to respond to this challenge. Specifically, in interviews with state public-health, agriculture, and wildlife departments, multiple officials reported barriers that state policies presented for building and maintaining a biosurveillance capability. Among these barriers were (1) an inability to use federal funding for new positions because of state hiring restrictions, (2) an inability to attend national trainings and conferences (even when federal travel funding is available) because of state travel restrictions, and (3) an inability to participate in training and other online forums sponsored by federal agencies and professional associations because of state restrictions on when and how they can use information technology in their offices. In follow-up questionnaires, 20 of 27 respondents identified these kinds of state policies as a challenge to building and maintaining biosurveillance capability.
One respondent who ranked this kind of challenge among the top three challenges noted that state policies on hiring require the use of contractors rather than full-time equivalent personnel. As a consequence, the respondent noted, the knowledge accrued through the course of on-the-job training leaves the agency when a given contract ends. Although federal agencies that work to help support capabilities in state and local jurisdictions have limited ability to directly affect state policies, CDC officials say they are aware of the issue and agree that it is a challenge—in some cases severely hampering states' ability to move forward with capability building. The CDC officials said they have discussed the issue with their state and local partners as part of a larger effort to explore various funding options to help better support capability building. A second set of challenges reflected general concerns about the resources that support biosurveillance capabilities, such as appropriately trained personnel, systems, and equipment. Nineteen of 27 respondents to our follow-up questionnaires reported facing workforce shortages among skilled professionals—epidemiologists, informaticians, statisticians, laboratory staff, animal-health staff, or animal-disease specialists. One respondent who rated this particular challenge among the top three noted that noncompetitive salaries had resulted in lack of interest in positions and high turnover. As a consequence, according to the respondent, investments in training yield lower returns and the quality of the overall workforce is affected. Sixteen of 27 questionnaire respondents reported problems with training availability.
A state wildlife official who rated training availability as the top current challenge noted that without proper training, staff in the field—who often have duties other than disease surveillance—lack an understanding of the importance of surveillance and reporting, as well as knowledge of the techniques to carry it out. Fourteen of 27 questionnaire respondents indicated issues with workforce competency—hiring and retaining professionals with adequate training and education. One of the respondents who rated this challenge among the top three noted that without properly trained staff to support them, initiatives languish. She also noted that the need for the few skilled personnel to provide on-the-job training and education consumes time and affects workflow. Fifteen of 27 questionnaire respondents reported that keeping up with ongoing systems maintenance and enhancement needs has been challenging. One respondent who rated ongoing systems maintenance and enhancement among the top three challenges said that public-health informatics, including state-of-the-art database systems and effective electronic linkages, is critical to surveillance but places demands on resources to attract and maintain public-health informatics expertise and support database applications. Thirteen of 27 questionnaire respondents reported challenges maintaining adequate laboratory capacity. One laboratory official who ranked this among the top three challenges stated that many staff at the public-health lab are nearing retirement and it has been difficult to attract and retain younger laboratory scientists to work in public health.
The third set of challenges state and city officials we interviewed reported included (1) difficulty planning for longer-term capability-building efforts because of uncertainty from year to year about whether project funds would be available; (2) difficulty planning to invest in basic capabilities for multiple disease threats because federal funding has focused on specific diseases rather than strategically building core capabilities; (3) limited leadership and planning—at all levels of the biosurveillance enterprise—to support regional and integrated disease data-surveillance approaches; and (4) differing priorities and other partnership issues. Many of the challenges that state and city officials identified are similar to issues we reported regarding biosurveillance at the federal level. We noted that many challenges like these facing the biosurveillance enterprise are complex, inherent to building capabilities that cross traditional boundaries, and not easily resolved. In June 2010, we recommended a leadership mechanism, such as a focal point, and a strategy, which could help define the scope of the problems to be addressed, in turn leading to specific objectives and activities for tackling those problems, better allocation and management of resources, and clarification of roles and responsibilities. In our follow-up questionnaires, by far the single most-commonly reported challenge was funding instability and insecurity, with 25 of 27 questionnaire respondents identifying it as a challenge that has not been adequately addressed. Among those, 23 ranked it as one of the top three challenges and 16 of those ranked it as their top challenge. In interviews, officials in both the human- and animal-health communities noted that they receive little or no support from state budgets for surveillance activities, leaving them largely reliant on federal funding for this type of activity.
Moreover, two agriculture officials noted that it is difficult for states to develop long-term plans for building and maintaining capabilities because they do not know how much funding they will receive from year to year. For example, three of the nine visits we made to state public-health departments occurred near the application deadline for the new PHEP cooperative agreements. All three sets of public-health officials reported receiving news of a last-minute reduction in funding—which according to CDC officials equaled 12 percent—that resulted in the need to significantly revise their PHEP application and accompanying plan for building and maintaining capabilities, in a short time frame. In interviews, agriculture officials in three of the seven states we visited said they receive little or no funding in their state budgets to support biosurveillance activities and depend on federal funding, which they say has been decreasing. Because of the decreases in funding, the agriculture officials from one state said that their department has decreased its staff level by half over the past 6 years, and these officials noted that without federal funding the department's biosurveillance capabilities would be minimal. Likewise, wildlife officials in five of the seven states we visited said that they receive little or no funding for surveillance from their state budgets and rely on federal programs to support surveillance. Federal officials agreed that funding insecurity and instability is a serious challenge affecting states' ability to plan for and execute capability-building efforts. In October 2010, CDC's Advisory Committee to the Director—recognizing that much of CDC's effect results from the funds it provides to state, tribal, local, and territorial public-health departments—charged its State, Tribal, Local and Territorial Workgroup to produce recommendations to maximize resources and develop capacity throughout this nonfederal community.
A subworkgroup was created specifically to consider issues arising from the fiscal challenges facing states and localities. According to CDC officials, the workgroup has discussed moving cooperative agreements like PHEP and ELC to a 2-year cycle to give state and local public-health departments more time to work within state-imposed restrictions, but they cannot make such a change without legislative action. In addition, CDC officials stated that they attempt to communicate budget decisions to their nonfederal partners in a timely manner. For example, they said that they provided guidance to PHEP applicants to help them plan around funding uncertainty by communicating the minimum funding available and advising them to plan for the next fiscal year using the current year’s funding level with the expectation that it will likely be reduced. However, these officials also noted that when federal agencies have to operate on a continuing resolution, it restricts their ability to plan and obligate funds, which in turn can result in reductions and delays in funding activities at the state and local level. An official from DOI’s USGS National Wildlife Health Center also attributed funding instability and insecurity to the annual appropriations cycle, because federal agencies also do not know what the budget will be from year to year. Like CDC officials, he said that multiyear appropriations would allow for more long-term planning. USDA officials also acknowledged that their nonfederal partners face challenges planning for and developing capabilities because of funding uncertainty. Officials from USDA’s Animal and Plant Health Inspection Service’s Veterinary Services said they are working to streamline the cooperative agreement process to provide additional flexibility to the states by producing fewer but broader agreements that would allow the states to better prioritize their needs.
Twenty-one of the 27 state and city officials who responded to our follow-up questionnaire reported that the common federal approach of funding capabilities in response to specific diseases or initiatives—for example, West Nile virus—limited their ability to develop core capabilities that could provide surveillance capacity cutting across health threats, including emerging-disease threats. Along these lines, one of the respondents who rated this challenge among the top three said that broad-based surveillance activities are crucial for detecting new and emerging diseases, but funding targeted for specific diseases does not allow for focus on a broad range of causes of morbidity and mortality. Federal officials agreed that the disease-specific nature of funding is a challenge to states’ ability to invest in core capabilities. CDC officials said this long-standing issue stems from the way CDC receives funding, which is disease-specific and, in turn, awarded to the states that way. According to officials, funding authorized under the Patient Protection and Affordable Care Act (PPACA) has recently offered some authority for flexible biosurveillance capability investments. For example, they said the PPACA program supports additional epidemiologists and laboratory support staff and infrastructure improvements, among other things, at the state and local level. Additionally, CDC officials noted that the all-hazards nature of PHEP grants supports states’ ability to invest in crosscutting core capabilities. An official from DOI’s USGS National Wildlife Health Center similarly noted that the structure of funding is a challenge for agencies at all levels, and said he would like to see more broad-based funding to allow for long-term investments to retain and develop capacity to address disease issues.
USDA officials also acknowledged that stovepiped, or disease-specific, funding presents a challenge for their nonfederal partners when planning for and investing in crosscutting capabilities. Within USDA’s Animal and Plant Health Inspection Service, officials from Veterinary Services said that they are moving away from funding disease- and program-specific items and toward a new funding approach, intended to reduce stovepiping and provide for additional flexibility. USDA’s Wildlife Services officials also find stovepiped funding challenging, but said that they have little control over the issue. In interviews and follow-up questionnaires, city and state officials also reported challenges with the leadership and planning for integrated biosurveillance approaches. Sixteen of 27 respondents to our follow-up questionnaires reported a lack of leadership and mechanisms to support regional approaches to disease surveillance. Similarly, 17 of 27 respondents reported that integrating information across disease domains is a challenge because of a lack of leadership and mechanisms to facilitate information sharing and data integration among public-health, agriculture, and wildlife disease-control functions. One respondent who ranked integrating human and animal surveillance information among the top three challenges said that the lack of leadership and mechanisms to do so is a barrier to effective and efficient disease response. Federal agencies with biosurveillance roles have acknowledged that attention to integrated biosurveillance approaches is needed. In response to HSPD-21, CDC created the National Biosurveillance Strategy for Human Health, collaborating with federal and nonfederal partners, to provide a foundation for a long-term effort to improve a nationwide capability to manage human health–related data and information. The strategy lays out six priority areas for attention to address critical gaps and opportunities for improvement.
Among the six is integrated biosurveillance, about which the strategy states that, because the responsibility for public health is shared across multiple levels of government, professional practice, and scientific disciplines, the timely exchange of reliable and actionable information is essential. Although the strategy includes goals for enhancing integration of human-health data, these goals have not yet been the central focus of implementation plans for the strategy. However, according to CDC officials, the efforts to establish objectives for enhancing management of human-health information as part of the strategy have been important for larger HHS efforts, such as implementing the National Health Security Strategy. Officials also said these activities are important to the efforts the National Security Staff has underway to guide the biosurveillance enterprise. In addition, CDC officials stated that the BioSense program is being redesigned to improve jurisdictions’ ability to share data with each other during specific events, which could foster more regional data sharing. An official from DOI’s USGS National Wildlife Health Center said it would be helpful to have a national strategy or framework to guide all of those involved in wildlife health to respond in a coordinated, appropriate, and proportionate way to wildlife disease issues. In addition, he said the framework is needed to outline the shared responsibilities related to threat detection and assessment, policy development, and management actions. According to the official, DOI plans to begin working on such a framework for wildlife surveillance with its partners in the near future. USDA officials also acknowledged that nonfederal partners have faced challenges with leadership and planning for integrated biosurveillance approaches.
USDA officials from Animal and Plant Health Inspection Service’s Wildlife Services said they could enhance the integration of biosurveillance capacities for their nonfederal partners by providing access to their existing networks. However, the officials said they would need a source of funding for the increased efforts required to meet the needs of nonfederal partners. Officials from Veterinary Services stated that to address integration challenges, they try to engage their nonfederal partners in planning activities, but are looking to the National Security Staff’s work on the national biosurveillance strategy to help address larger challenges. Some challenges identified by state and local officials reflected an opportunity for better partnerships between the federal and the state and local governments. Fourteen of 27 respondents to our follow-up questionnaires indicated that competing federal priorities present challenges. For example, in one interview, state officials said that grant guidance can be contradictory with regard to funding streams: one grant may recommend focusing on a certain priority while other grants recommend priorities that do not complement it. In addition, 12 of 27 questionnaire respondents reported having vague or insufficient guidance. In interviews, state and local officials who identified this issue noted that there is no user-friendly central repository of best practices for maintaining and enhancing capabilities and that guidance lacks concrete examples for things like developing state planning documents or fostering integrated biosurveillance efforts. Finally, 12 of 27 questionnaire respondents reported federalism challenges, such as conflict between national and local priorities, philosophies, and approaches to conducting biosurveillance.
For example, in an interview, public-health officials in one state told us that they have to spend valuable time and resources convincing their federal partners not to overreact to electronic laboratory results for diseases that are considered dangerous, such as plague, but are also endemic at low levels within their jurisdictions. Officials from CDC stated that they are aware of these kinds of challenges facing their nonfederal partners and of the need to improve federal and nonfederal coordination among programs. These officials said states may have different priorities than those at the federal level due to the need to balance their responsibility for addressing the health concerns of the state with their other activities conducted with various federal agencies and programs. According to the officials, they are committed—in national strategy efforts—to building on current capabilities at all levels of government and will take into consideration the issues and challenges states experience in working with their federal biosurveillance partners. They also noted that as they developed guidance for PHEP recipients for the most recent round of cooperative agreements—Public Health Preparedness Capabilities: National Standards for State and Local Planning—they involved approximately 200 stakeholders and experts to help public-health departments better organize their work and determine whether they have the resources to build and sustain all the capabilities. Additionally, they said that they attempted to ensure that their nonfederal partners do not experience continual shifts in PHEP priorities by implementing a new process for reviewing and approving proposed changes to PHEP guidance. They also described several efforts to coordinate grant guidance within CDC and with other federal partners to improve effectiveness and reduce conflicting activities or redundant reporting.
Among these efforts were multiple workgroups and other activities to engage with federal and nonfederal partners, as well as a Memorandum of Understanding with multiple federal departments that fund preparedness activities. According to CDC officials, the memorandum establishes a formal framework that supports joint federal planning and better coordinates emergency public health and health care preparedness consistent with national strategies and priorities. An official from DOI’s USGS National Wildlife Health Center agreed that partners throughout the biosurveillance enterprise experience federalism challenges. He said that a national strategy or framework that clearly outlines roles and responsibilities could help alleviate these issues. USDA officials also acknowledged that their nonfederal partners have faced these kinds of challenges. Officials from USDA Animal and Plant Health Inspection Service’s Wildlife Services said that they recently created a plan to achieve a more unified cross-program approach to addressing wildlife-disease issues that will affect the agency and its stakeholders. These officials stated that enhanced integration of the USDA resources, expertise, personnel, and infrastructure needed to address issues of wildlife-disease surveillance—among other things—should help their nonfederal partners to mitigate this challenge. Officials from Veterinary Services stated that to address federalism challenges, they seek to proactively engage their nonfederal partners in planning activities, but are looking to the National Security Staff’s work on the national biosurveillance strategy to help address the larger challenge. In our June 2010 report, we called for a national strategy that could begin to address the difficult but critical issues of who pays for biosurveillance capabilities and how a national capability will be sustained in the future.
Our findings about the challenges with planning and investing in core capabilities, while not generalizable to all nonfederal jurisdictions, suggest that there may be some common issues with the structure of funding that affect longer-term planning and investments in core biosurveillance capabilities. We also reported in June 2010 that clarifying the numerous governmental and private-sector entities’ roles and responsibilities for leading, partnering, or supporting biosurveillance activities could help ensure timely disease detection and situational awareness across multiple domains. Our findings similarly suggest that there may be some common issues with promoting integrated biosurveillance approaches at the nonfederal level. As part of a national biosurveillance strategy, considering challenges like these may help partners across the enterprise find shared solutions as they strive to build and maintain an integrated national biosurveillance capability. As with the state and local jurisdictions, the federal government does not have efforts designed specifically to build and maintain tribal or insular biosurveillance capabilities to support a national biosurveillance capability. However, tribal and insular jurisdictions also receive certain cooperative agreements and technical assistance that federal officials say can help support biosurveillance capacity. At the same time, federal officials reported that limited resources and infrastructure in tribal and insular jurisdictions present challenges to building their capacity. According to federal and professional association officials that work with tribal and insular jurisdictions, federal agencies provide disease-specific funding and cooperative agreements, as well as training and technical assistance, to support public-health and animal-health surveillance capacity. Insular areas are eligible for the PHEP and ELC cooperative agreements from CDC. 
PHEP funds public-health preparedness projects in American Samoa, Guam, U.S. Virgin Islands, Northern Mariana Islands, Puerto Rico, Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau. In addition, ELC—which builds epidemiological and laboratory capacity—is awarded to Puerto Rico and the Republic of Palau. According to officials from PIHOA, federal agencies also provide specimen testing for Pacific insular areas—which have no reference laboratory capacity of their own—for disease agents that the islands’ clinical laboratory network is not equipped or certified to handle. PIHOA developed the Regional Lab Initiative for the transportation of human specimens, and PIHOA serves as a steward for the specimen transportation network by negotiating specimen-transportation contracts with commercial airlines, developing shipping standards for laboratory specimens, and overseeing the Regional Lab Initiative budget. PIHOA officials said that federal funding for this initiative is critical to enable Pacific insular areas to transport specimens for testing to those laboratories with greater capabilities. According to CDC officials, their Division of Global Migration and Quarantine also works with the insular areas to enhance crosscutting public-health initiatives, with a focus on disease surveillance, and helps public-health departments tie into various CDC programs. For example, the division has been working with Guam since late 2009 to move toward electronic data sharing of health information to improve timeliness and response to catastrophic events, including better linkages to the National Notifiable Disease Surveillance System. During a 2010 mumps outbreak in Guam and the Federated States of Micronesia, the division also played a coordination role and facilitated the shipment of lab specimens.
Officials said that the Guam mumps outbreak helped identify gaps in their surveillance capacity, and the division followed up with targeted training to address the gaps. The division is also working to enhance the quality of American Samoa's public-health records to improve its ability to submit electronic public-health data into the World Health Organization’s syndromic surveillance system for the Pacific Islands region. For animal health, USDA has employees and offices in some insular areas. USDA Veterinary Medical Officers in the field interact with producers, respond to reports of potential Foreign Animal Diseases, help administer disease eradication, control, and surveillance activities, and assist with export certification out of these field-office sites. DOI provides diagnostic service to determine causes of mortality in wildlife. For example, in American Samoa and Palau, DOI performs necropsy surveys of free-ranging wildlife (both terrestrial and marine) to determine the cause of death. The agency reported that all bird carcasses necropsied are routinely tested for avian influenza. The agency also reported that the ability to ship samples from American Samoa and Palau to Honolulu, Hawaii, has allowed the agency to gain a greater understanding of causes of wildlife mortality in those regions. In case of catastrophic mortality, DOI officials said the agency would probably send someone out to the area to provide on-site assistance and collaborate with local agencies to deal with the issue and resolve it to its logical conclusion. For example, DOI officials have offered response assistance to Palau to help with unusual poultry mortality events in efforts to effect early detection of avian influenza. DOI also provides annual workshops to agencies to communicate findings and provide on-site training on wildlife disease response.
Tribal nations are not eligible for PHEP or ELC funding, but CDC advises states to include tribes in their required all-hazards public-health capability planning for PHEP funding. In addition, IHS has cooperative agreements with Tribal Epidemiology Centers, to support local public health and provide data analyses for the tribes. As shown in figure 4, there are 12 Tribal Epidemiology Centers located around the country, each typically serving 30 to 100 tribes in its region. Officials from IHS said that the Tribal Epidemiology Centers may offer a foundation for building tribal biosurveillance capabilities. However, biosurveillance is not the primary job or mission of the epidemiology centers. The priorities of the centers are driven by the needs of the tribes, and the centers help the tribes create a structure for intervention to prevent the major conditions affecting the tribal population. Federal agencies also provide technical assistance and training to tribal jurisdictions. The Office for State, Tribal, Local and Territorial Support within CDC provides training and technical assistance to improve data and surveillance standards in tribal areas and works to foster public-health workforce development in tribal areas. In addition, IHS provides, without charge, software for automated electronic surveillance that can be implemented by IHS, tribal, and Urban American Indian and Alaska Native sites to help with automated reporting and information sharing. The initial project, IHS’s Influenza Awareness System, focused on influenza-like illness, but according to IHS officials is currently expanding to include other notifiable diseases. IHS also provides technical assistance to tribes, sometimes through the Tribal Epidemiology Centers, and also provides some training, which is available to any American Indian or Alaska Native.
The primary focus of the training is basic public-health functions rather than biosurveillance, but federal officials who work with these jurisdictions say that any effort to build public-health infrastructure increases biosurveillance capabilities over existing levels. Additionally, according to USDA officials, tribes can participate in the same disease-control and eradication programs (such as tuberculosis, brucellosis, scrapie, and chronic wasting disease) as states through grants and cooperative agreements. These officials said these cooperative agreements increase tribes’ biosurveillance capability, particularly with tribes that have more-robust existing infrastructure, like Navajo Nation, which has a full-time veterinarian. USDA officials with responsibility for wildlife said they also provide cooperative agreements and training to support tribal wildlife disease surveillance. To help build public-health and animal-health surveillance capacity, federal agencies have also created working groups and other outreach efforts to tribal and insular jurisdictions. For example, the Office for State, Tribal, Local and Territorial Support within CDC works with health departments to increase public-health capacity through a working group that helps build capacity across jurisdictions, for example, between tribes and corresponding state or local health departments. CDC has developed Pacific working groups to address various issues in the Pacific insular areas, such as the Public Health Preparedness and Response Working Group and an epidemiology working group. According to officials, these working groups help coordinate activities between various CDC departments and the Pacific insular areas. USDA’s Native American Program Coordinator serves as a tribal liaison, provides assistance to tribes, and has developed a relationship with the large land-owning tribes that participate in its programs.
The officials said that the tribal liaison has built this relationship over the years by attending the Intertribal Agriculture Council’s meetings, a gathering of tribal agriculture producers. They noted that because of the tribal liaison’s continuous outreach, they believe that the tribes know whom to call if unusual animal disease symptoms appear in animals on their lands. Federal officials, as well as officials from professional associations like the Council of State and Territorial Epidemiologists and PIHOA, described infrastructure and demographic challenges they face in helping to build biosurveillance capabilities in tribal and insular jurisdictions. For example, CDC officials said that, overall, there is a low capacity to detect and report diseases in both tribal and insular jurisdictions, and that better assurance for detection of potentially catastrophic signs would require enhancement of basic systems and public-health functions. HHS officials said that tribes, insular areas, and states face similar public-health infrastructure challenges, but the challenges are more severe in tribal and insular areas. For example, IHS and CDC officials said some tribes have serious public-health infrastructure limitations—for example, some have minimal or no functioning health-department structure—so officials said the idea of building biosurveillance capabilities is not a realistic pursuit in these areas. USDA and DOI officials also reported capacity challenges—such as few veterinary and wildlife personnel on the ground in tribal and insular areas—that limit biosurveillance capabilities. Additionally, officials said that the federal cooperative agreements offered by federal agencies do not always provide for the infrastructure enhancement needed for tribal and insular areas, because they assume a basic level of capacity that these jurisdictions often do not have.
However, USDA, DOI, and HHS officials also cautioned that despite the limited infrastructure in some of the tribal and insular areas, it would not be practical from a cost-benefit standpoint to invest in complete biosurveillance systems for every tribe and insular area. For example, for small tribal nations and insular areas it may not make sense to expect them to support and maintain separate laboratory facilities, especially when there are other nearby state resources available that could support testing for those populations. Along the same lines, HHS officials said that tribes and their federal and state partners have historically faced disease-reporting challenges. CDC officials noted that as sovereign nations, tribes typically prefer to work directly with federal agencies, rather than state governments, but because of the nature of public health, it often makes sense for tribes and states to share data or conduct joint investigations. CDC officials said that data sharing between tribes and states is challenging, because tribes may have limited public-health capacity. The officials said that Tribal Epidemiology Centers offer some promise for facilitating information sharing, but some states have been reluctant to share health data with Tribal Epidemiology Centers, because until recently they lacked public-health authority—a legal designation that governs the ability of governmental entities to collect, receive, and share data for public-health purposes under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Tribal Epidemiology Centers are operated by nonprofit organizations that typically had no legal health authority to handle such data. However, in 2010, PPACA designated these centers public-health authorities under HIPAA. This provision allows the IHS-funded Tribal Epidemiology Centers to access federal and state data sets for research purposes, just as state health departments do.
However, these centers are still nonprofit organizations that are competitively selected on a periodic basis, and there is no guarantee that the entire nation will continue to have center coverage. HHS officials said the designation of the centers as public-health authorities will likely facilitate more sharing among states and tribes, but it is a relatively new development, so it is too soon to determine the effect. Federal officials also reported facing demographic and logistical challenges in working with tribal nations. Complications in data collection and reporting arise from the nature of tribal boundaries and populations. Specifically, tribes are not defined by geographic boundaries, tribal members may not live on tribal lands, and tribal lands may cross state boundaries. Officials also said population size and geography vary for tribes and many tribes are in remote locations, including the roughly half of the more than 500 federally recognized tribes that are located in Alaska. An official from IHS noted that, in general, tribal communities do not have populations large enough to justify building complete, individual surveillance programs and that tribes generally do not have infrastructure or resources to support such an effort. According to USDA officials, every tribe has a different relationship with the state it is located in and with the federal government. Some tribes have direct relationships with the state agriculture department because most tribes do not have veterinarians. In some cases, the states may take care of the surveillance needs for a tribe, and in other cases, the tribes may have their own surveillance capacity. In general, tribes do not have funding to establish and maintain laboratories. Tribes typically use the state labs that are part of the National Animal Health Laboratory Network (NAHLN), the facilities at Plum Island, the laboratory in Ames, Iowa, or state labs that are not part of NAHLN. (For more information about laboratories, see app. II.)
DOI officials said that tribes are interested in wildlife management and disease surveillance, but do not have the resources, as tribes need to build capabilities at the most basic level—like wildlife biologists and management expertise. Federal agencies, as well as association officials, reported similar resource, demographic, and logistical challenges in insular areas. Officials at the Council of State and Territorial Epidemiologists, PIHOA, and CDC said the Pacific insular areas are challenged in identifying disease outbreaks and emerging diseases. According to PIHOA officials, this is due to workforce shortages for doctors, nurses, epidemiologists, and laboratory officials, and the limited laboratory capacity on the islands. Although the islands can currently depend on laboratories outside the Pacific insular areas to conduct testing, and there are initiatives and programs in place to improve laboratory capacity on the islands, it may take several days to detect a disease. CDC officials said timely reporting cannot be ensured in the Pacific insular areas and there is limited ability to build public-health infrastructure in the territories. For example, they said the public-health systems will have to transition to more formal mechanisms of information sharing, because currently events trigger regional partners to respond in an ad hoc and unsystematic way. To address some of these challenges, PIHOA developed the Public Health Infrastructure Initiative, partially funded by CDC’s National Public Health Improvement Initiative, which is supported by PPACA’s Prevention and Public Health Fund, to help improve Pacific insular areas’ public-health systems at every level. Through this initiative, PIHOA is working with Pacific insular area officials to develop public-health curricula to improve the epidemiological and surveillance capabilities of the islands.
According to DOI officials, aside from Guam, insular areas in the Pacific region have little to no existing veterinary capacity to deal with animal or zoonotic diseases. DOI officials said they would like to get more wildlife disease data from places like Guam and the Commonwealth of the Northern Mariana Islands, but the lack of reliable in-territory contacts there has made it difficult to establish those relationships. Various federal agencies and professional associations with public-health missions have assessed some aspects of nonfederal biosurveillance capabilities, such as the evaluation of laboratory, epidemiology, surveillance, and other capacities, but the federal government has not systematically or comprehensively assessed state and local governments’ ability to contribute to a national biosurveillance capability. An assessment of capabilities that support biosurveillance is called for in HSPD-10, which states that the United States requires a periodic assessment that identifies gaps or vulnerabilities in our biodefense capabilities—of which surveillance and detection is a key part—to guide prioritization of federal investments. We have previously reported that a national biosurveillance capability depends upon participation from nonfederal jurisdictions and that few of the resources required to support the capability are wholly owned by the federal government. Therefore, assessing the baseline and identifying investment needs for a national biosurveillance capability necessarily involves assessing nonfederal entities’ ability to support a national capability. No federal, state, local, or association official we spoke to was able to identify a systematic approach—planned or underway—to assessing state and local biosurveillance capabilities and identifying strengths, weaknesses, and gaps across the biosurveillance enterprise. However, certain aspects of public-health capabilities have been assessed by federal agencies and professional associations. 
For example, CDC’s most recent round of guidance associated with the PHEP cooperative agreements has begun to define elements, priorities, resource considerations, and metrics for building and assessing public-health surveillance, epidemiology, and laboratory capabilities. According to CDC officials, these national standards are designed to assist states and localities in self-assessing their ability to address the prioritized planning resource elements of each capability and then to assess their ability to demonstrate the functions and tasks within each capability. CDC officials stated that this self-assessment enables states and localities to identify their gaps in preparedness, determine their specific jurisdictional goals and priorities, develop plans for building and sustaining capabilities, and prioritize preparedness investments. CDC officials noted that these data and data collected through the ELC could, with the right attention and resources, offer an opportunity to provide more cohesive information for a national assessment in the future. In addition, for the past 4 years, the Association of Public Health Laboratories has conducted an assessment of the District of Columbia and the 50 state public-health laboratories’ capacity to respond to biological, chemical, radiological, and other threats, such as pandemic influenza. Similarly, the Council of State and Territorial Epidemiologists has conducted four assessments since 2001 of the epidemiology capacity of state, local, and territorial health departments in the United States. Further, CDC funded a survey of state, local, and territorial syndromic surveillance capabilities that was conducted by the International Society of Disease Surveillance. 
According to several federal and state officials, a comprehensive assessment may identify the baseline status, strengths, weaknesses, and gaps across the biosurveillance enterprise and improve the nation’s ability to conduct biosurveillance, but state officials also noted that states would need additional funding to overcome any gaps identified by a federal assessment. For example, officials from one federal agency said that a comprehensive assessment of state and local biosurveillance capabilities would help identify vulnerabilities in the enterprise, assess needs, and help target resources to those areas. Similarly, another federal official who oversees programs for tribal entities noted that knowing more about tribes’ strengths, weaknesses, and gaps would enable their division to better understand where it needs to provide additional assistance or focus resources during an event. State officials we interviewed also discussed how a national assessment could identify best practices in biosurveillance and inform state and federal resource decision making. For example, public-health officials from one state said that information about the capability needed to support a national biosurveillance capability would be helpful to support lessons learned and identify best practices. Similarly, wildlife officials from one state said they lack knowledge about the types of wildlife surveillance conducted by other states and other states’ baseline capabilities. They said an assessment of capabilities could determine how their efforts compare to those of other states, which would provide information to state decision makers to guide resource decisions. According to public-health officials from another state, some gaps in biosurveillance are already fairly well understood—such as electronic lab reporting and workforce sufficiency. 
These officials said that a formalized national assessment would bring these gaps to the attention of federal agencies, and they hoped that federal agencies would address these gaps with additional funding, guidelines, and the prioritization of investments. Although federal, state, and local officials we interviewed generally agreed that a comprehensive national assessment may improve the nation’s ability to conduct biosurveillance, all the officials we interviewed acknowledged that such an assessment would be a complex undertaking. Federal, state, and local officials said the size, variability, and complexity of the biosurveillance enterprise—including federal, state, and local biosurveillance efforts—make it difficult to define precisely what should be measured and to identify the most appropriate assessment participants. For example, public-health officials from one state said it would be important to identify definitions and create measurements with which to evaluate capacities; otherwise, it would be difficult to maintain a narrow scope for the assessment. They also noted that the development of this type of assessment would require the input of multiple stakeholders. Other officials also noted that it may be difficult to identify the most appropriate parties to provide information for the assessment. For example, agriculture officials from one state said that identifying the most appropriate person to complete the assessment would be difficult, because a state veterinarian will have a different perspective from someone who regularly works in the field. The difficulty in conducting a comprehensive national assessment is exacerbated not only by the magnitude of the undertaking—assessing the capabilities of the states, tribes, insular areas, and the tens of thousands of localities in the United States—but also by the lack of a clear mission and a vision for the desired end state of a national biosurveillance capability. 
In our June 2010 strategy recommendation, we noted that the National Security Staff and its focal point should define the mission and desired end state. Until it conducts an assessment of nonfederal biosurveillance capabilities, the federal government will continue to lack key information about the baseline status, strengths, weaknesses, and gaps across the biosurveillance enterprise to guide development and maintenance of a national biosurveillance capability. Officials we interviewed at all levels, as well as federal guidance and directives like HSPD-21, acknowledge that a national biosurveillance capability necessarily rests on the cumulative capabilities of the state and local agencies that constitute a large portion of the biosurveillance enterprise. A national strategy like the one we recommended in June 2010—one capable of guiding federal agencies and their key stakeholders to systematically identify risks, resources to address those risks, and investment priorities—may be better positioned to guide development and maintenance of the capability if it takes into account the particular challenges and opportunities inherent in partnering with nonfederal jurisdictions such as state, tribal, local, and insular governments. Moreover, efforts to build the capability would benefit from a framework that facilitates assessment of nonfederal jurisdictions’ baseline capabilities and critical gaps across the entire biosurveillance enterprise. A key component of preparedness for a potentially catastrophic biological event is the ability to detect a dangerous pathogen early and assess its potential spread and effect. Experts have noted, and our reviews of both federal and nonfederal government biosurveillance activities confirm, that the federal government has undertaken numerous efforts to support timely detection and situational awareness for potentially catastrophic biological events, but these efforts are not well integrated. 
As we reported in June 2010, current efforts lack a unifying framework and structure for integrating dispersed capabilities and responsibilities across the biosurveillance enterprise. Further, we noted that without this unifying framework, it will be difficult to create an integrated approach to building and sustaining a national biosurveillance capability as envisioned in HSPD-21. Officials at all levels of government, as well as HSPD-21’s vision of a national biosurveillance capability, acknowledge that state and local capabilities are at the heart of the biosurveillance enterprise. According to federal, state, and local officials, early detection of potentially serious disease indications nearly always occurs first at the local level, making the personnel, training, systems, and equipment that support detection at the state and local level a cornerstone of our nation’s biodefense posture. Therefore, to be most effective, a national biosurveillance strategy like the one we recommended in June 2010—one capable of guiding federal agencies and their key stakeholders to systematically identify risks, resources to address those risks, and investment priorities—would address the particular challenges and opportunities inherent in partnering with state and local jurisdictions. Moreover, efforts to build the capability would benefit from a framework that facilitates assessment of jurisdictions’ baseline capabilities and critical gaps across the entire biosurveillance enterprise. 
In order to help build and maintain a national biosurveillance capability in a manner that accounts for the particular challenges and opportunities of reliance on state and local partnerships, we recommend the Homeland Security Council direct the National Security Staff to take the following action as part of its implementation of our previous recommendation for a national biosurveillance strategy: ensure that the national biosurveillance strategy (1) incorporates a means to leverage existing efforts that support nonfederal biosurveillance capabilities, (2) considers challenges that nonfederal jurisdictions face in building and maintaining biosurveillance capabilities, and (3) includes a framework to develop a baseline and gap assessment of nonfederal jurisdictions’ biosurveillance capabilities. We provided a draft of this report for review to the National Security Staff, DHS, HHS, DOI, USDA, and the Department of Justice, as well as to the state and city officials who contributed to our review. The National Security Staff acknowledged the accuracy of the information contained in the report but did not comment on the recommendation. DHS provided a written response to the draft report, which is summarized below and presented in its entirety in appendix V of this report. USDA provided an oral response that is summarized below. DHS, HHS, DOI, USDA, the Department of Justice, the North Carolina Division of Public Health, and the Utah Department of Agriculture and Food provided technical comments, which we incorporated where appropriate. In written comments, DHS concurred with our findings. DHS noted that its National Biosurveillance Integration Center has key biosurveillance roles and responsibilities, and stated that to support the Center’s mission, DHS is working with the National Security Staff on the Sub-Interagency Policy Committee on Biosurveillance. 
DHS further stated that it understands the importance of and supports the inclusion of nonfederal biosurveillance resources in the National Biosurveillance Strategy under development. In oral comments, USDA concurred with our findings and recommendations. Specifically, USDA’s Animal and Plant Health Inspection Service’s Veterinary Services and Wildlife Services supported our recommendation to leverage support, consider challenges, and develop a framework to understand the current capacity and conduct a needs assessment for nonfederal entities to conduct biosurveillance activities. USDA stated that it will continue to work with the National Security Staff in development of the National Biosurveillance Strategy. USDA noted that its Animal and Plant Health Inspection Service has an established national program—the National Wildlife Disease Program—that is currently available to provide the infrastructure and leadership necessary to implement these recommendations and should be incorporated into an integrated system. USDA noted that the program has a history of providing leadership for national surveillance during various outbreaks, which demonstrates its overall ability to develop and maintain broad local, state, tribal, and private efforts to conduct targeted biosurveillance activities. We are sending copies of this report to the Special Assistant to the President for National Security Affairs; the Attorney General; the Secretaries of Homeland Security, Health and Human Services, Agriculture, and the Interior; and interested congressional committees. The report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
To address our objectives, we reviewed key legislation and presidential directives related to biosurveillance, including the Homeland Security Act of 2002; the Public Health Security and Bioterrorism Preparedness and Response Act of 2002; the Pandemic and All Hazards Preparedness Act of 2006; and Homeland Security Presidential Directives (HSPD) 9, 10, and 21. This report focuses on surveillance efforts for zoonoses—diseases affecting animals and humans—and other emerging infectious diseases with the potential to cause catastrophic human-health effects. Our work issued in June 2010 on biosurveillance efforts at the federal level explored surveillance for the following biosurveillance domains: human health, animal health, plant health, food, and the environment (specifically, air and water). Given further complexity arising from the number of and variation among states, localities, tribes, and insular areas, we narrowed the disease scope for this report. We focused on zoonotic disease agents because of the particular threats associated with them and because threats from zoonotic disease agents clearly illustrate the potential benefits of an integrated biosurveillance capability. Given the focus on surveillance for zoonoses and other emerging infectious diseases in humans, certain federal efforts—for example, the Department of Homeland Security’s air-monitoring system BioWatch—are not discussed. Similarly, certain types of waterborne, foodborne, plant, or animal diseases—for example, Foot and Mouth Disease—that could have devastating economic consequences or dire human-health effects are not the focus of this report. 
At the federal level, we consulted officials at the Departments of Agriculture, Homeland Security, Health and Human Services, and the Interior, which have key missions, statutory responsibilities, directives, or programmatic objectives for biosurveillance activities within the scope of this report, including protecting human and animal health and national security. We also discussed biosurveillance issues at the state and city level with officials from the Department of Justice’s Federal Bureau of Investigation. To develop background on and contextual understanding of the federal efforts that support state biosurveillance capabilities and the challenges officials face building and maintaining those capabilities, we interviewed officials from 10 professional associations and research organizations and asked for recommendations on factors to consider when selecting states for site visits. We interviewed officials from the following organizations: the Council of State and Territorial Epidemiologists; Trust for America’s Health; the National Association of State Public Health Veterinarians; the American Phytopathological Society; the Association of Public Health Laboratories; the U.S. Animal Health Association; the American Association of Veterinary Laboratory Diagnosticians; the Association of State and Territorial Health Officials; the International Society for Disease Surveillance; and OneHealth. On the basis of information collected during interviews with officials from professional associations and research organizations and a review of published reports and studies, we identified several factors that could be associated with variability in approaches, philosophies, and challenges faced by states in conducting biosurveillance. We selected seven states for site visits with the dual goals of capturing variation on each of these factors and accounting for each factor in the commonalities identified across the states we visited. 
The factors we identified and their application to our site selection are shown in table 6. We visited coastal states in the eastern and western United States, as well as noncoastal states. We also visited states with large urban populations and states with more rural populations. In addition, we visited at least one state that has an international border. We visited at least one state with a centralized public-health structure, at least one with a decentralized public-health structure, at least one with a shared or mixed relationship, and at least one with no local public-health departments. We visited at least one state identified by professional association officials as having strong public-health capabilities as a result of leadership and political will, connections between public and animal health, or attention to health security as a public-health and national-security issue. We also visited at least one state that the professional association officials identified as part of a group of states that had chronically struggled with resource issues. According to association officials we interviewed, the extent to which a state has agricultural interests has a bearing on its animal-health resources and programs. We visited at least one state with a large industry presence for one or more of the following types of agriculture: commercial fishing, chickens, turkeys, hogs, and cattle. In 2007 and 2008, the Association of State and Territorial Health Officials surveyed the states for their State-by-State Profile of Public Health. As part of that effort, the association asked states to select from a list indicating their top five priorities. Within the list were two priorities particularly relevant to health preparedness generally and biosurveillance capabilities specifically. Respectively, these are (1) assuring preparedness for a health emergency and (2) focusing on early detection or population-protection measures. 
We selected at least one state that selected neither of the priorities and at least one state that selected one or both. There were no states in our sample that selected priority (2) but did not select priority (1). The states selected were California, Colorado, Delaware, Mississippi, New Jersey, North Carolina, and Utah. In every state, we interviewed three groups of officials:
1. Officials in public-health departments, including state epidemiologists, who had responsibility for infectious-disease control, disease monitoring, and emergency response in humans.
2. Officials, generally including the state veterinarian, in state agriculture departments who had responsibility for infectious-disease control and monitoring in livestock and poultry.
3. Officials in various departments that included wildlife infectious-disease control and monitoring in their missions. For example, one of these was a State Department of Wildlife and Fisheries.
We also interviewed public-health officials with responsibility for human infectious-disease control and monitoring in two cities with an increased risk of bioterrorism—New York City and Washington, D.C.—that received direct funding from federal agencies to support preparedness capabilities. We analyzed the information collected during state and city interviews and developed follow-up questionnaires to confirm and enhance information from the interviews about the federal programs and initiatives that support state and local biosurveillance capabilities and the challenges officials face. We sent follow-up questionnaires to public-health departments in all seven states and two cities and to agriculture and wildlife officials in the seven states. Within each public-health department, we sent separate questionnaires to laboratory and epidemiology officials. In total, we distributed 32 questionnaires and received 27 responses. 
Of the 27 respondents, 7 were epidemiologists, 7 were public-health laboratory officials, 6 were state agriculture officials, and 7 were state wildlife officials. All of the public-health, agriculture, and wildlife departments represented by the 27 respondents had also been represented in our initial interviews. However, in 7 cases—6 laboratory directors and 1 state veterinarian—the lead officials to whom we directed the questionnaire had not been present at the initial interviews. We pretested the public-health questionnaire with a laboratory official who was not at the original interviews in order to ensure that the questions could be understood outside of the context of the interview. Each questionnaire had two sections: one on federal support to states and cities and one on challenges faced by states and cities. The content of the federal support section varied for human-health and animal-health respondents, but the challenges section was the same for both human- and animal-health respondents. The specific federal programs and challenges we asked about were based on initial interviews with the different groups of respondents. We asked respondents to consider federal efforts over the last 2 years. Because the states and cities in this report were not selected in a probability sample, neither the information derived from interviews with officials nor the questionnaire responses are generalizable across the 50 states or the tens of thousands of localities in the United States. Rather, both the interviews and the questionnaire results offer some perspective on the value of select federal activities to, and challenges faced by, a group of state officials who are actively engaged in efforts to detect and respond to major disease events. 
In addition, although we interviewed officials responsible for public-health emergency management in most state public-health departments that we visited, we did not administer follow-up questionnaires to the officials responsible for planning and preparing for emergency response, because their response focus was generally not central to our scope. Because this report focuses on detection of and situational awareness of potentially catastrophic zoonotic and emerging infectious-disease events, certain federal efforts that federal agencies consider important in supporting state and local preparedness may not have been identified by state and city officials during our interviews and follow-up questionnaires. To consider the relationship between our findings at the nonfederal level and our previous findings at the federal level about building and maintaining a national biosurveillance capability, we reviewed our June 2010 findings about the centrality of nonfederal capabilities to a biosurveillance enterprise. We also reviewed our June 2010 findings about the purpose of a national biosurveillance strategy and the benefits it could provide for guiding the effort to support a national biosurveillance capability. We determined that because the federal government relies on nonfederal resources to support a national biosurveillance capability, our June 2010 findings about using the strategy to determine how to leverage resources, weigh the costs and benefits of investments, and define roles and responsibilities were particularly germane to the federal government’s efforts to partner with nonfederal biosurveillance enterprise partners to support a national biosurveillance capability. To understand how the federal government supports biosurveillance in tribal and insular areas, we consulted officials from components of federal departments with responsibility for working with tribal or insular councils and governments, generally, or on health-related matters. 
These included the Department of Health and Human Services’ Indian Health Service; the Department of Health and Human Services’ Centers for Disease Control and Prevention’s (CDC) Office of State, Tribal, Local and Territorial Support; CDC’s Office of Surveillance, Epidemiology, and Laboratory Services; CDC’s National Center for Emerging and Zoonotic Infectious Diseases; the Department of Agriculture’s Office of Tribal Relations; the Department of Agriculture’s Animal and Plant Health Inspection Service; the Department of the Interior’s Bureau of Indian Affairs; and the Department of the Interior’s Office of Insular Affairs. In addition, to develop additional background and context about health infrastructure and surveillance in insular areas, we interviewed representatives from the Pacific Island Health Officers Association (PIHOA), which works in the U.S.-Affiliated Pacific Islands to strengthen crosscutting public-health infrastructure, including health-workforce development, quality assurance, health data systems, public-health planning, and public-health laboratories. The findings in this report about insular areas focus on the U.S.-Affiliated Pacific Islands. With the exception of Puerto Rico and the U.S. Virgin Islands, all commonwealths, territories, possessions, and freely associated states of the United States fall within the U.S.-Affiliated Pacific Islands. To evaluate the extent to which the federal government has assessed nonfederal governments’ capacity to contribute to a national biosurveillance capability, we reviewed relevant presidential directives and federal-agency documents like the National Biosurveillance Strategy for Human Health, along with our prior work and recommendations on building and maintaining a national biosurveillance capability. 
We determined that such an assessment is called for in HSPD-10 and CDC’s National Biosurveillance Strategy for Human Health and is a critical activity for developing an effective national strategy containing the elements we advocated in prior work on national strategies. To determine what types of assessment activities had been undertaken and whether an enterprisewide assessment of nonfederal biosurveillance capabilities had been conducted, we reviewed relevant assessments and federal documents like the Council of State and Territorial Epidemiologists’ 2009 National Assessment of Epidemiology Capacity and CDC’s Public Health Preparedness series. In addition, we interviewed federal officials at all five federal departments, state officials in each of the seven states, city officials in the two cities, and officials at 10 professional and research institutions that include public health, animal health, or laboratories in their missions about assessment efforts, including whether they had participated in or had any familiarity with an enterprisewide assessment of nonfederal capabilities. We conducted this performance audit from August 2010 to October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Public-health and animal-health laboratories serve a critical role in both initial detection and ongoing situational awareness of biological events. This appendix contains the results of our follow-up questionnaire for each of the four categories of federal programs and initiatives that state and city officials identified during interviews. 
Presented below are the questions and response totals for the follow-up questionnaires we sent to (1) state and city public-health epidemiology officials (called the Epidemiology group in this appendix), (2) state and city public-health laboratory officials (the Laboratory group), (3) state agriculture officials (the Agriculture group), and (4) state wildlife officials (the Wildlife group), by group, along with descriptions of the federal programs and initiatives listed.

Supports Capability Enhancement: Without this support, core functions are adequately maintained, but enhanced biosurveillance methods and mechanisms cannot be built or maintained.

The content of the questionnaire varied for the different respondent groups. For example, public-health officials (the Epidemiology and Laboratory groups) were asked about some information sharing and analytical products, whereas animal-health officials (the Agriculture and Wildlife groups) were asked about others. This was based on earlier interviews with these different groups of officials. Of the 27 officials who responded to these questionnaires, 7 were from the Epidemiology group, 7 were from the Laboratory group, 6 were from the Agriculture group, and 7 were from the Wildlife group. For more detail on the method by which these questionnaires were administered, see appendix I. Table 16 shows the results of our follow-up questionnaire for the question concerning challenges that state and local officials may face in building and maintaining biosurveillance capabilities. Presented below are the question and response totals for the follow-up questionnaires we sent to (1) state and city public-health epidemiology officials, (2) state and city public-health laboratory officials, (3) state agriculture officials, and (4) state wildlife officials, by group, along with descriptions of the challenges identified.

Question: How do you classify the following challenges as they currently pertain to your area of responsibility? 
In addition to the contact named above, Edward George, Assistant Director; Amanda Jones Bartine; Michelle Cooper; Kathryn Godfrey; Susanna Kuebler; and Heather Romani made significant contributions to the work. Tina Cheng assisted with graphic design. Amanda Miller and Russ Burnett assisted with design, methodology, and analysis. Stuart Kaufman assisted with design and administration of the follow-up questionnaire. Tracey King provided legal support. Linda Miller provided communications expertise.
The nation is at risk for a catastrophic biological event. The Implementing Recommendations of the 9/11 Commission Act directed GAO to report on biosurveillance--to help detect and respond to such events--at multiple jurisdictional levels. In June 2010, GAO recommended that the National Security Staff lead the development of a national biosurveillance strategy, which is now under development. This report focuses on nonfederal jurisdictions, which own many of the resources that support a national capability. It discusses (1) federal support for state and local biosurveillance; (2) state and local challenges; (3) federal support and challenges for tribal and insular areas; and (4) federal assessments of nonfederal capabilities. To conduct this work, GAO interviewed select federal-agency, jurisdiction, and association officials and reviewed relevant documents. To collect information on federal efforts and challenges, GAO also sent standardized questionnaires to seven states and two cities. The federal government has efforts to support health preparedness that state and city officials identified as critical to their biosurveillance capabilities. The efforts these officials identified fell into four categories: (1) grants and cooperative agreements, (2) nonfinancial technical and material assistance, (3) guidance, and (4) information sharing. Within each of the categories, the officials identified specific federal efforts that were essential to their biosurveillance activities. For example, public-health officials described cooperative agreements from the Centers for Disease Control and Prevention that provided resources for disease investigation, as well as guidance on federal priorities. However, as with GAO's June 2010 findings about federal biosurveillance, in the absence of a national strategy, these efforts are not coordinated or targeted at ensuring effective and efficient national biosurveillance capabilities. 
Because the resources that constitute a national biosurveillance capability are largely owned by nonfederal entities, a national strategy that considers how to leverage nonfederal efforts could improve efforts to build and maintain a national biosurveillance capability. State and city officials identified common challenges to developing and maintaining their biosurveillance capabilities: (1) state policies that restrict hiring, travel, and training in response to budget constraints; (2) ensuring adequate workforce, training, and systems; and (3) the lack of strategic planning and leadership to support long-term investment in cross-cutting core capabilities, integrated biosurveillance, and effective partnerships. A national biosurveillance strategy that considers planning and leadership challenges at all levels of the biosurveillance enterprise may help partners across the enterprise find shared solutions for an effective national biosurveillance capability. The federal government provides some resources to help control disease in humans and animals in tribal and insular areas, but there are no specific efforts to ensure these areas can contribute to a national biosurveillance capability. Resources include cooperative agreements, disease-specific funding, training, and technical assistance. Surveillance capacity varies among tribes and insular areas, but common challenges include limited health infrastructure, including human- and animal-health professionals and systems. The federal government has not conducted a comprehensive assessment of state and local jurisdictions' ability to contribute to a national biosurveillance capability, as called for in a presidential directive. According to federal, state, and local officials, the magnitude and complexity of such an assessment is a challenge. Until it conducts such an assessment, the federal government will lack key information to support a national biosurveillance capability. 
A national strategy like the one we previously recommended--one capable of guiding federal agencies and their key stakeholders to systematically identify gaps, resources to address those gaps, and investment priorities--would benefit from an assessment of jurisdictions' baseline capabilities and critical gaps across the entire biosurveillance enterprise. GAO recommends that the National Security Staff ensure the strategy considers (1) existing federal efforts, (2) challenges, and (3) assessment of nonfederal capabilities. GAO provided a draft of this report to the National Security Staff and to the federal, state, and city officials who contributed information. The National Security Staff acknowledged the accuracy of the report but did not comment on the recommendation.
Approximately 49 million elderly and disabled individuals were enrolled in Medicare in 2011, of whom about 29 million were enrolled in Part D. Medicare beneficiaries obtain Part D coverage by choosing from multiple, competing plans offered by plan sponsors—often private insurers—that contract with CMS to offer the prescription drug benefit. About 63 percent of the approximately 29 million Part D beneficiaries were enrolled in stand-alone prescription drug plans (PDP), which add drug coverage to original fee-for-service Medicare and certain Medicare plans, and approximately 37 percent were enrolled in Medicare Advantage prescription drug plans (MA-PDP), which provide Medicare benefits and prescription drug coverage through a single privately managed plan (see table 1 for the number of beneficiaries enrolled by plan type). Of the approximately 29 million beneficiaries enrolled in Medicare Part D, about 36 percent were LIS beneficiaries and approximately 64 percent were non-LIS beneficiaries (see table 1 for the number of beneficiaries enrolled in PDPs and MA-PDPs who were LIS and non-LIS). In 2011, federal spending on Part D totaled approximately $67 billion, accounting for about 12 percent of total Medicare expenditures. Medicare Part D spending depends on several factors, including the number of beneficiaries, their health status and extent of drug utilization, and the cost of drugs covered by Part D. In its 2012 report to Congress, the Medicare Payment Advisory Commission (MedPAC) reported that prices for individual Part D drugs (brand-name and generics) rose by an average of 18 percent cumulatively between January 2006 and December 2009. To help keep Part D spending down, CMS relies on competing plan sponsors to negotiate drug prices for the beneficiaries in their plans. 
Medicare Part D plan sponsors may contract with PBMs to negotiate price discounts with retail pharmacies and rebates with drug manufacturers for the drugs a plan covers, or plan sponsors may independently negotiate directly with pharmacies and manufacturers. The price discounts that plan sponsors negotiate with pharmacies are based on drug prices that manufacturers establish and generally result in a lower price that a beneficiary pays at the point-of-sale. In comparison, the rebates that plan sponsors negotiate with drug manufacturers are passed on to plan sponsors, who may use them to lower beneficiary costs, including premiums. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), which established Medicare Part D, required that all Part D plan sponsors offer a minimum set of benefits to beneficiaries, defined as the standard Part D benefit. For non-LIS beneficiaries, this benefit features a deductible (a fixed dollar amount that beneficiaries must pay before coverage takes effect) and an initial coverage period during which the beneficiary pays a coinsurance (or percentage share of the drug’s actual costs) for prescription drugs until the beneficiary reaches the initial coverage limit. After the initial coverage period, the beneficiary enters the coverage gap, which is followed by the catastrophic coverage period in which he or she pays a small amount of the total drug costs. Beneficiaries must also pay a monthly premium to be enrolled in a Part D plan. LIS beneficiaries do not pay the same out-of-pocket costs as non-LIS beneficiaries since they receive subsidies to assist them with their out-of-pocket drug costs. 
In 2011, out-of-pocket costs for non-LIS beneficiaries in defined standard benefit plans in the initial coverage period included a $310 deductible and 25 percent coinsurance (with the plan paying the remaining 75 percent) until the total combined drug costs paid by the beneficiary and the Part D plan reached the initial coverage limit of $2,840. The beneficiary then entered the coverage gap until total drug costs reached the 2011 catastrophic coverage threshold of $6,447.50. Once this threshold was reached, the beneficiary paid the greater of either a $2.50 to $6.30 copayment or 5 percent coinsurance per prescription during the catastrophic period. Prior to 2011, non-LIS beneficiaries in the defined standard benefit plan were responsible for 100 percent of their drug costs while in the coverage gap. In 2011, over 90 percent of beneficiaries in PDPs and MA-PDPs were enrolled in actuarially equivalent or enhanced plans. Most of these beneficiaries, however, do not have coverage for brand-name drugs in the coverage gap. See Medicare Payment Advisory Commission, March 2012 Report to the Congress: Medicare Payment Policy. Plan sponsors may require beneficiaries to pay a higher coinsurance or copayment amount, for example, for certain high-cost drugs, such as specialty-tier eligible drugs that treat conditions such as cancer, multiple sclerosis, and rheumatoid arthritis. Plan sponsors also select whether any utilization management practices apply for each listed drug, such as limits on the amount of drug that can be provided. The Discount Program began in January 2011 after being established in 2010 by PPACA to reduce beneficiaries’ out-of-pocket drug costs when they reach the coverage gap. 
Non-LIS beneficiaries are eligible for the discount if they are enrolled in a PDP or MA-PDP, are not enrolled in a qualified retiree prescription drug plan, and have reached or exceeded the initial coverage limit during the year. Beneficiaries that are enrolled in enhanced plans providing some coverage for brand-name drugs when they reach the coverage gap may also receive the discount after supplemental benefits are applied. PPACA required that manufacturers wishing to have their brand-name drugs covered under the Medicare Part D program participate in the Discount Program. To participate in the Discount Program, manufacturers must sign an agreement with CMS to provide non-LIS beneficiaries a 50 percent discount on the plan-negotiated price for brand-name drugs at the point-of-sale when non-LIS beneficiaries reach the coverage gap. In addition, PPACA stipulated that both the portion of drug costs for brand-name drugs paid by the beneficiary and the portion paid by the manufacturer count toward reaching the beneficiary’s annual catastrophic coverage threshold. As a result, beneficiaries’ out-of-pocket costs will be significantly reduced. (See app. III for information about how the Discount Program works for beneficiaries at the point-of-sale for brand-name drugs.) Separately, PPACA also included provisions that phase out the coverage gap gradually through 2020 by providing Medicare subsidies to help pay for the cost of brand-name and generic prescription drugs in the gap for non-LIS beneficiaries. Specifically, beginning in 2013, Medicare will pay 2.5 percent of the plan-negotiated price for brand-name drugs. Medicare will increase its subsidy to 25 percent for brand-name drugs by 2020, while manufacturers will continue to pay the 50 percent discount through 2020 and in subsequent years for a combined 75 percent payment towards brand-name drugs for beneficiaries. 
Additionally, beginning in January 2011, Medicare paid 7 percent of the plan-negotiated price for generic drugs while beneficiaries paid 93 percent of the cost when they reached the coverage gap. Medicare will increase its subsidy to 75 percent for generic drugs by 2020. CMS encourages beneficiaries to use generic drugs to reduce their out-of-pocket spending for drugs, which also helps keep Medicare Part D spending down. In 2010, about 75 percent of drugs dispensed in Medicare Part D were generic, according to CMS. The coverage gap will be eliminated by 2020 as the beneficiary’s coinsurance for brand-name and generic drugs will be reduced to 25 percent—the same coinsurance amount as required during the initial coverage period. See table 2 for beneficiary coinsurance and Medicare subsidy amounts for brand-name and generic drugs through 2020. Figure 1 shows a comparison of a non-LIS beneficiary’s out-of-pocket spending for prescription drugs when the beneficiary reaches the coverage gap under the standard benefit plan without and with implementation of the Discount Program in 2011. If the Discount Program had not been in place, non-LIS beneficiaries in the standard benefit would have been responsible for $3,607.50 in drug costs during the coverage gap in 2011 ($6,447.50 annual catastrophic threshold - $2,840 initial coverage limit = $3,607.50). With the Discount Program, non-LIS beneficiaries would pay $1,803.75 in drug costs when using only brand-name drugs during the coverage gap. Plan sponsors, drug manufacturers, and CMS each have responsibilities for carrying out the Discount Program. Plan sponsors are responsible for making payments at the point-of-sale for the 50 percent discount for brand-name drugs on behalf of manufacturers, providing information to pharmacies about beneficiaries and the drugs subject to the discount, and reporting discount amounts to CMS. 
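The coverage-gap arithmetic above can be reproduced directly. A minimal sketch, using the 2011 dollar amounts cited in the text (the constant and function names are our own, for illustration only):

```python
# 2011 defined standard benefit parameters, as cited in the report.
INITIAL_COVERAGE_LIMIT = 2840.00   # initial coverage limit
CATASTROPHIC_THRESHOLD = 6447.50   # annual catastrophic coverage threshold
BRAND_DISCOUNT = 0.50              # manufacturer discount on brand-name drugs

def gap_cost_without_discount():
    """Beneficiary drug costs in the coverage gap before the Discount Program."""
    return CATASTROPHIC_THRESHOLD - INITIAL_COVERAGE_LIMIT

def gap_cost_with_discount():
    """Beneficiary share in the gap if all drugs filled are brand-name drugs."""
    return gap_cost_without_discount() * (1 - BRAND_DISCOUNT)
```

Evaluating these reproduces the report's figures: $3,607.50 without the discount and $1,803.75 with it.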
In order for the discount to be provided at the point-of-sale to beneficiaries, plan sponsors determine: (1) that the drug is an applicable drug; (2) that the beneficiary is eligible for the discount; (3) that the pharmacy claim for the drug is wholly or partially in the coverage gap; and (4) the amount of the discount. After the beneficiary receives the discount at the point-of-sale, plan sponsors are responsible for recording the amount of the discount that was paid for the drug, along with information such as the associated sales tax and dispensing fee. The plan sponsors include this information on the PDE record, a summary record for each prescription that a beneficiary fills. Plan sponsors must submit PDE records to CMS. Drug manufacturers are responsible for making payments to plan sponsors for the discounts sponsors provide on applicable drugs and maintaining up-to-date listings of drugs that are subject to the discount, as stated in the Discount Program Agreement. Manufacturers are required to reimburse plan sponsors for the discounts for applicable drugs that plan sponsors paid on their behalf at the point-of-sale. Manufacturers are also responsible for maintaining an up-to-date electronic Food and Drug Administration (FDA) registration and listing of all national drug codes (NDC) so that CMS and plan sponsors can accurately identify applicable drugs in the Discount Program. CMS is responsible for making prospective payments to plan sponsors, invoicing manufacturers, and overseeing the Discount Program, as stated in the Discount Program Agreement. CMS makes monthly Part D prospective payments to plan sponsors for providing prescription drug benefits to Medicare beneficiaries, which includes payments for providing discounts to beneficiaries. The prospective payments are calculated with information such as the number of beneficiaries enrolled in a plan and their projected drug costs. 
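The four point-of-sale determinations described above can be sketched as a simple eligibility function. This is an illustration only, not CMS's or any sponsor's actual claims-processing logic; the parameter names and claim layout are our own assumptions:

```python
def point_of_sale_discount(is_applicable_drug, is_lis,
                           in_qualified_retiree_plan, gap_portion_of_claim):
    """Return the manufacturer discount for the portion of a claim that
    falls in the coverage gap, or 0.0 if the claim is ineligible.

    gap_portion_of_claim: dollar amount of the plan-negotiated price that
    falls wholly or partially within the coverage gap (an assumed input).
    """
    # (1) The drug must be an applicable (agreement-covered brand-name) drug.
    if not is_applicable_drug:
        return 0.0
    # (2) The beneficiary must be eligible: non-LIS and not enrolled in a
    #     qualified retiree prescription drug plan.
    if is_lis or in_qualified_retiree_plan:
        return 0.0
    # (3) Some portion of the claim must fall in the coverage gap.
    if gap_portion_of_claim <= 0:
        return 0.0
    # (4) The discount is 50 percent of the gap portion of the negotiated price.
    return round(0.50 * gap_portion_of_claim, 2)
```

For example, an eligible $100 gap claim for an applicable drug yields a $50.00 discount, while an LIS beneficiary's claim yields none.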
CMS is also responsible for aggregating and validating the discount amounts that plan sponsors have paid, as reported on the PDE records. Upon aggregating the amount of the discounts that plan sponsors have paid, CMS sends this information to its third-party administrator (TPA), which is responsible for invoicing the manufacturers on a quarterly basis. CMS also monitors plan sponsors’, manufacturers’, and the TPA’s compliance with their program responsibilities. CMS oversees the provision of discounts by plan sponsors to eligible beneficiaries who reach the coverage gap, and ensures that the discounts are paid for by drug manufacturers. CMS oversight activities include performing checks of prescription drug data to verify that plan sponsors provide accurate discounts at the point-of-sale to eligible beneficiaries who reach the coverage gap. CMS also tracks the payment of discounts by drug manufacturers to plan sponsors and has implemented a dispute resolution process to resolve manufacturer disputes about discounts. In addition, CMS performs other activities, such as monitoring beneficiary complaints, and has reported on certain Discount Program outcomes. CMS performs 15 automated checks of PDE data specific to the Discount Program that verify whether plan sponsors have provided and accurately calculated discounts at the point-of-sale to eligible beneficiaries who reach the coverage gap. The PDE data checks include verifying that plan sponsors have provided discounts to beneficiaries who are eligible for a discount; for example, by checking beneficiaries’ LIS status and their accumulated drug costs to confirm that they have reached the coverage gap within the benefit year. The PDE data checks also verify whether plan sponsors have accurately calculated discounts for beneficiaries. 
For example, CMS calculates an expected discount amount based on the brand-name drug price that is recorded on the PDE, and compares it with the discount amount that the plan sponsor records on the PDE. CMS provides plan sponsors with detailed information about any errors the agency identifies through the PDE data checks. Plan sponsors are responsible for correcting these errors and resubmitting the PDE records to CMS. CMS officials told us that an additional use of the PDE data checks is to prevent fraud in the Discount Program, since CMS uses PDE data to determine the final payment amounts owed to plan sponsors by comparing actual costs to the prospective payments that CMS makes to plan sponsors, which include payment for the discounts plan sponsors provide to beneficiaries at the point-of-sale. CMS officials also told us they review the validity of plan sponsors’ PDE records for discounts as part of CMS’s annual onsite audits of plan sponsors. CMS periodically provides guidance to plan sponsors about reporting Discount Program information on the PDE record and the agency’s 15 PDE data checks. For example, in April 2010, CMS issued guidance to plan sponsors on the requirements and procedures for implementing the Discount Program, including how to calculate discounts for eligible beneficiaries enrolled in the defined standard benefit plan and how to record discount information using the PDE data fields that are specific to the Discount Program. Since the Discount Program was implemented in January 2011, CMS has issued further guidance to plan sponsors regarding the 15 PDE data checks. For example, in September 2011, CMS issued a memo to plan sponsors that explained how CMS plans to conduct PDE data checks that verify the status of brand-name drugs that received discounts using the FDA’s updated NDC directory, which identifies brand-name drugs. 
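One of the PDE data checks described above, comparing an expected discount against the amount a plan sponsor recorded, could look like the following sketch. The record fields, tolerance, and error messages are assumptions; the report does not describe CMS's actual edit logic in this detail:

```python
def check_discount_record(pde, tolerance=0.01):
    """Flag a PDE-style record whose reported gap discount differs from
    50 percent of the gap-covered portion of the drug's negotiated price,
    or whose beneficiary is an LIS beneficiary (and thus ineligible).

    `pde` is an assumed dict layout, not the actual PDE record format.
    Returns a list of error strings (empty if the record passes).
    """
    errors = []
    expected = round(0.50 * pde["gap_covered_price"], 2)
    if abs(pde["reported_discount"] - expected) > tolerance:
        errors.append(
            f"discount mismatch: expected {expected:.2f}, "
            f"got {pde['reported_discount']:.2f}"
        )
    if pde["lis_status"]:
        errors.append("LIS beneficiary is not eligible for the discount")
    return errors
```

A record with a $100 gap-covered price and a $50 reported discount passes; a $40 reported discount would be flagged for the sponsor to correct and resubmit.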
CMS tracks the payment of discounts by drug manufacturers to plan sponsors and can impose penalties for failure to pay. CMS officials reported that they track manufacturers’ payments to plan sponsors for discounts by reviewing confirmation reports that plan sponsors submit to the agency when they receive payments from manufacturers. Manufacturers receive quarterly invoices from the TPA for discount payments owed to plan sponsors based on aggregated PDE data. Manufacturers pay plan sponsors directly and plan sponsors submit a confirmation report to CMS upon the receipt of these payments. To ensure manufacturers make payments to plan sponsors for discounts, CMS may impose civil monetary penalties on drug manufacturers that fail to pay plan sponsors for the discounts. CMS officials told us a few manufacturers have been late in submitting payments to plan sponsors due to technical issues, and that one manufacturer did not submit payment because the company went bankrupt. CMS officials said they have not imposed any penalties on manufacturers as of July 2012. CMS has implemented a dispute resolution process that allows manufacturers to dispute discounts they have paid to plan sponsors if they find problems with the quarterly invoices. Manufacturers can submit a dispute within 60 days of receipt of the quarterly invoices to the TPA, which is responsible for determining if the dispute is valid and makes adjustments to manufacturers’ invoices as necessary. Manufacturers have the right to appeal the TPA’s determination through an independent review entity established by CMS. If a manufacturer disagrees with the independent review entity’s determination, it may request review by CMS, with CMS having the final decision on the dispute determination. In March 2012, CMS issued guidance providing manufacturers with detailed information about the basis for submitting disputes and CMS’s process for evaluating dispute submissions. 
For example, CMS explained that manufacturers may submit a dispute for a discount amount included in an invoice because they believe it is too high, and such disputes would be evaluated by analyzing the drug’s price relative to all other PDE records for the same drug. If it is determined that the price falls within an acceptable range, the dispute would be denied. In October 2011, CMS issued updated guidance on the dispute resolution process that expanded the time frames for manufacturers to appeal the TPA’s determinations to the independent review entity from 60 days to 90 days. See CMS, Medicare Coverage Gap Discount Program – Updated Guidance (Baltimore, Md.: Oct. 28, 2011). CMS performs other oversight activities of the Discount Program that include maintaining a list of codes identifying drugs covered under the program, monitoring beneficiary complaints, and conducting audits of manufacturers: The agency maintains a list of codes (called labeler codes) identifying drugs covered under the Discount Program, which it makes publicly available on the CMS website. CMS checks that manufacturers that participate in the Discount Program are providing discounts on brand-name drugs associated with this list of labeler codes. CMS officials told us they monitor and resolve beneficiary complaints—expressions of dissatisfaction about the Medicare program, including concerns about providers and health plans— related to the Discount Program through their Part D Complaints Tracking Module. Beneficiaries submit the complaints, for example, by calling the 1-800-MEDICARE toll-free number or submitting an online Medicare complaint form. CMS officials said that, as of June 30, 2012, they have received and resolved 147 beneficiary complaints about the Discount Program, including complaints from beneficiaries who reported they reached the coverage gap and did not receive discounts, who received incorrect discounts, or who had concerns about how the discount was calculated. 
CMS may periodically audit drug manufacturers regarding information about the Discount Program that they are required to submit to the agency, including NDC expiration dates and labeler codes. Manufacturers rely on this information when they submit disputes of discounts from the quarterly invoices. CMS officials told us they have not conducted any of these audits as of July 2012. CMS also ensures that information that may identify beneficiaries is not disclosed in any capacity under the Discount Program, as stated in the Discount Program Agreement. In order to protect beneficiary information, CMS initially decided not to invoice manufacturers for low-volume claims—claims for a specific drug submitted by 10 or fewer beneficiaries at the same pharmacy—because they were concerned that certain information from these claims, such as the identity of the pharmacy, may be used to identify beneficiaries. After further evaluation of the policy, CMS issued guidance in January 2012 stating that the agency would invoice manufacturers for low-volume claims; CMS officials told us they had determined that beneficiary information could not be identified from such invoices. In addition, CMS has stated that the agency conducts other monitoring activities of the Discount Program, which include reporting on certain outcomes of the program and monitoring Medicare Part D drug prices. For example, CMS reported that over 3.7 million beneficiaries who reached the coverage gap received discounts, with an average of $613 in discounts per beneficiary in 2011. CMS officials told us they also monitor Medicare Part D brand-name drug prices annually. CMS will continue its process of monitoring drug prices, using data from 2011, which will take into account any effects on prices from the Discount Program and other factors. 
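The low-volume claim rule described above (claims for a specific drug submitted by 10 or fewer beneficiaries at the same pharmacy) amounts to a grouping filter over claims. A sketch under an assumed claim layout; this is an illustration of the rule's logic, not CMS's invoicing system:

```python
from collections import defaultdict

def low_volume_groups(claims, threshold=10):
    """Group claims by (drug, pharmacy) and return the groups with
    `threshold` or fewer distinct beneficiaries -- the claims CMS
    initially withheld from manufacturer invoices to protect
    beneficiary identity.

    Each claim is an assumed dict with "ndc", "pharmacy_id", and
    "beneficiary_id" keys.
    """
    beneficiaries = defaultdict(set)
    for claim in claims:
        key = (claim["ndc"], claim["pharmacy_id"])
        beneficiaries[key].add(claim["beneficiary_id"])
    return {key: ids for key, ids in beneficiaries.items()
            if len(ids) <= threshold}
```

Under CMS's January 2012 guidance these groups are no longer withheld, since the agency determined beneficiaries could not be identified from the invoices.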
CMS officials further explained that because many factors, including time, can affect changes in drug prices, the agency may not be able to separate out such effects on prices from the time the Discount Program was introduced. Plan sponsors, PBMs, and drug manufacturers we spoke with had different perspectives on aspects of the drug pricing and plan design effects of the Discount Program, which include drug prices, rebates, formularies, plan benefit design, and utilization management practices. Most plan sponsors and PBMs told us they believe the Discount Program may have been a factor in the rising prices of some brand-name drugs, while most manufacturers told us the Discount Program has not affected the prices of brand-name drugs they negotiate with sponsors and PBMs. The three PBMs we interviewed also told us they observed that some manufacturers decreased the amount of rebates for the brand-name drugs they offered, which they believe occurred as a result of the Discount Program. In comparison, most of the plan sponsors did not observe manufacturers decrease rebate amounts and most manufacturers reported no effects on their rebate negotiations as a result of the Discount Program. Most plan sponsors and PBMs also reported that the Discount Program did not affect their Part D plan formularies, plan benefit design, or utilization management practices. Six of the seven plan sponsors and two of the three PBMs we interviewed told us they believe the Discount Program may have been a contributing factor in the rising prices of brand-name drugs by some manufacturers. Some sponsors and one PBM told us they believe that some manufacturers raised prices for their brand-name drugs to recoup the costs of the discounts that they anticipated paying. Some of these sponsors and one PBM based their observations on reviews of drug pricing data; for example, one plan sponsor told us it reviewed PDE data. 
Two of these plan sponsors and one PBM also told us they observed such price increases occurring as early as 2010—when the Discount Program was announced—and continuing through 2012. For example, one plan sponsor and one PBM told us that they attributed the Discount Program as a factor in rising brand-name drug prices they observed from 2010 to 2011 based on analyses of their own drug pricing data. Six of the eight manufacturers we interviewed, in comparison, believe that the prices of their brand-name drugs negotiated with plan sponsors and PBMs have not been affected by the Discount Program. One of the two remaining manufacturers said that it considered the Discount Program as a factor when negotiating drug prices, but other factors, such as whether a given drug has competitors in the market, had more influence over negotiations. The other remaining manufacturer told us it was still evaluating the impact of the Discount Program and therefore could not determine whether it will affect or has affected brand-name drug prices. The three PBMs we interviewed told us they observed that some drug manufacturers decreased the amount of rebates they offered for brand-name drugs, which they believe occurred as a result of the Discount Program. One PBM observed that this was occurring among some manufacturers of specialty-tier-eligible drugs. Another PBM also told us it observed these effects beginning as early as 2010, prior to the implementation of the Discount Program in 2011. Four of the seven plan sponsors we interviewed told us they did not observe decreased rebates as a result of the Discount Program. Three of these four plan sponsors told us that, while they did not observe decreased rebates, they believe manufacturers may likely decrease the amount of rebates they offer in the future and, according to two plan sponsors, they expect the decreases to be a result of manufacturers trying to recoup the costs of the discounts manufacturers are paying for some drugs. 
The remaining one of these four plan sponsors told us it has not observed any changes to rebate amounts because it has worked with manufacturers to maintain the same rebate levels offered prior to the Discount Program. In comparison, two plan sponsors told us they did observe some manufacturers decrease the amount of rebates they offer, and one of these plan sponsors told us it believes this occurred as a result of the Discount Program. The remaining seventh plan sponsor we spoke with did not specifically address the Discount Program’s effect on decreased rebate amounts. Six of the eight manufacturers we interviewed told us that the Discount Program did not change their rebate negotiations with plan sponsors and PBMs. However, two manufacturers told us that the Discount Program has changed some aspects of their rebate negotiations. For example, one of these two manufacturers told us it has established limits with plan sponsors regarding the rebate amounts it will pay to plan sponsors as a result of the discounts it has to pay for some drugs. The other manufacturer also told us that it has taken the Discount Program’s effect into account when entering into rebate negotiations because paying for the discounts affects its profitability. Most plan sponsors and PBMs we interviewed reported that the Discount Program has not affected their Medicare Part D plan formularies, plan benefit designs, and drug utilization management practices. All seven plan sponsors and two of the three PBMs we interviewed told us that Part D plan formularies have not changed as a result of the Discount Program. In addition, most of these plan sponsors and PBMs told us that the placement of brand-name drugs on plan formularies, including specialty-tier eligible drugs, was not affected by the Discount Program. 
In comparison, one PBM told us that formulary placement changes have occurred more frequently as a result of some manufacturers decreasing the amount of rebates they offer for brand-name drugs. In particular, this PBM has observed fewer brand-name drugs included on plan formularies as well as fewer brand-name drugs placed on formularies in preferred positions, which result in lower beneficiary cost-sharing for those drugs. In addition to plan formularies, the seven plan sponsors we spoke with told us that the Discount Program has not affected the plan benefit design or drug utilization management practices of their Part D plans. For example, one of these plan sponsors told us that the Discount Program has not been a factor in any plan benefit design changes and that it bases its plan benefit design on factors such as the ability to compete for Medicare Part D beneficiaries. We found that prices for brand-name drugs used by beneficiaries in the coverage gap increased similarly to those used by beneficiaries who did not reach the gap, before and after the Discount Program was implemented in January 2011. From January 2007 to December 2010, prior to the implementation of the Discount Program, the median price (weighted by the utilization of each drug) for the basket of 77 brand-name drugs used by beneficiaries in the coverage gap increased 36.2 percent (see fig. 2). When measured across the same period, the median price for the basket of 78 brand-name drugs used by beneficiaries who did not reach the coverage gap also increased at a similar rate of 35.2 percent. During the first year with the Discount Program (from December 2010 through December 2011), the median prices for the two baskets increased equally at a rate of about 13 percent. 
The median prices for the two baskets of brand-name drugs also increased similarly on an annual basis, from 2007 to 2011, with the greatest increase in price for both baskets occurring the first year with the Discount Program from December 2010 through December 2011 (see fig. 3). For example, from December 2009 through December 2010 the median price for the basket of drugs used by beneficiaries in the coverage gap and for the basket of drugs used by beneficiaries who did not reach the coverage gap each increased 10.2 percent. The greatest annual percent increase for the two baskets of brand-name drugs occurred from December 2010 through December 2011, during which time the median price increased 13.1 percent for the basket of brand-name drugs used by beneficiaries in the coverage gap and 13.2 percent for the basket of brand-name drugs used by beneficiaries who did not reach the coverage gap. In addition, the average annual rate of increase for the basket of brand-name drugs used by beneficiaries in the coverage gap was 9.2 percent over the entire period (January 2007 to December 2011), compared with a 9.0 percent increase for the other basket of drugs. We continued to find similar price increases for each basket of unique brand-name drugs during the first year with the Discount Program after removing the 50 drugs that overlapped both baskets. During the first year of the Discount Program, the median prices for the two baskets of unique drugs increased by over 12 percent: 12.3 percent for the 27 unique drugs used by beneficiaries in the coverage gap and 12.9 percent for the 28 unique drugs used by beneficiaries who did not reach the coverage gap. While many factors affect drug prices, such as the availability of competing drugs to treat the same condition and manufacturing and marketing costs, increasing brand-name drug prices can increase out-of-pocket spending for some beneficiaries in the coverage gap as well as increase overall Part D spending. 
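The price-trend figures above rest on two basic computations: a utilization-weighted median price across a basket of drugs, and the percent change in that median between two dates. A simplified sketch of both (the report's actual weighting methodology is more involved; the function names are ours):

```python
def weighted_median(prices, weights):
    """Median of `prices` where each price counts `weights[i]` times
    (e.g., a drug's utilization within the basket)."""
    pairs = sorted(zip(prices, weights))
    total = sum(weights)
    running = 0.0
    for price, weight in pairs:
        running += weight
        if running >= total / 2:
            return price
    return pairs[-1][0]

def pct_change(old, new):
    """Percent change from `old` to `new`."""
    return 100.0 * (new - old) / old
```

Comparing the weighted median of a basket in December 2010 with the same basket's weighted median in December 2011 would yield the kind of annual percent increase reported above (e.g., about 13 percent).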
Thus, continued monitoring of brand-name drug prices and manufacturer rebates will be important as the Discount Program matures. HHS reviewed a draft of this report and in its written comments noted that our finding on the perspectives of stakeholders (Medicare Part D plan sponsors, drug manufacturers, and PBMs) on the effects of the Discount Program is consistent with HHS’s expectations and experience. HHS commented that our finding on price changes before and after implementation of the Discount Program for brand-name drugs used by Medicare Part D beneficiaries who did and did not reach the coverage gap is also consistent with HHS’s expectations and experience. HHS further noted that our finding on brand-name drug price changes is similar to the results of CMS’s own analysis of drug price data, which used a different methodology. HHS commented that CMS will continue to monitor the Discount Program to ensure that discounts on brand-name drugs are applied accurately and in a timely manner for Medicare Part D beneficiaries. In addition, HHS noted that CMS will continue to monitor Part D drug prices as well as the impact of drug prices on the Medicare Part D program. HHS’s comments are printed in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact John E. Dicken at (202) 512-7114 or DickenJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V.
To describe how prices changed before and after implementation of the Medicare Coverage Gap Discount Program (Discount Program) for brand-name drugs, we compared the trend of Medicare Part D prices from January 2007 to December 2011 for a basket of brand-name drugs used by beneficiaries in the coverage gap with a basket of brand-name drugs used by beneficiaries who did not reach the gap in 2011. We compared price trends for these two baskets because brand-name drugs used by beneficiaries in the coverage gap in 2011—the year the Discount Program began—may be more susceptible to price increases, since manufacturers must provide a 50 percent discount for these drugs, unlike drugs used by beneficiaries who do not reach the gap, which are not subject to the discount. We limited our analyses of Medicare Part D prices to those brand-name drugs that had high expenditures—based on price and utilization—used by beneficiaries who did not receive a low-income subsidy (LIS) and who were enrolled in stand-alone prescription drug plans (PDP) and Medicare Advantage prescription drug plans (MA-PDP). We created two fixed baskets of high-expenditure brand-name drugs using prescription drug event (PDE) data obtained from the Centers for Medicare & Medicaid Services (CMS) to analyze the trend of Medicare Part D prices. We began by selecting (1) the top 100 brand-name drugs, based on total expenditures, used by non-LIS beneficiaries in PDPs and MA-PDPs in the coverage gap in 2011 and (2) the top 100 brand-name drugs, based on total expenditures, used by non-LIS beneficiaries in PDPs and MA-PDPs who did not reach the coverage gap in 2011. We identified the top 100 brand-name drugs for each basket by using the nine-digit national drug code (NDC-9). We determined the brand-name status of each NDC-9 by using FDA’s NDC directory, which CMS uses to identify whether a drug is a brand-name drug and therefore eligible for a 50 percent discount under the Discount Program.
We determined total expenditures for each NDC-9 by aggregating the amount paid at the point-of-sale for all PDE records corresponding to a given NDC-9. The amount paid at the point-of-sale included the ingredient cost (the drug’s price negotiated by the beneficiary’s Part D plan), sales tax, dispensing fee, and vaccination fee, if applicable. After identifying the top 100 high-expenditure brand-name drugs by NDC-9 in each basket, we excluded those NDC-9s that did not have at least 25 PDE records in each month of our analysis, from January 2007 to December 2011, for data reliability purposes. After completing these data steps, we had two fixed baskets of drugs for which we could follow monthly prices throughout the period of our analysis. The fixed baskets included 77 brand-name drugs used by non-LIS beneficiaries in the coverage gap in 2011 and 78 brand-name drugs used by non-LIS beneficiaries who did not reach the coverage gap in 2011 (see app. II for a list of the brand-name drugs included in each basket). To analyze Medicare Part D price trends for the two baskets of brand-name drugs, we created utilization-weighted price indexes using PDE data to track the monthly change in the median ingredient cost per unit for all drugs in each basket from January 2007 to December 2011. We used the median because it is not sensitive to the presence of extreme measurement errors. The ingredient cost reflects discounts negotiated with pharmacies but not certain price concessions, such as drug manufacturer rebates. We tracked the ingredient cost for our analysis because it is subject to the 50 percent discount by manufacturers for brand-name drugs under the Discount Program and is affected by price changes made by the manufacturer. We tracked the ingredient cost per unit to account for varying quantities dispensed for a drug at the point-of-sale.
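The basket-construction rules just described (rank NDC-9s by total point-of-sale expenditures, keep the top 100, then drop any NDC-9 lacking at least 25 PDE records in every month of the study period) can be sketched as follows. This is a simplified illustration with hypothetical record fields, not the actual PDE processing code.

```python
from collections import defaultdict

def build_basket(pde_records, months, top_n=100, min_monthly_records=25):
    """Select a fixed basket of drugs from PDE-like records.

    pde_records: iterable of dicts with keys 'ndc9', 'month', and 'paid'
    (the amount paid at the point-of-sale). Returns the NDC-9s that are
    in the top_n by total expenditures and have at least
    min_monthly_records records in every month of the study period.
    """
    spend = defaultdict(float)                       # total expenditures per NDC-9
    counts = defaultdict(lambda: defaultdict(int))   # record counts per NDC-9 per month
    for rec in pde_records:
        spend[rec["ndc9"]] += rec["paid"]
        counts[rec["ndc9"]][rec["month"]] += 1
    # Rank by total point-of-sale expenditures and take the top N.
    top = sorted(spend, key=spend.get, reverse=True)[:top_n]
    # Keep only drugs with enough claims in every month, for data reliability.
    return [ndc for ndc in top
            if all(counts[ndc][m] >= min_monthly_records for m in months)]
```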
We performed several data edits involving the quantity dispensed and ingredient cost per unit variables to further improve data reliability. We then trimmed the data to remove outliers. To weight the baskets, we multiplied the monthly median ingredient cost per unit by each drug’s relative utilization, calculated as the ratio of the drug’s quantity dispensed to the total quantity dispensed for all drugs in the basket. To create the monthly price indexes for each basket, we summed the resulting weighted median ingredient cost per unit of all the drugs in the basket and divided the resulting value by the entire basket’s weighted median ingredient cost per unit as of January 2007. Each price index began with a value of 100 as of January 2007. To further analyze Medicare Part D price trends, we calculated monthly changes in the median ingredient cost per unit for subsets of the two baskets of drugs. First, because a significant number of drugs—50—overlapped both baskets, we compared price trends for the brand-name drugs that did not overlap: 27 were included only in the basket of drugs used by non-LIS beneficiaries in the coverage gap in 2011, and 28 were included only in the basket of drugs used by non-LIS beneficiaries who did not reach the coverage gap in 2011. Second, within the basket of drugs used by beneficiaries in the coverage gap in 2011, we compared specialty-tier-eligible drugs, which are high-cost drugs, to non-specialty-tier-eligible drugs to examine whether specialty-tier-eligible drugs had different price changes than non-specialty-tier-eligible drugs. We considered specialty-tier-eligible drugs to be those drugs with a median cost that exceeded $600 for a 30-day supply in 2011 (see app. II for a list of the brand-name drugs considered specialty-tier-eligible). While 50 brand-name drugs overlapped the two baskets, each of these drugs had a different weight depending on its relative utilization in each basket.
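The index construction can be expressed compactly. In this sketch (illustrative only; the report does not publish its analysis code), monthly_medians holds each drug's median ingredient cost per unit for one month, and weights holds each drug's share of the basket's total quantity dispensed:

```python
def basket_index(monthly_medians, weights, base_value):
    """Utilization-weighted price index for one month.

    monthly_medians: {drug: median ingredient cost per unit this month}
    weights: {drug: quantity dispensed / total basket quantity dispensed}
    base_value: the basket's weighted median cost in the base month
    Returns an index that equals 100 in the base month.
    """
    # Weight each drug's median cost by its relative utilization,
    # sum across the basket, and scale by the base-month value.
    weighted_cost = sum(weights[d] * cost for d, cost in monthly_medians.items())
    return 100.0 * weighted_cost / base_value
```

By construction the index is 100 in January 2007, so a later value of, say, 113 corresponds to a 13 percent price increase for the basket since the base month.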
Our analyses of the trends in Medicare Part D prices are limited because we did not account for the multiple factors that can affect the prices of brand-name drugs over time. As a result, any changes we observed in prices may not be directly related to the implementation of the Discount Program. In addition, our analyses were limited to those brand-name drugs that had the highest total expenditures in 2011. We reviewed all data from CMS for reasonableness and consistency, including screening for outliers. We also reviewed documentation and talked to CMS officials about steps they take to ensure data reliability. We determined that these data were sufficiently reliable for our purposes. We analyzed the trend in Medicare Part D prices from January 2007 through December 2011 for two baskets of brand-name drugs used by beneficiaries who did not receive a low-income subsidy (LIS). Table 3 lists the drugs we analyzed for both baskets: the 27 high-expenditure brand-name drugs unique to the basket of drugs used by non-LIS beneficiaries in the coverage gap in 2011, the 28 high-expenditure brand-name drugs unique to the basket of drugs used by non-LIS beneficiaries who did not reach the coverage gap in 2011, and the 50 high-expenditure brand-name drugs that overlapped both drug baskets. Under the Medicare Coverage Gap Discount Program (Discount Program), the plan-negotiated drug price is the price used in the calculation of the 50 percent discount for brand-name drugs. The 50 percent discount is based on the sum of the plan-negotiated drug price and the drug’s sales tax. This sum is called the discounted amount, and the beneficiary and manufacturer each pay 50 percent of the discounted amount. 
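The cost split just described can be made concrete with a short sketch. The dollar amounts and the function name are hypothetical; actual fees vary by plan and pharmacy.

```python
def gap_purchase(negotiated_price: float, sales_tax: float,
                 dispensing_fee: float = 0.0, vaccine_fee: float = 0.0):
    """Split a brand-name drug purchase in the coverage gap under the
    2011 Discount Program rules described above."""
    # The discounted amount is the plan-negotiated price plus sales tax;
    # the beneficiary and the manufacturer each pay half of it.
    discounted_amount = negotiated_price + sales_tax
    manufacturer_pays = 0.5 * discounted_amount
    # The beneficiary also pays any dispensing and vaccination fees.
    beneficiary_pays = 0.5 * discounted_amount + dispensing_fee + vaccine_fee
    return beneficiary_pays, manufacturer_pays, discounted_amount

# A $100 negotiated price with $6 sales tax and a $2 dispensing fee:
# the manufacturer pays $53 and the beneficiary pays $55 at the counter.
```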
The beneficiary is also responsible for the drug’s dispensing fee and vaccination fee, if applicable. The entire discounted amount, which includes the amounts the beneficiary and manufacturer pay, is counted as out-of-pocket spending for the beneficiary, that is, towards the amount the beneficiary needs to move out of the coverage gap and into the catastrophic coverage period. Figure 4 provides a hypothetical example of how the 50 percent discount would be calculated at the point-of-sale for the purchase of a brand-name drug by a beneficiary who does not receive a low-income subsidy (non-LIS), is enrolled in a defined standard benefit plan in 2011, and has reached the coverage gap. In addition to the contact named above, individuals making key contributions to this report include Rashmi Agarwal, Assistant Director; Zhi Boon; Robert Copeland; Pam Dooley; Seta Hovagimian; and Laurie Pachter. Drug Pricing: Research on Savings from Generic Drug Use. GAO-12-371R. Washington, D.C.: January 31, 2012. Prescription Drugs: Trends in Usual and Customary Prices for Commonly Used Drugs. GAO-11-306R. Washington, D.C.: February 10, 2011. Medicare Part D: Spending, Beneficiary Out-of-Pocket Costs, and Efforts to Obtain Price Concessions for Certain High-Cost Drugs. GAO-10-529T. Washington, D.C.: March 17, 2010. Medicare Part D: Spending, Beneficiary Cost Sharing, and Cost-Containment Efforts for High-Cost Drugs Eligible for a Specialty Tier. GAO-10-242. Washington, D.C.: January 29, 2010. Brand-name Prescription Drug Pricing: Lack of Therapeutically Equivalent Drugs and Limited Competition May Contribute to Extraordinary Price Increases. GAO-10-201. Washington, D.C.: December 22, 2009. Medicare Part D Prescription Drug Coverage: Federal Oversight of Reported Price Concessions Data. GAO-08-1074R. Washington, D.C.: September 30, 2008. Prescription Drugs: Trends in Usual and Customary Prices for Drugs Frequently Used by Medicare and Non-Medicare Health Insurance Enrollees. GAO-07-1201R.
Washington, D.C.: September 7, 2007. Prescription Drugs: Price Trends for Frequently Used Brand and Generic Drugs from 2000 through 2004. GAO-05-779. Washington, D.C.: August 15, 2005.
The Patient Protection and Affordable Care Act of 2010 established the Discount Program to help Medicare Part D beneficiaries with their prescription drug costs while in the coverage gap, the period between the initial and catastrophic coverage periods (in both of which Medicare helps pay for drug costs). Until the Discount Program began in 2011, beneficiaries in the coverage gap paid 100 percent of drug costs. The Discount Program required manufacturers to provide a 50 percent discount on the price of brand-name drugs for beneficiaries in the gap. GAO was asked to describe (1) CMS's oversight of the Discount Program; (2) perspectives of plan sponsors, manufacturers, and PBMs on effects of the Discount Program; and (3) how prices for brand-name drugs used by beneficiaries in the coverage gap and by those who did not reach the gap changed before and after the start of the Discount Program. To describe CMS's oversight, GAO reviewed CMS documents and interviewed CMS officials. To describe perspectives on the effects of the Discount Program, GAO interviewed the 7 largest Part D plan sponsors based on enrollment data, 8 of 10 manufacturers of brand-name drugs with the highest expenditures in the gap, and 3 PBMs that contracted with the sponsors GAO interviewed. To describe price changes, GAO used CMS Part D data from 2007 to 2011 to track prices for high-expenditure brand-name drugs used by those in and those who did not reach the gap. GAO compared prices for the two baskets because drugs used by those in the gap may be more susceptible to price increases since manufacturers must provide the discount for these drugs. As part of Medicare's Part D Coverage Gap Discount Program (Discount Program), the Centers for Medicare & Medicaid Services (CMS), located within the Department of Health and Human Services (HHS), oversees the provision of discounts by plan sponsors to eligible beneficiaries when they purchase brand-name drugs and monitors that discounts are paid for by drug manufacturers.
CMS checks prescription drug data to verify that sponsors provide accurate discounts at the point-of-sale to eligible beneficiaries in the coverage gap. These checks include verifying whether a beneficiary has reached the coverage gap and that the plan sponsor has calculated the discount amount correctly. CMS also tracks that manufacturers pay plan sponsors for the discounts sponsors have provided to beneficiaries and has implemented a dispute resolution process for manufacturers disputing discount payment amounts. CMS also performs other activities such as monitoring beneficiary complaints related to the program. The plan sponsors, pharmacy benefit managers (PBM) that negotiate on behalf of plan sponsors, and drug manufacturers GAO interviewed had different perspectives on aspects of the drug pricing and plan design effects of the Discount Program. Most sponsors and PBMs believed the Discount Program may have been a contributing factor in the rising prices of some brand-name drugs by some manufacturers. However, most manufacturers did not believe the Discount Program affected drug prices they negotiated with sponsors and PBMs. The PBMs GAO interviewed also said they observed that some manufacturers decreased the amount of rebates offered for brand-name drugs, which the PBMs believe occurred as a result of the Discount Program. In comparison, most of the plan sponsors did not observe manufacturers decreasing rebate amounts, and most manufacturers reported no effects on their rebate negotiations as a result of the Discount Program. Most sponsors and PBMs told GAO that the Discount Program did not affect Part D plan formularies, plan benefit designs, or utilization management practices. GAO found that the prices for high-expenditure brand-name drugs used by beneficiaries in the coverage gap and by those who did not reach the gap in 2011 increased at a similar rate before and after the Discount Program was implemented in January 2011.
Specifically, from January 2007 to December 2010, before the Discount Program began, the median price for the basket of 77 brand-name drugs (weighted by the utilization of each drug) used by beneficiaries in the coverage gap increased 36.2 percent. During the same period, the median price for the basket of 78 brand-name drugs used by beneficiaries who did not reach the coverage gap increased 35.2 percent. From December 2010 through December 2011, the first year with the Discount Program, the median price for the two baskets increased equally by about 13 percent, the greatest increase in median price for both baskets compared to earlier individual years. HHS reviewed a draft of this report and in its written comments noted that GAO's findings on stakeholder perspectives and changes in brand-name drug prices were consistent with its experience and CMS's drug price analysis. HHS stated that CMS will continue to monitor the Discount Program and Part D drug prices.
Medicare is the national health insurance program for those aged 65 and older and certain disabled individuals. In 1998, Medicare insured approximately 39 million people. All beneficiaries can receive health care through Medicare’s traditional fee-for-service arrangement, and many beneficiaries live in areas where they also have the option of receiving their health care through a managed care plan. Of the almost 7 million Medicare beneficiaries enrolled in managed care as of March 1999, nearly all were enrolled in plans whose MCOs receive a fixed monthly fee from Medicare for each beneficiary they serve. Total Medicare spending is expected to reach about $216 billion in fiscal year 1999, with managed care’s portion reaching approximately $37 billion. The Balanced Budget Act of 1997 (BBA) established the Medicare+Choice program as a replacement for Medicare’s previous managed care program. Medicare+Choice was intended to expand beneficiaries’ health plan options by permitting new types of plans, such as preferred provider organizations and provider-sponsored organizations, to participate in Medicare. BBA also established an annual, coordinated enrollment period to begin in 1999 during which beneficiaries may enroll or change enrollment in a Medicare+Choice plan. Previously, MCOs were required to have at least one 30-day period each year when they accepted new members, but most MCOs accepted new members throughout the entire year. Also, before BBA, Medicare beneficiaries could join or leave a plan on a monthly basis. Beginning in January 2002, Medicare beneficiaries will no longer be able to enroll and disenroll on a monthly basis. If they experience problems with a plan, identify a better enrollment option, or simply have second thoughts, beneficiaries will have a limited time each year to change the election they made during the coordinated enrollment period. Afterwards, they will be “locked into” their health plan decision for the remainder of the year. 
Each plan’s benefit package is defined through a contracting process that establishes the minimum benefits a plan must offer and the maximum fees it may charge during a calendar year. After a benefit package is approved by HCFA, a plan may not reduce benefits or increase fees until the next contract cycle. A benefit information form (BIF), which is included in an MCO’s contract as an exhibit, describes in detail the services, copayments, and monthly premiums associated with each plan. HCFA’s central and regional offices are involved in reviewing plans’ marketing materials, which include member literature. The central office negotiates contracts and establishes national policy regarding marketing material review. HCFA’s regional offices review marketing materials when submitted throughout the year and require MCOs to change the materials when they omit required information or are inaccurate, misleading, or unclear. While some regional offices may review materials that certain organizations distribute nationwide, generally each regional office is responsible for reviewing the materials to be distributed within its geographic jurisdiction. To verify the accuracy of benefit information, regional staff are instructed to check plan materials against the BIF. HCFA staff also verify that MCOs have included certain information in their materials, such as explanations of provider restrictions and beneficiary appeal rights. HCFA provides guidance for both developing and reviewing marketing materials through its contract manual, marketing guidelines, and operational policy letters. Despite HCFA’s authority to do so, the agency does not require MCOs to use standard formats or terminology in their marketing materials. According to HCFA regulations, if HCFA staff do not disapprove submitted materials within 45 days, the materials are deemed approved, and MCOs may distribute the materials to beneficiaries.
Review procedures established by several regional offices allow “contingent approval”; that is, the materials are approved on the condition that the MCOs make specific corrections. When contingent approval is given, procedures in three regions call for HCFA staff to verify that the MCOs have made the required corrections before the materials are published and distributed to beneficiaries. (See fig. 1.) Historically, HCFA has done little to address beneficiaries’ need for comparable and unbiased information about Medicare managed care plans. In 1996, we reported that beneficiaries received little or no comparable information on Medicare health maintenance organizations and that the lack of information standards made it difficult for beneficiaries to compare plans’ member literature. At that time, we recommended that HCFA produce plan comparison charts and require plans to use standard formats and terminology in key aspects of their marketing materials. BBA mandated that HCFA undertake a number of activities to provide Medicare beneficiaries with information about their health plan options. Beginning in November 1998, HCFA was required to provide an annual national educational and publicity campaign to inform beneficiaries about the availability of Medicare+Choice plans and the enrollment process. Also, each fall starting in 1999, HCFA must distribute to beneficiaries an array of general information about the traditional Medicare program, supplemental insurance, appeal and other rights, the process for enrolling in a Medicare+Choice plan, and the potential for Medicare+Choice contract termination. At the same time, HCFA must provide each Medicare beneficiary with a list of available Medicare+Choice plans and a comparison of plan options. All of these activities are designed to coincide with and support the coordinated open enrollment period slated to occur each November starting in 1999. 
HCFA’s goal is to make beneficiaries aware of their health plan options and to provide some summary information to help beneficiaries compare those options. According to HCFA officials, in 1999 each beneficiary will receive a Medicare handbook that contains some comparable information about available health plans. Beneficiaries who want more information may call HCFA’s toll-free telephone number (1-800-MEDICARE) or log onto the Internet Web site (www.medicare.gov). All of these resources—the Medicare handbook, toll-free telephone number, and Web site—are designed to help beneficiaries identify enrollment options and compare selected aspects of benefits. To obtain detailed information about specific plans, however, beneficiaries must continue to rely on MCOs’ sales agents and member materials. (See fig. 2.) Our investigation of 16 MCOs uncovered flaws in their plans’ member literature, beneficiaries’ only source of detailed benefit information. Much of the MCOs’ plan literature contained errors or omissions about mammography and prescription drug benefits, ranging from minor oversights to major discrepancies. While we found no errors about ambulance services, some MCOs’ member literature omitted information about the benefit. Moreover, beneficiaries frequently did not receive important information until after enrollment. Even then, beneficiaries in some plans received member literature that was incomplete and did not fully disclose plan benefits, exclusions, and fees. The lack of full disclosure in member literature leaves the beneficiary vulnerable to unexpected service denials and additional out-of-pocket fees. Making comparisons among health plans’ benefits remains challenging because of the use of nonstandard formats and terminology. In contrast, FEHBP participants received plan brochures that contained relatively complete benefit descriptions presented in a standard format.
We found significant errors and omissions in the plans’ member literature that MCOs distributed to beneficiaries. For example, effective January 1998, HCFA required organizations to cover annual screening mammograms and to permit beneficiaries to obtain this service without a physician’s referral. Also, MCOs were required to notify beneficiaries of this new Medicare benefit. Materials from five MCOs, however, explicitly stated that beneficiaries must obtain physician referrals to obtain screening mammograms. (See fig. 3 for three examples.) Member literature from five other organizations failed to inform beneficiaries of their right to self-refer for this service. Much of the MCOs’ member literature provided incorrect or inconsistent information about prescription drug coverage. For example, the member literature for a large, experienced Medicare MCO specified an annual dollar limit for prescription drugs that was lower than the amount required by the organization’s Medicare contract. The contract required the provision of unlimited generic drugs and coverage of at least $1,200 for brand-name drugs. This MCO’s materials, which varied by county, understated the brand-name drug coverage, listing annual dollar limits as low as $600. When we contacted the MCO officials, they confirmed that they were providing the lower benefit coverage. On the basis of the MCO’s enrollment for 1998, we estimated that about 130,000 members could have been denied part of the benefit that Medicare paid for and to which they were entitled under the MCO’s contract. Another MCO provided conflicting information about its prescription drug benefit. In one document, the MCO alternately described its prescription drug benefit as having a $200 monthly limit and a $300 monthly limit. (The correct limit was $300.) In another case, an MCO used the same member literature for four separate plans, emphasizing that all members were entitled to prescription drug benefits. 
Actually, however, only two of the four plans offered a prescription drug benefit. The member literature we reviewed did not contain errors regarding ambulance services, but the documents often omitted important information about the benefit. One MCO did not include any reference to the benefit in its preenrollment member literature. Three other MCOs stated that ambulance services were covered “per Medicare regulations” but did not define Medicare’s coverage. Most of the remaining MCOs provided general descriptions of their ambulance coverage but did not give details of the extent of the coverage, such as whether the MCOs would pay for out-of-area ambulance service in an emergency. Officials from several MCOs told us that their organizations typically issue a member policy booklet—a document that discloses the details of a plan’s benefit coverage, benefit restrictions, and beneficiary rights—after a beneficiary enrolls. Moreover, MCOs often provided enrollees with outdated member policy documents. For example, one MCO failed to provide enrollees with a current member policy document until August 1998—8 months after the start of the new benefits year. Distributing outdated information can be misleading. HCFA allows MCOs to use outdated plan member materials as long as the organizations attach an addendum indicating any changes to the benefit package. HCFA officials believe that this policy is reasonable because beneficiaries can determine a plan’s coverage by comparing the changes cited in the addendum with the prior year’s literature. However, some MCOs distributed outdated literature without the required addendum. When MCOs did include the addendum, the document did not always clearly indicate that its information superseded the information contained in other documents. In addition, some MCOs did not provide dates on their literature, which obscured the fact that the literature was outdated. 
Adequate preenrollment benefit information will become even more crucial as BBA’s annual enrollment provisions take effect in 2002 and Medicare beneficiaries are no longer able to disenroll on a monthly basis. To help beneficiaries make informed choices, BBA requires HCFA to provide beneficiaries with summary plan information before the annual November enrollment period. Furthermore, new regulations now require MCOs to issue letters by mid-October each year describing benefit changes that will be effective January 1 of the following year. MCOs must send these annual notification letters to all enrollees, and to any prospective enrollees upon request. However, HCFA has not required MCOs to provide more complete member literature prior to enrollment. As a result, beneficiaries still might not have the information they need to make sound enrollment choices. Additionally, beneficiaries enrolling in plans before 2002 may be unaware that their plans may be terminating services shortly after the beneficiaries have enrolled. A plan must notify its members at least 60 to 90 days before it ends services. However, there is no requirement that a terminating plan stop advertising and enrolling new members, with the result that in 1998, some beneficiaries unknowingly joined plans that soon exited the Medicare program. For example, one MCO notified its members in May 1998 of its intent to end services in several Ohio counties. The MCO continued to advertise and enroll new beneficiaries without informing them that plan services would end on December 31, 1998. After inquiries from beneficiaries, the MCO ceased marketing activities in July. Although these marketing activities angered many beneficiaries, the MCO was operating within HCFA’s notification requirements. Some beneficiaries do not receive important information about plan benefits and restrictions even after they have enrolled in a plan.
Because HCFA’s instructions regarding benefit disclosure are vague, MCOs vary in the amount of information they provide to beneficiaries. Some organizations we reviewed provided relatively complete descriptions of plan coverage in a member policy booklet or similar document. However, other MCOs did not disclose important restrictions in any member literature. In fact, MCOs that adopt HCFA’s suggested disclosure language will send beneficiaries to an information dead end. In the guidelines it provides to MCOs, HCFA suggests that a plan’s “evidence of coverage,” a document frequently referred to as a member policy booklet, direct beneficiaries to the MCO’s Medicare contract to obtain full details on the benefit package. According to HCFA, a member policy booklet should state that it “constitutes only a summary of the . . . . The contract between HCFA and the [MCO] must be consulted to determine the exact terms and conditions of coverage.” HCFA officials responsible for Medicare contracts, however, said that if a beneficiary requested a contract, the agency would not provide it because of the proprietary information included in an MCO’s adjusted community rate proposal. Furthermore, an MCO is not required, according to HCFA officials, to provide beneficiaries with copies of its Medicare contract. MCO officials we spoke with differed on whether their organization would distribute copies of its contract to beneficiaries. By establishing an MCO’s Medicare contract—a document that is not usually available to beneficiaries—as the only document required to fully explain the plan’s benefit coverage, HCFA cannot ensure that beneficiaries are aware of the benefits to which they are entitled. Vague or incomplete benefit descriptions leave beneficiaries vulnerable to unexpected service denials. For example, disputes sometimes arise when beneficiaries are told they do not have the coverage they believed they would have when they enrolled.
An official from the Center for Health Dispute Resolution (CHDR), HCFA’s contractor that adjudicates managed care appeal cases, told us that CHDR uses the information in MCOs’ member literature to determine whether plan members are entitled to specific benefits that are not covered by Medicare fee-for-service. When an MCO’s literature is vague, CHDR allows the MCO to submit internal plan memorandums that clarify its benefit coverage. But beneficiaries generally do not receive these internal memorandums. Consequently, beneficiaries who must rely on incomplete member literature and sales agents’ verbal interpretations of this literature are likely to be unaware of important benefit limitations or restrictions.

Inconsistent formats and terminology made comparisons among plans’ benefit packages difficult. We generally had to read multiple documents to determine each plan’s benefit coverage for mammography, prescription drugs, and ambulance services. Answering a set of basic questions about three plans’ prescription drug benefits, for example, required a detailed review of twelve documents: two from plan A, five from plan B, and five from plan C (see fig. 4). It was not easy to know where to look for the information. For example, we found the answer to the question of whether a plan used a formulary in plan A’s summary of benefits, plan B’s Medicare prescription drug rider, and plan C’s contract amendment. Plan C’s materials required more careful review to answer the question because the membership contract indicated the plan did not provide drug coverage. However, an amendment—included in the member contract as a loose insert—indicated coverage for prescription drugs and the use of a formulary. As in previous studies, we found plans’ materials did not use comparable terms or formats.
For example, it was difficult to determine whether the three plans offered by one MCO covered nonemergency ambulance transportation, because each plan’s materials used different terms to describe the benefit. The lack of clear and uniform benefit information almost certainly impedes informed decision-making. HCFA officials in almost every region noted that a standard format for key member literature, along with clear and standard terminology, would help beneficiaries compare their health plan options.

FEHBP, administered by the Office of Personnel Management (OPM), is similar to the new Medicare+Choice program in that it serves a large and diverse population, allows participation of different types of health care organizations, and allows plans’ benefit packages to vary. Unlike HCFA, however, OPM requires FEHBP plan materials to follow standard formats and terms. OPM officials believe this requirement helps FEHBP members make informed decisions. FEHBP health care organizations produce a single, standard brochure for each plan that is the “contractual document” between the member and the organization. This brochure is a complete description of the plan’s benefits, limitations, and exclusions. The 1999 FEHBP brochure explicitly states the following objective: “This brochure is the official statement of benefits on which you can rely. A person enrolled in the Plan is entitled to the benefits stated in this brochure.” OPM officials said that the brochures must describe what each plan’s coverage includes, as well as what it excludes, so that there is less chance for misunderstanding. The benefit information must be listed in a prescribed format and language to facilitate members’ comparisons among plan options, but OPM’s standards allow variation in some language to accommodate differences in plans’ benefits and procedures. Each plan’s brochure must include a benefit summary presented in OPM’s prescribed format.
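The value of a prescribed format is that plan-to-plan comparison becomes mechanical rather than a hunt through disparate documents. The sketch below is a hypothetical illustration of that idea only; it is not an actual FEHBP or Medicare data format, and the field names and benefit values are invented.

```python
# Hypothetical sketch: a prescribed set of benefit fields lets a reader (or a
# program) compare plans field by field. All names and values are invented.

STANDARD_FIELDS = ["generic_drug_copay", "uses_formulary", "ambulance_copay"]

plan_a = {"generic_drug_copay": "$5", "uses_formulary": "Yes", "ambulance_copay": "$0"}
plan_b = {"generic_drug_copay": "$10", "uses_formulary": "No", "ambulance_copay": "$50"}

def compare(plans):
    """Return one row per standard field, with each plan's value side by side."""
    return [(field, [plan[field] for plan in plans]) for field in STANDARD_FIELDS]

for field, values in compare([plan_a, plan_b]):
    print(f"{field:20} {values}")
```

Without a common set of fields, which is the Medicare situation the report describes, the same comparison requires locating each fact in a different document for each plan.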
OPM officials update the mandatory brochure language every year to reflect changes in the FEHBP’s requirements and organizations’ requests for improvements to the language. Finally, OPM requires organizations to distribute plan brochures prior to the FEHBP annual open enrollment period so that prospective enrollees have complete information on which to base their decisions. OPM officials told us that all participating organizations publish brochures that adhere to OPM’s standards. Although OPM’s process for reviewing and approving member literature is generally similar to HCFA’s, it differs in important ways. The process begins when FEHBP organizations submit benefit coverage information to OPM in standard brochure format. OPM contract specialists then review the brochures to verify compliance with mandatory terminology and format requirements and to ensure that nonstandard information is presented appropriately, given the plans’ benefit packages and organizational structures. For example, organizations offering fee-for-service (indemnity) plans would use different language in describing plan procedures and restrictions than MCOs would. Organizations are then responsible for printing and distributing the brochures. To verify the accuracy of the final documents, OPM obtains 20 brochures from each plan’s first print run. According to an OPM official, if OPM contract reviewers identify errors, they can require organizations to attach an addendum, reprint the brochures, or pay a fine. The official said that any errors identified are generally minor and are corrected through an addendum attached to the brochures.

Although HCFA approved all the member literature we reviewed, weaknesses in three critical elements of the agency’s review process allowed errors to go uncorrected and important information to be omitted.
Our review showed that the structure of HCFA’s contracting documents has created problems in determining the accuracy of plan materials and has resulted in the omission of important benefit details by several organizations. Additionally, HCFA’s lack of consistent standards has contributed to inconsistent reviews and extra work and may have increased the chance of errors slipping through the review process undetected. Moreover, MCOs have failed to correct plan materials as required by HCFA staff. HCFA has begun to address some, but not all, of the issues we have identified.

MCOs’ Medicare contracts, which include the benefit information form (BIF), establish the foundation for HCFA’s review of marketing materials. HCFA reviewers are instructed to use the BIF to check that plan member literature accurately reflects the contracted benefits and member fees. Reviewers told us, however, that the BIFs often do not provide the required detail, and our work revealed that the BIFs did not provide consistent or complete benefit descriptions. For instance, the BIFs did not always specify whether a plan’s prescription drug benefit covered only specific drugs. Restricting coverage to a list of specific drugs, or a formulary, is a common element of plans’ benefit packages. Yet of our sample of 16 MCOs, 14 used formularies in one or more of the plans they offered, but only 8 disclosed this restriction in their BIFs. Because BIFs are often incomplete, reviewers sometimes rely on benefit summary sheets provided by MCOs to verify the accuracy of plan materials. This practice is contrary to HCFA policy, which requires an independent review of the MCOs’ plan literature. The reviewers who approved the erroneous materials cited earlier explained that some of the errors might have occurred because the MCOs’ summary sheets incorrectly described plans’ benefits.
This was the explanation given by the reviewer who approved the plan member literature advertising a $600 annual benefit limit for brand-name prescription drugs instead of the contracted $1,200 annual limit. The lack of detailed standards for plans’ member literature can result in misleading comparisons and put some MCOs at a competitive disadvantage. Without detailed standards, HCFA reviewers have wide discretion in approving or rejecting plan materials. The MCO representatives and HCFA officials we spoke with said that this latitude leads to inconsistent HCFA decisions. An MCO official told us that, while several plans in a market area required a copayment for ambulance services if a beneficiary was not admitted to a hospital, not all plans were required to disclose that fact. The HCFA reviewer responsible for one plan’s materials required the plan to disclose the fee, yet different HCFA staff in the same regional office who reviewed other plans’ materials did not require similar disclosure. These inconsistent review practices caused one plan’s benefits to appear less generous, even though several other plans had similar benefit restrictions. The lack of mandatory format and terminology standards for key member literature, such as benefit summary brochures and member policy booklets, increases the amount of time and effort needed to review and approve plans’ member literature. Moreover, unlike many government programs, Medicare does not require MCOs to use standard forms for such typical administrative functions as enrollment, disenrollment, and appeals. Instead, each organization creates its own forms. Consequently, HCFA staff spend a great deal of time reviewing disparate documents that could be routine forms. Several reviewers commented that the volume and complexity of MCOs’ member literature contributed to the likelihood that errors would pass through the review process undetected. 
Agency staff said that they could spend more time reviewing important member documents, such as member policy booklets, if HCFA required the use of standard forms for administrative functions. HCFA officials recognize that standardizing key documents and terms would facilitate their review of plans’ marketing materials and reduce the administrative burden on both HCFA and MCOs. Some agency officials expressed concern, however, that MCOs might resist efforts to standardize the way information is presented. In fact, many of the MCO officials we spoke with said they would welcome some standardization because it could save them time and money. One MCO official commented that MCOs may not be using HCFA’s current guidelines and suggested standards because they are voluntary and use language that is legalistic and confusing to beneficiaries. Several MCO officials stressed that any mandatory standards should be developed with industry input and with the advice of professional marketing specialists. MCOs are responsible for correcting errors in their marketing materials and distributing accurate information. Some HCFA reviewers told us that they do not approve marketing materials until the MCO has corrected all identified errors. Other HCFA reviewers told us that they give contingent approval—that is, they approve the material if the MCO agrees to make specific corrections. The MCO is required to send a copy of the print-ready document to HCFA so the reviewer can verify that the corrections were made. Reviewers often did not have copies of the print-ready or final documents in their files, however. Several reviewers admitted that it was difficult to get the final documents from MCOs and that they generally trust the organizations to publish materials as approved or to make the corrections outlined in approval letters. Moreover, reviewers noted that the contingent approval practice was adopted to expedite reviews when materials required only minor corrections. 
However, MCOs did not always correct the errors HCFA identified during the review process. We reviewed one plan’s summary of benefits that incorrectly commingled 1997 and 1998 benefit information. The document we received from the MCO official contained several handwritten notations correcting inaccurate benefit information. For example, the copayment for prescription drugs was listed as $5, but a handwritten note indicated that there was no copayment for generic drugs. The HCFA staff member responsible for approving the material showed us a working copy of the document on which she had indicated the need for numerous changes. The published document we observed, however, did not incorporate many of these corrections. The reviewer had been unaware that the published document contained errors because she had never received a print-ready copy from the MCO.

HCFA has undertaken several efforts to address some of the problems we identified during our review. The agency is developing a new plan benefit package (PBP) that it hopes will replace the BIF. The PBP’s new format improves upon the BIF by standardizing the information collected from each plan. The PBP includes detailed checklists that make it easier to obtain consistent benefit information from plans. However, the PBP is flexible enough to capture benefit features that do not fit neatly into a predetermined checklist. Using the PBP should also facilitate efforts to standardize member literature. HCFA intends to pilot test the PBP with a few MCOs this year for contract submissions effective in 2000. HCFA officials estimate that the PBP proposal will need to begin the Office of Management and Budget’s clearance process no later than August 1999 to achieve full implementation by 2000. Otherwise, full implementation could be delayed. Agency officials also recognize the importance of more uniform member literature and have articulated their intent to standardize key documents in future years.
As a first step, HCFA established a work group to develop a standard format and common language for all plans’ benefit summaries. HCFA hopes to establish the standard benefit summary by May 1999 and plans to use it in the fall 1999 benefit summary brochures. Achieving this goal will require HCFA’s work group to reach consensus on standards for clear and accurate information and to avoid imposing burdensome requirements on MCOs. HCFA’s long-term goals include establishing standards for other key documents, but the agency has not yet developed a coordinated strategy for its long-term efforts or decided whether such standards will be voluntary or mandatory.

Beneficiaries who enrolled or considered enrolling in the plans we reviewed were not well-served by plans’ efforts to produce member materials or HCFA’s review of them. The information that plans distributed was often confusing and hard to compare. Some plans distributed inaccurate or incomplete information or provided the information after beneficiaries had made their enrollment decisions, when it was less useful. These problems significantly limited beneficiaries’ ability to make informed decisions about their health plan options. Moreover, some beneficiaries may have been denied health care coverage to which they were entitled or required to pay unexpected out-of-pocket fees. In contrast, each FEHBP plan must provide prospective enrollees with a single, comprehensive brochure to facilitate comparisons and informed enrollment choices. Revisions to HCFA’s current review process and procedures could greatly improve the quality of plans’ member literature. For example, full implementation of HCFA’s new contract form for describing plans’ benefit coverage, the PBP, could help ensure that approved member literature is accurate and fully discloses important plan information.
Similarly, standard terminology and formats for key member literature would facilitate full disclosure and provide beneficiaries with comparable plan information. Moreover, new standards for the distribution of key member literature would enable beneficiaries to have the information they need when they need it. The required use of standard forms for routine administrative functions, such as member enrollment, could reduce HCFA’s workload and allow staff to spend more time reviewing important member literature. Finally, efforts to standardize review procedures would help ensure consistent application of the agency’s marketing material review policy.

In October 1996, we recommended that the Secretary of Health and Human Services direct the HCFA Administrator to (1) require standard formats and terminology for important aspects of MCOs’ marketing materials, including benefits descriptions, and (2) require that all literature distributed by organizations follow these standards. Although HCFA has taken initial steps toward this end, significant work remains. Therefore, we are both renewing our previous recommendations and recommending that the HCFA Administrator take the following additional actions to help Medicare beneficiaries make informed health care decisions and reduce the administrative burden on agency staff and MCOs:

- Require MCOs to produce one standard, FEHBP-like document for each plan that completely describes plan benefit coverage and limitations, and require MCOs to distribute this document during sales presentations and upon request.
- Fully implement HCFA’s new contract form for describing plans’ benefit coverage, the PBP, for the 2001 contract submissions to facilitate the collection of comparable benefit information and help ensure full disclosure of plans’ benefits.
- Develop standard forms for appeals and enrollment.
- Take steps to ensure consistent application of the agency’s marketing material review policy.
HCFA agreed with our findings that the agency’s review process and procedures need to be strengthened in order to ensure that beneficiaries receive accurate and useful information. The agency also concurred with our recommendations to improve the oversight of Medicare+Choice organizations’ marketing materials and to require the use of standardized formats and language in plans’ member materials. HCFA has steps under way that may help correct some of the problems we found. For example, the agency is developing a standardized summary of benefits document and intends to require Medicare+Choice organizations to use the document beginning in November 1999. While HCFA’s efforts may standardize important aspects of plans’ materials, such as information about appeal rights, these efforts stop short of requiring Medicare+Choice organizations to provide a single standard and comprehensive document that describes plan benefits and beneficiaries’ rights and responsibilities as plan members. HCFA believes that Medicare+Choice organizations should retain the flexibility to develop materials that differentiate their services from those provided by other Medicare+Choice organizations. We agree that MCOs should be able to differentiate their plans. However, requiring MCOs to provide an FEHBP-like brochure, in addition to other plan materials, would preserve the MCOs’ flexibility and provide Medicare beneficiaries with more complete and comparable information than they may currently receive. In fact, these standard brochures may encourage plans to compete on real differences in plan features. The full text of HCFA’s comments appears in appendix II. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 1 day after the date of this letter. At that time, we will send copies of this report to the Honorable Donna E. 
Shalala, Secretary of Health and Human Services; the Honorable Jacob Lew, Director, Office of Management and Budget; the Honorable Nancy-Ann Min DeParle, Administrator of the Health Care Financing Administration; and other interested parties. We will also make copies available to others upon request. This report was prepared under the direction of James Cosgrove, Assistant Director, by Marie James, Keith Steck, and George Duncan. If you or your staff have any questions about this report, please contact Mr. Cosgrove at (202) 512-7029 or me at (202) 512-7114. To do this work, we reviewed relevant policies and procedures at Health Care Financing Administration (HCFA) headquarters and regional offices. We also interviewed HCFA officials at headquarters and at all regional offices and spoke with representatives of industry and beneficiary groups. We visited four regional offices (Atlanta, Chicago, Philadelphia, and San Francisco) that cover high managed care penetration areas. In addition, we analyzed 1998 member literature and Medicare contracts for 16 of the 346 MCO contracts effective in 1998 (4 from each region we visited). Our sample included MCOs that varied in enrollment levels, structure, location, and years of Medicare experience. Because each MCO can offer more than one plan—for example, a standard option and a high option—we reviewed key materials for a total of 26 plans. We considered key member literature to include benefit summary brochures, member policy booklets, member handbooks, and plan letters related to benefit changes. The plans we reviewed used various combinations of these key documents to disclose the details of their benefit packages, including benefit restrictions and members’ rights. Finally, we compared the Federal Employees Health Benefits Program and Medicare’s standards for plans’ member literature. 
Our analysis focused on three benefits that vary in complexity: ambulance transportation, annual screening mammography, and outpatient prescription drugs. We selected ambulance transportation and screening mammography because these benefits must be provided by all Medicare plans and are relatively simple to describe and understand. We selected the outpatient prescription drug benefit because it is complex, not covered by traditional Medicare, and an important consideration in many beneficiaries’ enrollment decisions.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the Medicare+Choice program, focusing on: (1) the extent to which managed care organizations' (MCO) member literature provides beneficiaries with accurate and useful plan information; and (2) whether the Health Care Financing Administration's (HCFA) review process ensures that beneficiaries can rely on MCOs' member literature to make informed enrollment decisions. GAO noted that: (1) although HCFA had reviewed and approved the materials GAO examined, all 16 MCOs in GAO's sample from four HCFA regions had distributed materials containing inaccurate or incomplete benefit information; (2) almost half of the organizations distributed materials that incorrectly described benefit coverage and the need for provider referrals; (3) one MCO marketed (and provided) a prescription drug benefit that was substantially less generous than the plan had agreed to provide in its Medicare contract; (4) moreover, some MCOs did not furnish complete information on plan benefits and restrictions until after a beneficiary had enrolled; (5) other MCOs never provided full descriptions of plan benefits and restrictions; (6) although not fully disclosing benefit coverage may hamper beneficiaries' decisionmaking, neither practice violates HCFA policy; (7) as GAO has reported previously, it was difficult to compare available options using member literature because each MCO independently chose the format and terms it used to describe its plan's benefit package; (8) in contrast, the Federal Employees Health Benefits Program's (FEHBP) plans are required to provide prospective enrollees with a single comprehensive and comparable brochure to facilitate informed enrollment choices; (9) the errors GAO identified in MCOs' member literature went uncorrected because of weaknesses in three major elements of HCFA's review process; (10) limitations in the benefit information form (BIF), the contract form that HCFA reviewers use to determine
whether plan materials are accurate, led some reviewers to rely on the MCOs themselves to help verify the accuracy of plan materials; (11) additionally, HCFA's lack of required format, terminology, and content standards for member literature created opportunities for inconsistent review practices; (12) according to some regional office staff, the lack of standards also increased the amount of time needed to review materials, which contributed to the likelihood that errors could slip through undetected; (13) HCFA's failure to ensure that MCOs corrected errors identified during the review process caused some beneficiaries to receive inaccurate information; and (14) HCFA is working to revise the BIF and develop a standard summary of benefits for plans to use--steps that will likely improve the agency's ability to review member literature and other marketing materials--but other steps could be taken to improve the usefulness and accuracy of plan information.
The Missile Defense Agency plans to develop and field ballistic missile defense elements in increments called “blocks,” with each block providing increasing levels of capability over the previous block. In doing so, MDA’s charter states that MDA is responsible for assuring the supportability of the system and for developing plans with the services for BMDS elements early enough to support effective transition. DOD policy calls for new weapon systems to be managed using a life-cycle management approach, which should include all activities for acquiring, developing, producing, fielding, supporting, and disposing of a weapon system over its expected lifetime. In addition, each service is responsible for developing force structure to organize units to accomplish missions using the new system. Life-cycle management is to consider how the new system will be supported over its expected useful life because system engineering and design can have a significant effect on operations and support costs. Typically, support planning begins early in development as DOD begins exploring concepts for a new weapon system, and the support strategy is developed as the system is developed and is completed before fielding. However, the DOD Inspector General reported in 2006 that MDA had not planned fully for system sustainment and had not developed a complete integrated logistics support plan. The report concluded that without improving its processes, including support planning, MDA faces increased risk in successfully integrating elements into a single system that will meet U.S. requirements for ballistic missile defense. A life-cycle cost estimate includes all costs associated with a weapon system’s research and development, investment, military construction, operations and support, and disposal.
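Because operations and support typically dominate a system's total cost, a life-cycle estimate is highly sensitive to the O&S assumptions. A minimal worked example of the roll-up, using invented dollar figures purely for illustration:

```python
# Hypothetical life-cycle cost roll-up. All figures are invented for
# illustration and are in billions of dollars.
cost_categories = {
    "research_and_development": 8.0,
    "investment": 12.0,
    "military_construction": 1.0,
    "operations_and_support": 55.0,  # historically the largest share
    "disposal": 0.5,
}

life_cycle_cost = sum(cost_categories.values())
os_share = cost_categories["operations_and_support"] / life_cycle_cost

print(f"Total life-cycle cost: ${life_cycle_cost:.1f} billion")
print(f"Operations and support share: {os_share:.0%}")
```

With these assumed figures, operations and support accounts for roughly 72 percent of the total, consistent with the historical pattern of over 70 percent.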
Since operation and support costs historically are the largest portion (over 70 percent) of a weapon system’s costs over its life, these costs can significantly affect development of a life-cycle cost estimate and were the focus of our analysis of DOD’s cost estimates. DOD usually prepares an independent life-cycle cost estimate for major weapons systems, and these estimates typically form the basis for budget submissions. Using a life-cycle cost estimate helps support the budget process by providing estimates of the funding required to execute a program and can help assess whether resources are adequate to support the program. A key step in assuring the credibility of the estimate is acquiring an independent cost estimate by an entity separate from those connected to the program. Independent estimates tend to be higher and more accurate than estimates developed by a system’s program office since independent estimators may be more objective and less likely to use optimistic assumptions. In its 2007 transition plan, DOD recognized that as much time as possible—72 months or more—should be allotted to transition a BMDS element from MDA to a military service. The transition process may, for some elements, end at a point that DOD calls transfer—which is the reassignment of the MDA program office responsibilities to a service. According to MDA and Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics officials, not all BMDS elements will ultimately transfer; the decision to do so will be made on a case-by-case basis and the conditions under which this may happen have not yet been specifically identified for each element. MDA’s 2004 charter states that the agency shall develop plans in conjunction with the services for BMDS elements during transition. The transition plan covers some overarching issues and contains separate sections for each BMDS element.
For example, the transition plan includes some discussion for each element of various topics such as doctrine, organization, training, materiel, facilities, security, support strategies, and funding. DOD approved the first transition plan in September 2006 and approved the second plan in February 2008. DOD intends for its plan to guide the transition of roles and responsibilities from MDA to the services and serve as a basis for preparing budget submissions. Another purpose of DOD’s transition plan is to highlight critical issues that are of executive interest for the overall BMDS. For example, the latest plan included a critical issue of how BMDS capabilities will be managed over their life cycle and another critical issue is how operation and support costs will be shared between MDA and the services. Table 1 below shows the BMDS elements, when they were or are planned to be fielded, and which service has been designated as the lead for each element.

DOD uses the Future Years Defense Program (FYDP) to project resources over a multiyear time period and to make resource decisions in light of competing priorities. The FYDP is a report that resides in an automated database, which is updated and published to coincide with DOD’s annual budget submission to Congress. The current FYDP, submitted with DOD’s fiscal year 2008 budget, included data through fiscal year 2013. Likewise, the FYDP that will be submitted with DOD’s fiscal year 2010 budget will include data through fiscal year 2015.

This report is one in a series of reports we have issued on ballistic missile defense (see the list of Related GAO Products at the end of this report). Most recently, we found that DOD lacks a sound process for identifying and addressing the overall priorities of the combatant commands when developing ballistic missile defense capabilities. We reported in May 2006 that DOD had not established the criteria that must be met before BMDS can be declared operational.
Also, in April 2007, we found that DOD and congressional decision-makers could benefit from more complete information to assess basing, support, infrastructure, budget requests, and DOD spending plans when considering BMDS program and investment decisions. Also, we issue an annual assessment of DOD’s progress in developing BMDS, and in March 2008, we reported that the high level of investment MDA plans to make in technology development warrants some mechanism for reconciling the cost of these efforts with the program’s progress.

DOD has taken some initial steps to plan for BMDS support, but planning efforts to date are incomplete. In addition, long-term support planning has been complicated by difficulties in transitioning responsibility for providing support from MDA to the services. While DOD has drafted a proposal for BMDS management that DOD officials have stated is intended, in part, to address this issue, the draft proposal lacks important details. DOD’s long-term support planning for BMDS is incomplete because DOD has not developed and instituted a standard process that clearly specifies what support planning should be completed before elements are fielded, identifies which organization is responsible for life-cycle management, involves the services, and specifies how to transition support responsibilities from MDA to the services. Without such an established process that is enforced, DOD faces uncertainty over how BMDS elements will be supported over the long term and will be limited in its ability to improve support planning for future BMDS elements. While MDA has developed some guidance for developing support plans for BMDS elements and the overall system, based on Presidential and Secretary of Defense direction, MDA has focused on fielding a defensive ballistic missile capability as soon as practical.
In 2005, MDA issued an Integrated Program Policy and a companion Implementation Guide, which directed MDA’s BMDS element offices to develop support plans for each element, as well as develop an integrated support plan for the entire system, update these plans every 2 years, and complete an assessment of readiness of the integrated plan to support operations of the overall BMDS. Nevertheless, planning efforts are incomplete. According to officials, as of August 2008, three of the seven elements we examined, the forward-based radar, the sea-based radar, and the European radar, do not have support plans in place. Additionally, a fourth element, the Ground-based Midcourse Defense element, has a plan that was initially completed in 2005, but the plan is now out of date, does not reflect the current configuration of the element, and is therefore being updated. MDA has also issued a sustainment directive which states that support planning should be completed as elements move through various development phases. MDA’s directive specifies four phases with associated criteria that should be completed before exiting a phase to ensure, in part, effective long-term support of BMDS elements. Accordingly, initial support plans for a BMDS element should be completed before an element progresses from the programming and planning phase to the program execution phase and before the final deployment phase when an element is fielded. However, two of the elements we examined did not have support plans, even though they had progressed to a subsequent phase of development. One of these elements, the sea-based radar, has been categorized by MDA officials as being in the program execution phase, but officials stated that currently there is no support plan for this element and MDA has only recently begun to develop one.
In addition, MDA officials told us that portions of the forward-based radar’s development are described as being in the deployment phase since the element has been fielded, but as of August 2008, there was no support plan for the radar and officials told us a plan would be completed by the end of the year. MDA officials recognize that past efforts in support planning have been incomplete. In response, MDA is proposing to form a logistics directorate, but it is not clear what the roles and responsibilities of this group will be or how soon the group will be fully staffed. Incomplete support planning is not a new issue. In 2006, DOD’s Inspector General reported that MDA had not developed an overall, integrated, BMDS-wide support plan, but had developed a summary document containing only general support planning information for four elements. The DOD Inspector General’s report concluded that without improving support planning, MDA faced increased risk in successfully integrating elements into a single system that will meet U.S. requirements. In 2006, MDA revised the document to include information on a total of eight elements, but the document still did not contain more than high-level information on how each individual element would be supported and did not contain specific detail on how support would be managed across the integrated system. As of August 2008, MDA still had not developed an overall, integrated, BMDS-wide support plan. Without current support plans for every element and an integrated, system-wide support plan, MDA will be unable to conduct a support readiness assessment of the overall, integrated ballistic missile defense system as directed by its guidance. As a result, MDA cannot ensure that the integrated system has appropriate plans in place to support operations. DOD’s planning to support BMDS over the long term has not followed DOD’s key principles of weapon system life-cycle management.
Although BMDS is not required to follow traditional weapon system life-cycle management processes, MDA’s charter states that BMDS will be managed consistent with the principles of the traditional weapon system process and that the office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD (AT&L)) and MDA will determine which principles will be applied to the management of BMDS. However, USD (AT&L) and MDA have not determined and communicated to the services which parts of the usual life-cycle management processes apply to each element. Our prior work has shown that organizations should have defined guidance for planning and should communicate this guidance to stakeholders. While DOD’s key principles of weapon system life-cycle management state that support plans should be completed before a system is fielded, DOD has fielded BMDS elements before developing support plans. Of the elements we examined, three of the five that had been fielded as of August 2008 did not have support plans in place before fielding. MDA fielded the Ground-based Midcourse Defense element in 2004 for limited defensive operations, but a support plan was not developed until 2005 and, officials said, it is now out of date. Similarly, MDA fielded a forward-based radar in Japan in 2006, but as of August 2008, the element still did not have a support plan. Finally, as of August 2008, MDA has not completed a support plan for the sea-based radar, even though the element was fielded in 2007 and is available for emergency use. Figure 1 below shows, for selected elements, a comparison of when each element was fielded to when its support plan was, or is expected to be, completed. MDA’s support planning may not cover the elements’ expected useful life. Typically, weapon system developers are expected to develop support plans that provide detail for support that will be provided throughout a system’s life cycle.
MDA officials told us that, in general, BMDS elements have an expected useful life of 20 years. However, MDA’s sustainment directive only applies until support responsibilities for an element have transitioned from MDA to the lead service. In general, MDA has agreed to support BMDS elements via contractors through 2013. However, Army and Navy officials told us that in some cases, they may prefer to perform some support functions within their organizations and have begun some efforts to determine to what extent that should be done. For example, the Terminal High Altitude Area Defense (THAAD) element support plan assumes contractor-provided support, but Army officials told us that MDA and the Army are currently conducting an analysis of support options for THAAD, including contractor-provided support, service-provided support, or a mix of the two. Depending on the results of these analyses of support options, BMDS support planning for some elements may change, making it difficult for DOD to consolidate element support planning into an overall, integrated system support plan. DOD has experienced difficulties in long-term support planning because DOD has not developed and instituted a standard process that clearly specifies what support planning should be completed before elements are fielded, identifies which organization is responsible for life-cycle management, involves the services, and specifies how to transition support responsibilities from MDA to the services. Until DOD takes action to do so, DOD will be unable to ensure that individual elements will be sustained after 2013. Also, without such a standard process, DOD’s long-term support planning for BMDS has been faced with a number of challenges. The first challenge affecting long-term support planning is that MDA and the services have not agreed on which organization should be responsible for long-term life-cycle management responsibilities, including developing long-term support plans.
DOD policy and guidance state that the program manager is responsible for life-cycle management activities, including developing support plans, and is the single point of accountability for sustainment of a weapon system throughout its life. Additionally, our prior work has shown that establishing clear roles and responsibilities can improve outcomes by identifying who is accountable for various activities. However, in negotiating transition for some BMDS elements, MDA and the services disagree over which organization will be responsible for performing life-cycle management responsibilities, such as providing and planning for support over the long term. As a result, for five of the seven elements we examined, MDA and the services have been unable to reach agreement on who will be responsible for providing support and how these elements will be supported after 2013, even though MDA officials have stated that most elements are expected to have a useful life of 20 years. For example, MDA hopes to have the Army assume support responsibilities after 2013 for the Terminal High Altitude Area Defense element, the forward-based radar, and the Ground-based Midcourse Defense element. However, Army officials stated that they have not agreed to take over support of these elements at that time. Moreover, Navy officials stated that all life-cycle issues have to be considered to prevent the emergence of unplanned future costs, and that they intend to have the responsibilities for life-cycle support of the sea-based radar understood, apportioned between MDA and the Navy, and documented prior to the formal transfer of the element. Table 2 below shows, by element, whether there is agreement on who provides support after 2013 and on who should be responsible for life-cycle management, and the status of support planning.
Second, although DOD has designated a lead service to assume support responsibilities for most BMDS elements that have been or will be fielded by 2015, MDA and the services have not consistently worked together to develop plans to transition responsibility for long-term support. A DOD directive states that lead services, which will assume support responsibilities for BMDS, should work with MDA to develop transition and support plans. However, there was little or no service input in developing transition plans for three of the seven elements we examined—the ground-based element, the sea-based radar, and the European radar. The services have been involved in support planning for ballistic missile defense capabilities added to already existing, legacy systems, but not routinely involved in support planning for newer elements. For example, MDA officials told us the Air Force was involved in support planning for the Upgraded Early Warning Radar since a ballistic missile defense capability was added to an existing Air Force radar. Similarly, the Navy included support planning for the Aegis Ballistic Missile Defense element in its existing support for Aegis ships. For non-legacy elements, however, the services have either not been involved in support planning at all or have not been involved early enough to influence design decisions that may affect how an element is to be supported. For example, after the 2004 fielding of the ground-based element, the contractor developed a support plan for the element, but MDA and Army officials told us that since the plan was developed before the Army was named lead service, the plan had little to no Army input. Further, the Army stated that it is reluctant to assume responsibility for support contracts involving design decisions made without Army involvement.
Also, Army officials said that it would have been helpful if they had input into facilities and security design decisions for the forward-based radar that were made before they were named lead service. Third, MDA and the services use inconsistent methods to negotiate the transition of support responsibilities from MDA to the services, resulting in confusion over which method is authoritative and binding. Our prior work has shown that it is important for organizations to provide clear and complete guidance to their subordinate organizations. According to our analysis of the transition plan and what DOD officials have told us, it is unclear whether the transition plan is binding on the parties, and the plan does not provide specific guidance to the services or MDA for how to transition support responsibilities of individual elements. As a result, the transition plan is not the preferred forum for negotiating transition for all elements. For example, Air Force officials told us that they prefer using the transition plan as their negotiation forum because it identifies open issues unique to each element and documents what MDA and the Air Force will do in specific years. In contrast, Navy officials told us that they prefer to use a memorandum of agreement, signed by MDA and Navy leaders, to document transition agreements for each element since it can take several months for the transition plan to be approved. Because of the transition plan’s timing, Navy officials told us that the plan may not always reflect the Navy’s views, particularly for new elements such as the sea-based radar. Without a clear agreement on how to negotiate transition of support responsibilities, MDA has proposed that each service have a memorandum of agreement that would provide a strategic overview of how BMDS elements will transition from MDA to a service.
These service memoranda of agreement would be supplemented by an element-specific transition plan that would provide a detailed, tactical view and specify when and how responsibilities, such as support, will transition from MDA to the service. However, DOD has not documented that this approach is the preferred method. As a result, transition of support responsibilities seems to occur on an ad hoc basis, element by element, with no standard process. Further, DOD will not be able to take advantage of lessons learned from one transition effort to the next without a consistent, documented process for how support responsibilities are transitioned from MDA to the services. The Missile Defense Executive Board is developing a proposal to improve management of BMDS elements, in part, to address support and transition issues. DOD created the Missile Defense Executive Board in 2007 to recommend and oversee implementation of strategic policies, plans, program priorities, and investment options for BMDS. The draft proposal states that BMDS should be managed as a portfolio to ensure major decisions take into account the BMDS life cycle and include all major stakeholders. The proposed portfolio management suggests defense-wide funding for research and development, procurement, operation and support, and military construction. Finally, the draft proposal states that the responsibilities of DOD stakeholders in BMDS life-cycle management should be clarified. As the Board’s Chair, the office of USD (AT&L) has taken the lead in developing this draft proposal. USD (AT&L) officials explained that this draft proposal is intended to bridge the gap that exists between the traditional life-cycle system management processes and how BMDS is currently being fielded and managed.
This process is also intended to: identify which principles of traditional life-cycle system management should be applied to BMDS, such as milestone reviews and support planning; specify how to transition responsibility for support from MDA to the services; and explain when a lead service should become involved. However, the draft proposal is very general and lacks important details. In particular, the draft proposal does not specify the role or timing of service involvement in developing support plans for elements; does not require that support plans cover the elements’ expected life or be completed before fielding; does not explain how MDA and the services should negotiate transition of responsibility for providing support of BMDS elements; and does not indicate when the draft proposal is expected to be approved and implemented. Also, MDA and USD (AT&L) officials told us that the draft proposal would not require discussions about life-cycle management for elements until the element has a lead service—which makes it difficult for the lead service to provide input into support decisions. DOD’s recent efforts to develop operation and support cost estimates for BMDS elements have limitations and are not transparent for DOD and congressional decision-makers. Although DOD has started to develop operation and support cost estimates for BMDS elements, the estimates are not complete and have limitations. Furthermore, BMDS operation and support costs are not transparent in the Future Years Defense Program (FYDP). DOD has not yet clearly identified BMDS operation and support costs because the department has not required that these costs be developed, validated, and reviewed according to key principles for cost estimating, and it has not specified when this should be done or identified who is responsible for doing so. DOD has developed a draft proposal for the overall management of BMDS, but the draft proposal lacks important details and does not address the limitations we identified.
Without a requirement to develop operation and support cost estimates, DOD and the services will have difficulty preparing credible and transparent budget requests and face unknown financial obligations over the long term, thus hindering decision-makers’ ability to make informed tradeoffs among competing priorities both across BMDS elements and across the department. DOD is developing operation and support cost estimates for all seven of the BMDS elements we examined, which it intends to use in preparing its fiscal year 2010 through 2015 spending plan and to facilitate transition of funding responsibilities from MDA to the services. Thus far, MDA and the services have jointly developed and agreed on cost estimates for only two of the seven elements we examined—the Aegis ballistic missile defense and the Upgraded Early Warning Radar. MDA and the services have not yet completed the joint estimates for operation and support costs for the remaining five elements. The status of each of these remaining efforts is summarized below. Army—Ground-based Midcourse Defense, Terminal High Altitude Area Defense, and the forward-based radar: As of July 2008, MDA and the Army had not completed operation and support cost estimates for these three elements. MDA initially planned to complete the estimates by February 2008. The Army and MDA have agreed on the methodologies for developing operation and support cost estimates. However, Army officials stated that, as of July 2008, the estimates are not complete because some of the assumptions may change and the estimates have not been reviewed and approved by the Army Cost Review Board. For example, an Army cost estimator told us that the estimate for the forward-based radar is not complete because many of the major assumptions that will drive costs, such as physical site location, infrastructure, and security requirements, remain undetermined.
Air Force—European radar: The Air Force and MDA began to develop a joint estimate for the European radar in August 2008 and plan to update the estimate as assumptions are refined. However, since not all base operating support requirements are finalized, the Air Force spending plan for fiscal years 2010 through 2015, which is due to the Office of the Under Secretary of Defense (Comptroller) in August 2008, may not include all the operation and support costs for the European radar. Navy—Sea-based radar: The Navy and MDA plan to develop a joint estimate in fiscal year 2009. However, MDA and the Navy have separately developed operation and support cost estimates for this element. Using their separate estimates, MDA and Navy officials met to discuss the differences. According to MDA and Navy cost estimators, the Navy’s estimate was approximately $10 million a year higher than MDA’s, but MDA officials agreed that the Navy’s estimated platform maintenance costs were more accurate. The resulting cost estimate is intended to support a cost-sharing agreement between MDA and the Navy which, as of August 2008, had not been finalized. MDA and some service officials told us that the longer it takes to finish the estimates and agree on funding responsibilities, the less likely it is that these estimates will be reflected in the spending plans for fiscal years 2010 through 2015, which are currently under development. MDA officials have stated that their intention is to update these estimates annually, beginning in October 2009; however, as of August 2008, there were no signed agreements or requirements for the agency to do so. MDA and the services are beginning to estimate BMDS operation and support costs, but these efforts have limitations.
First, the initial estimates are not yet complete and are likely to change over time, perhaps significantly, since MDA and the services are still determining key assumptions, such as how support will be provided—by contractor, the service, or a combination of the two—and where some elements may be fielded and operated. DOD and GAO key principles for preparing cost estimates state that complete and credible cost estimates are important to support preparation of budget submissions over the short term as well as to assess the long-term affordability of the program. As discussed earlier in this report, MDA and the services have not completed long-term support planning and they are still in the process of determining where some BMDS elements will be fielded and operated. DOD and GAO key principles for developing accurate and reliable cost estimates recommend that all assumptions that can profoundly influence cost should be identified before calculating the estimate. However, MDA and the services have not determined how some of the elements will be supported over the long term, which will affect operation and support costs, such as maintenance, base operating support, and facilities. For instance, during research, development, and fielding, MDA is using contractors to support the BMDS elements. However, after the elements transition from MDA to the services, the services may decide to support the elements using their own military personnel and facilities or possibly a combination of contractor support and military service support. For example, if the Army used its own operation and support personnel, the cost estimate could increase, since the Army would require facilities costing about $138 million for 41 different buildings. Further, assumptions about where two of the BMDS elements will be fielded and operated could change and, when finalized, could affect other key assumptions and the resulting cost estimates.
An official in the Office of the Secretary of Defense, Cost Analysis Improvement Group, stated that any ambiguity in the estimate’s assumptions lowers the quality of the estimate and creates uncertainty about the results. For example, the Navy and MDA have not determined the amount of time the sea-based radar will spend on location in Adak, Alaska, in transit, and at sea. Increased fuel use alone for additional time spent in transit could significantly affect the operation and support cost estimate for the sea-based radar. Also, in developing the cost estimate for the Terminal High Altitude Area Defense, MDA and the Army assumed peacetime operations with all of the units to be located at one site within the continental United States. However, if the Army decides to forward deploy one or more of the units for peacetime rotations, as is done for other similar weapon systems such as the Patriot system, the cost estimate could change significantly. Also, additional infrastructure and operation and support costs may be incurred if the Army decides to base the Terminal High Altitude Area Defense units at more than one site within the United States. The second major limitation to DOD’s cost estimates is that DOD does not plan to have the operation and support cost estimates for all the elements independently verified. DOD and GAO key principles for cost estimating state that independent verification of cost estimates is necessary to assure accuracy, completeness, and reliability. In typical weapon system development, cost estimates—including estimates for operation and support costs—are developed, independently validated, and reviewed by senior DOD leadership before a system is fielded. However, since MDA is exempt from traditional DOD weapon system development processes, there is no requirement for independent cost estimates, and DOD’s Cost Analysis Improvement Group prepares independent cost estimates only at MDA’s request.
As of August 2008, MDA had requested independent estimates of operation and support costs for only two of the seven BMDS elements we reviewed. The Cost Analysis Improvement Group completed an estimate for Aegis ballistic missile defense in 2006 and is currently developing an estimate, including operation and support costs, for the European radar and interceptor site. Independently validated cost estimates are especially important to formulating budget submissions and DOD’s 6-year spending plan, the FYDP, which is submitted to Congress, since, historically, cost estimates created by program offices are lower than those that are created independently. Nevertheless, MDA and Cost Analysis Improvement Group officials have stated that there is no firm schedule or agreement to develop independent operation and support estimates for any of the other five BMDS elements we reviewed, including those that are already fielded, such as the forward-based radar, or will soon be fielded, such as the Terminal High Altitude Area Defense element. Moreover, although the Army Cost Review Board will review the operation and support cost estimates for the Army’s three elements, these reviews do not constitute independently developed cost estimates. MDA officials have stated that their priority is for the Cost Analysis Improvement Group to develop independent cost estimates for the research, development, and procurement costs of BMDS blocks, and this effort will not include independently estimating operation and support costs. MDA officials stated that they intend to ask the Cost Analysis Improvement Group to begin working on independent operation and support cost estimates after the block estimates are completed. However, MDA officials also acknowledged that there is no requirement for independent validation of operation and support estimates and that the Cost Analysis Improvement Group would not begin its work on operation and support cost estimates until at least late 2009.
Without credible long-term operation and support cost estimates, DOD and the services face unknown financial obligations for supporting BMDS fielding plans, which will hinder budget preparation and assessment of long-term affordability. Table 3 below shows whether the joint operation and support cost estimates have been completed, whether the cost estimates will be independently verified, and the status of the joint estimates. The cost to operate and support the BMDS elements is not transparent in the FYDP and, as a result, DOD may have difficulty communicating to congressional decision-makers how much it will cost over time to support DOD’s fielding plans. For example, the FYDP, DOD’s 6-year spending plan, does not fully reflect BMDS operation and support costs that are expected to be incurred—and these are likely to be significant since operation and support costs are typically over 70 percent of a system’s total lifetime costs. Key principles for estimating program costs note that credible cost estimates are the basis for establishing and defending spending plans. We and DOD have repeatedly recognized the need to link resources to capabilities to facilitate DOD’s decision-making and congressional oversight. However, four factors hinder the visibility of BMDS operation and support costs in the FYDP. First, for five of the seven elements we examined, MDA and the services have not yet agreed on which organization is responsible for funding operation and support costs after fiscal year 2013, as shown in Table 4 below. As a result, not all of the BMDS operation and support costs will be reflected in the FYDP for fiscal years 2010 through 2015, which is currently under development.
For example, the Army and MDA are still negotiating memoranda of agreement for the Ground-based Midcourse Defense element, Terminal High Altitude Area Defense, and forward-based radar that are intended, in part, to specify which organization is to fund operation and support costs in which fiscal years. One Army official estimated that it could take up to 18 months for these agreements to be signed. Hence, the Army will include in its budget for fiscal years 2010 through 2015 only the costs it has already agreed to fund, such as security for the first forward-based radar at Shariki, Japan, and some base support costs at Ft. Greely, Alaska. Also, MDA has not yet reached agreement with the Navy and the Air Force on which organization will fund operation and support costs for the sea-based radar and the European radar, respectively. The extent to which the FYDP for fiscal years 2010 through 2015 will include all of the operation and support costs that might be incurred for these elements is unclear. The second factor that hinders visibility of BMDS operation and support costs is that DOD’s transition plan, which is intended to reflect the most current cost agreements between MDA and the services, has not been completed in time for the services to use as they prepare their budgets and spending plans. The 2006 transition plan was approved in September 2006 and was intended to support the development of the budget and spending plan for fiscal years 2008 through 2013, but the services were required to submit their budgets to DOD in August 2006, which allowed no time for the services to alter their budget submissions accordingly. Similarly, the 2007 transition plan was originally intended to influence development of the fiscal year 2008 budget, but it was not approved until February 2008—too late to support development of the fiscal year 2008 budget.
In commenting on the 2007 transition plan, the Army stated that the plan was not the basis for the Army’s budget submission. Consequently, the transition plan has not been effective in assisting development of the services’ budget and FYDP spending plans. The third factor that hinders transparency is that DOD does not clearly identify and aggregate BMDS operation and support costs in the FYDP. We previously reported that there is no FYDP structure to identify and aggregate ballistic missile defense operational costs. In 2006, we recommended that DOD develop a structure within the FYDP to identify all ballistic missile defense operational costs. However, as of August 2008, according to an official in the Office of the Under Secretary of Defense (Comptroller), DOD has not adjusted the FYDP structure to allow identification and aggregation of ballistic missile defense operation and support costs. Fourth, as the services develop their spending plans, funding BMDS operation and support costs will compete with other service priorities. Service officials stated that BMDS operation and support costs will have to come out of their operation and maintenance budgets, which fund the training, supply, and equipment maintenance of military units, as well as the administrative and facilities infrastructure of military bases. Priorities within these budgets are highly competitive, and BMDS operation and support would have to compete against all other service operation and maintenance priorities. It is therefore unclear how much of the operation and support costs will ultimately be reflected in the services’ budget submissions and spending plans, and DOD faces a risk that operation and support for BMDS will be funded unevenly across elements.
DOD has not yet clearly identified BMDS operation and support costs because the department has not required that these costs be developed, validated, and reviewed, and it has not specified when this should be done or identified who is responsible for doing so. Without such a requirement, DOD’s operation and support cost estimates will continue to have limitations and will not be transparent in the FYDP. As a result, DOD will have difficulty preparing credible budget requests and estimating long-term costs, which are important in assessing affordability over time. As mentioned earlier in this report, DOD’s Missile Defense Executive Board is developing a draft proposal for the overall management of BMDS, which is intended to include an approach for managing and funding operation and support; however, the draft proposal is not well defined. The draft proposal suggests funding operation and support costs from a defense-wide account which, in theory, would allow these costs to be clearly identified and would alleviate the pressure on the services’ budgets to fund operation and support for BMDS. However, this proposal as drafted to date does not fully address the operation and support cost limitations identified in this report. Specifically, the explicit process detailing how the proposal would work has not been developed. Among other things, the draft proposal does not specify how MDA and the services will jointly determine the amount of operation and support funding that is needed; when and how operation and support cost estimates are to be developed, validated, and reviewed; or who should be responsible for doing so. Also, the draft proposal does not include a requirement for senior-level review of cost estimates in which the cost drivers and differences between the program estimates and independent estimates could be reviewed and explained.
In typical weapon system programs, the program office estimate and the independent estimate are reviewed by senior DOD leaders and differences explained. Finally, it is not clear when the draft proposal will be approved or implemented. As a result, there is little likelihood that the upcoming DOD spending plan for fiscal years 2010 through 2015 will contain significant improvements in the visibility of BMDS operation and support costs. Although DOD has taken some initial steps to plan for support of BMDS elements, without a clearly defined process for long-term support planning, DOD is not poised to effectively manage the transition of BMDS support responsibilities from MDA to the services or to plan for their support over the long term. This will become increasingly important in years to come as more elements are fielded and operation and support costs begin to increase. Further, if the lead service is not actively involved early enough to influence support planning, the services may have little time to prepare to assume responsibility for the elements and could risk being unable to provide support for an element in the short term, particularly for new elements that did not originate in a service, such as the adjunct sensor. At the same time, DOD may face difficulties determining how the overall BMDS and individual elements will be sustained over the long term. MDA is not required to follow all of DOD’s traditional life-cycle management processes for weapon system programs. 
However, unless DOD takes action—either via the Missile Defense Executive Board’s draft proposal or by some other means—to establish when support planning that covers an element’s expected life and involves the services is to be completed, to specify who is responsible for life-cycle management and what this entails, and to establish accountability for ensuring these steps are completed, Congress will lack assurance that key decisions involving the services have been made regarding which organization is responsible for providing support and how that support will be provided over the long term. Further, as Congress considers requests to fund operation and support for BMDS elements in the face of many competing priorities, decision-makers may lack confidence that DOD has plans in place to assure the overall long-term supportability of this complex and costly system. As one of DOD’s largest weapon system investments, BMDS could easily incur billions of dollars in operation and support costs over time. Operation and support typically comprises over 70 percent of a weapon system’s total cost over its life. It is therefore critical that DOD and congressional decision-makers have complete, credible, and transparent cost information with which to evaluate budget requests in the near term and to evaluate whether fielding plans are affordable over the long term as an increasing number of BMDS elements are fielded. Given the program’s limited transparency to date, Congress is already limited in its ability to evaluate the near- and long-term budget implications of decisions already made to develop and field BMDS elements. Until DOD develops accurate, realistic, and transparent cost estimates according to key principles, including independent verification, its estimates will continue to lack the credibility necessary for building budget submissions and spending plans. 
Also, since MDA and the services have, in general, not reached agreement on who will pay for operation and support after 2013, and since BMDS will compete with other service priorities, there is a risk that operation and support funding for BMDS elements will vary from element to element. Until DOD requires that credible estimates be developed and until DOD specifies how BMDS operation and support funds will be prioritized, allocated, and distributed, the department risks being unable to clearly identify and align operation and support costs with fielding plans or to assure that funds are available for the operation and support of the missile defense elements over the long term. Further, the department will continue to lack internal controls to manage and oversee a significant amount of federal funds. Moreover, DOD and the services face unknown financial obligations to support BMDS elements over the long term. Finally, decision-makers inside and outside DOD will not have a sound basis with which to make difficult funding tradeoffs among competing priorities both across BMDS elements and across the department. 
We recommend that the Secretary of Defense take the following six actions:

To improve planning to support BMDS elements, including planning for the transition of support responsibilities from MDA to the services, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to establish a standard process for long-term support planning that adheres to key principles for life-cycle management, including:

- establishing timelines for planning that must be completed before each element is fielded, such as naming a lead service, involving the services in support and transition planning, and deciding when support responsibilities will be transitioned to the services;
- requiring active lead service participation in developing long-term support plans and designating what support planning should be completed before elements are fielded; and
- specifying which organization is responsible for life-cycle management and identifying steps for oversight to establish who is accountable for ensuring these actions are accomplished.

To increase transparency and improve fiscal stewardship of DOD resources for BMDS, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to establish a requirement to estimate BMDS operation and support costs, including:

- detailing when credible estimates are to be developed, updated, and reviewed; specifying criteria for prioritizing, allocating, and distributing funds; and clearly identifying who is responsible for oversight of this process;
- requiring periodic independent validation of operation and support costs for each BMDS element; and
- using the independently validated estimates to support preparation of complete and credible budget submissions and DOD’s spending plan and to assess the long-term affordability of the integrated system and individual elements for informing key trade-off decisions. 
In written comments on a draft of this report, DOD concurred with one and partially concurred with five recommended actions. The department’s comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we have incorporated as appropriate. DOD partially concurred with our three recommendations to improve long-term support planning for BMDS elements. First, DOD partially concurred with our recommendation that the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD (AT&L)) establish timelines for planning that must be completed before each element is fielded, such as naming a lead service, involving services in support and transition planning, and deciding when support responsibilities will be transitioned to the services. In its comments, DOD stated that the new BMDS life-cycle management process provides for service participation in annual MDA planning and programming. DOD further stated that through this process, timelines for transition of BMDS elements from MDA to the services will be executable within reasonable periods of time following initial fielding. DOD also stated that tailored negotiations between MDA and the services would be better than establishing uniform timelines and that the Missile Defense Executive Board would step in if issues cannot be resolved in a timely fashion. However, USD (AT&L) officials told us that, as of September 15, 2008, the proposed BMDS life-cycle management process remains a draft proposal and has not yet been implemented. Moreover, the draft proposal does not specify the role or timing for service involvement in developing support plans for elements. Regarding DOD’s preference not to establish uniform timelines, we believe that key steps in completing support planning can be condition-based rather than calendar-based. 
For example, we point out in our report that MDA’s own sustainment directive specifies what criteria, including support planning, should be completed before an element moves to a subsequent development phase. Also, while the Missile Defense Executive Board may step in to resolve issues, the Board is a new organization and it is not clear what criteria the Board would use to determine whether intervention is needed, particularly in the absence of specific guidance outlining how the process should work. Our recommendation would provide some needed structure and specificity that the draft proposal currently lacks; unless DOD takes action to implement this recommendation, transition of support responsibilities may continue in an ad hoc manner, and DOD may not be able to take advantage of lessons learned from one transition effort to the next. Second, DOD partially agreed with our recommendation that USD (AT&L) require active lead service participation in developing long-term support plans and designate what support planning should be completed before elements are fielded. DOD agreed that it is better to put long-term support plans into effect before BMDS elements are fielded, but said that fielding of an element should not be delayed because of incomplete support planning. DOD stated that once a lead service is designated, the element enters the transition phase, memoranda of agreement are established, and an assessment is made by the department to determine when the element transfer is appropriate. As stated in our report, however, DOD has not documented that establishing memoranda of agreement is the preferred method of negotiating transition of responsibilities from MDA to the services. DOD also stated that by initiating its proposed life-cycle management process, the department intends to ensure that the services are active participants in long-term support planning. 
However, we point out in our report that several elements were fielded before support plans were completed and some, like the forward-based radar, still do not have a support plan more than 2 years after fielding. Also, we point out that DOD’s draft proposal for life-cycle management lacks important details, such as when support plans are to be completed and how MDA and the services should negotiate transition of responsibility for providing support. Further, it is not clear when this draft proposal might be approved and implemented. Therefore, without specifying active service participation in developing long-term support plans and when these should be completed, DOD is likely to face continued difficulty in transitioning support responsibilities from MDA to the services, and uncertainty will persist regarding how elements will be supported over the long term. Third, DOD partially agreed with our recommendation that USD (AT&L) specify which organization is responsible for life-cycle management and identify steps for oversight to establish who is accountable for ensuring these actions are accomplished. DOD stated that USD (AT&L) is responsible for initiating lead service designations and expects that the proposed life-cycle management process will ensure service involvement. DOD further stated that the Missile Defense Executive Board is chartered to provide oversight. However, we point out in our report that MDA and the services disagree over which organization will be responsible for performing life-cycle management responsibilities, such as providing and planning for support over the long term. 
Further, even though the Missile Defense Executive Board may provide some oversight, the proposed management process developed by this Board does not specify the role or timing for service involvement in developing support plans for elements, that support plans are to cover the elements’ expected life and be completed before fielding, or how MDA and the services should negotiate transition of responsibility for providing support of BMDS elements. Our prior work has shown that establishing clear roles and responsibilities can improve outcomes by identifying who is accountable for various activities. Therefore, without specifically designating life-cycle management responsibilities and specifying what these responsibilities entail, DOD may continue to face challenges in its ability to transition responsibility for providing support from MDA to the services and will be limited in its ability to improve long-term support planning for future BMDS elements. DOD concurred with one and partially concurred with two of our recommendations to establish a requirement to estimate BMDS operation and support costs. DOD agreed with our recommendation that USD (AT&L) require periodic independent validation of operation and support costs for each BMDS element. In its comments, DOD stated that periodic independent estimates of operation and support costs for BMDS elements are desirable. DOD also stated that the current arrangement between its Cost Analysis Improvement Group and MDA provides for independent cost estimates based on the MDA Director’s priorities and that additional direction from the Under Secretary on the timing and frequency of independent cost estimates could facilitate planning for and executing these estimates. Although DOD agreed with this recommendation, its response did not indicate when it would implement the recommendation. 
Since independent verification of cost estimates is necessary to assure accuracy, completeness, and reliability, we encourage DOD to implement this recommendation as soon as possible. Without credible long-term operation and support cost estimates, DOD and the services face unknown financial obligations for supporting BMDS fielding plans, which will hinder assessing long-term affordability. DOD partially agreed with our recommendation that the Secretary of Defense direct USD (AT&L) to detail when credible estimates are to be developed, updated, and reviewed; specify criteria for prioritizing, allocating, and distributing funds; and clearly identify who is responsible for oversight of this process. In its comments, DOD stated that it does not require specific direction from the Under Secretary at this time. However, we reported that DOD has not clearly identified operation and support costs because the department has not required that these costs be developed, validated, and reviewed. Therefore, we continue to believe that, in the absence of a clear requirement for estimating long-term operation and support costs, direction from senior DOD leadership is needed. DOD also stated in its comments that it remains confident its proposed BMDS life-cycle management process and the efforts of the Missile Defense Executive Board will be successful in ensuring that decision-makers have complete, credible, and transparent cost information before the services assume and/or fund any responsibilities transitioned to them. However, as we reported, the BMDS draft proposal for the life-cycle management process is not well defined and does not specify when and how operation and support cost estimates are to be developed, validated, and reviewed or who should be responsible for doing so. Also, we reported that it is not clear when the draft proposal will be approved or implemented and DOD’s comments did not provide us with a schedule or time frame for taking action. 
Without taking specific action on this recommendation, it is not clear who will be responsible for ensuring credible operation and support estimates are developed or how these funds will be managed. Further, decision-makers inside and outside DOD will not have a sound basis with which to make difficult funding tradeoffs among competing priorities both across BMDS elements and across the department. Finally, DOD partially agreed with our recommendation that the Secretary of Defense direct USD (AT&L) to use independently validated operation and support cost estimates to support preparation of complete and credible budget submissions and DOD’s spending plan and to assess the long-term affordability of the integrated system and individual elements for informing key trade-off decisions. In its comments, DOD agreed that, whenever possible, independent cost estimates should be used to support its planning, programming, and budgeting decisions, but stated that the department does not believe that specific direction from the Under Secretary is needed. We reported that BMDS operation and support costs are not transparent in DOD’s spending plan, the Future Years Defense Program, and that DOD has not yet completed operation and support cost estimates for several BMDS elements. Although DOD agreed that independent cost estimates should be used to support planning, programming, and budgeting decisions, its draft proposal for the life-cycle management process does not address this issue. Without specific direction to use independently validated cost estimates to prepare budget submissions and spending plans, there is little assurance that DOD’s future spending plans will contain significant improvements in the credibility of BMDS operation and support costs. We are sending copies of this report to the Secretary of Defense; the Director, Missile Defense Agency; Chairman, Joint Chiefs of Staff; and the Chiefs of Staff and Secretaries of the Army, Navy, and Air Force. 
We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (404) 679-1816. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) has (1) planned for the support of Ballistic Missile Defense System (BMDS) elements over the long term and (2) identified the long-term operation and support costs for the BMDS elements it plans to field, we conducted various analyses, reviewed key documentation, and interviewed relevant DOD officials. During this review, we focused on the seven BMDS elements that are already fielded or planned for fielding over fiscal years 2008 through 2015. Since all the BMDS elements are in various stages of development and transition to a military service, we selected a nongeneralizable sample to provide illustrative examples of issues related to both objectives. The illustrative sampling strategy identifies examples to gain deeper insight, demonstrate consequences, and provide practical, significant information about the BMDS elements under a variety of conditions, such as identifying at least one element that is intended to transition to each of the services, some elements that are already fielded, and some elements that will be fielded by 2015. As a result, we selected seven BMDS elements: Aegis Ballistic Missile Defense, Ground-based Midcourse Defense, Terminal High Altitude Area Defense, AN/TPY-2 (forward-based radar), Sea-based X-band Radar, Upgraded Early Warning Radar, and European Midcourse Radar. 
To assess the extent to which DOD has developed plans for how to support BMDS elements over the long term, we compared the planning that had been done with key principles embodied in DOD and Missile Defense Agency (MDA) policies and guidance for life-cycle management to determine what aspects may be missing or have limited service involvement that could hinder transition of responsibility for support of BMDS elements from MDA to the services and hinder the ability to provide long-term support. To do so, we obtained and assessed relevant documents such as BMDS element support plans, MDA support documents, DOD guidance for MDA and the Missile Defense Executive Board, and MDA documents explaining program status and plans such as the 2007 BMDS Transition and Transfer Plan signed February 4, 2008. We also discussed the extent of support planning, the level of service involvement in support and transition planning, and whether the assignment of life-cycle management responsibilities was clearly designated with MDA and relevant officials from the Army, Navy, and Air Force. Further, using DOD briefings, memorandums, and discussions with DOD officials, we compared the Missile Defense Executive Board’s draft proposal for BMDS management with the shortfalls in support planning we identified to determine the extent to which the draft proposal may address those shortfalls. Finally, we discussed the results of our comparisons with officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; Missile Defense Agency; Air Force Headquarters Strategic Plans and Policy Directorate; U.S. Air Force Space Command; U.S. Army Headquarters and Space and Missile Defense Command; and the Office of Naval Operations Theater Air and Missile Defense Branch. 
To assess whether DOD has identified the long-term operation and support costs for the BMDS elements it plans to field, we evaluated how MDA and the services developed cost estimates and then compared the method by which those estimates were prepared with key principles compiled from DOD and GAO sources that describe how to develop accurate and reliable cost estimates to determine their completeness and the extent to which DOD took steps to assess the confidence in the estimates. We then discussed the results of our comparison and the status of the operation and support cost estimates with officials from the Office of the Deputy Assistant Secretary of the Army for Cost and Economics; the Naval Center for Cost Analysis; Air Force Space Command; the Missile Defense Agency; the Office of the Secretary of Defense Program, Analysis, and Evaluation and its Cost Analysis Improvement Group; and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. In addition, we assessed key documents such as the 2007 Transition and Transfer Plan and the Aegis memorandum of agreement to determine the extent to which MDA and the services have or have not agreed to fund operation and support costs for BMDS elements after 2013 and confirmed our understanding with MDA and service officials. Furthermore, to follow up on our previous recommendation, we interviewed an official in the Office of the Under Secretary of Defense (Comptroller) to determine whether DOD had taken any action on our recommendation to develop a structure in the FYDP to identify all ballistic missile defense operational costs. Finally, using DOD briefings and other documents, we compared the Missile Defense Executive Board draft proposal for BMDS management with the shortfalls in estimating and funding operation and support costs we identified to determine the extent to which the draft proposal may address those shortfalls. 
We discussed our findings with officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Missile Defense Agency. Other organizations we visited to gain an understanding of their roles in support planning and cost estimating included the Joint Staff, U.S. Strategic Command and its Joint Functional Component Command for Integrated Missile Defense, and U.S. Northern Command. We conducted this performance audit from August 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Gwendolyn R. Jaffe and Marie A. Mak, Assistant Directors; Brenda M. Waterfield; Whitney E. Havens; Pat L. Bohan; Pamela N. Harris; Kasea Hamar; Nicolaas C. Cornelisse; and Susan C. Ditto made key contributions to this report.

Ballistic Missile Defense: Actions Needed to Improve the Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008.
Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008.
Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs, Exposure Draft. GAO-07-1134SP. Washington, D.C.: July 2007.
Missile Defense: Actions Needed to Improve Information for Supporting Future Key Decisions for Boost and Ascent Phase Element. GAO-07-430. Washington, D.C.: April 17, 2007.
Defense Acquisitions: Missile Defense Acquisition Strategy Generates Results, but Delivers Less at a Higher Cost. GAO-07-387. Washington, D.C.: March 15, 2007.
Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006.
Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goal. GAO-06-327. Washington, D.C.: March 15, 2006.
Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005.
Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005.
Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005.
Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005.
Future Years Defense Program: Actions Needed to Improve Transparency of DOD’s Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004.
Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004.
Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004.
Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003.
Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003.
Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003.
Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002.
Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002.
Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000.
Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000.
The Department of Defense (DOD) has spent a total of over $115 billion since the mid-1980s to develop a Ballistic Missile Defense System (BMDS) composed of land-, air-, and sea-based elements, such as missiles and radars, working together as an integrated system. Since the cost to operate and support a weapon system usually accounts for most of a system's lifetime costs, the resources needed to fund BMDS could be significant as DOD fields an increasing number of BMDS elements. In 2005, DOD began planning to transition responsibility for supporting BMDS elements from the Missile Defense Agency (MDA) to the services. GAO was asked to assess the extent to which DOD has (1) planned to support BMDS elements over the long term and (2) identified long-term operation and support costs. To do so, GAO analyzed 7 BMDS elements that will be fielded by 2015, compared DOD's plans and cost estimates to DOD and GAO key principles, and assessed the extent to which MDA and the services have agreed on responsibilities for supporting and funding BMDS elements. DOD has taken some initial steps to plan for BMDS support, but efforts to date are incomplete, and difficulties in transitioning responsibilities from MDA to the services have complicated long-term planning. DOD key principles for weapon system life-cycle management stress the importance of completing support plans that cover a system's expected useful life before it is fielded. Although MDA has developed some policies and guidance for BMDS support planning, it has not developed support plans for three of the seven elements that GAO examined, and MDA has not completed an overall support plan for the integrated system. DOD's long-term support planning for BMDS is incomplete because it has not established a standard process clearly specifying what support planning should be completed before fielding or how to transition the responsibility for supporting BMDS elements from MDA to the services. 
For five of the seven elements GAO examined, MDA and the services have been unable to reach agreement on who will be responsible for providing support after 2013. DOD has drafted a proposal for BMDS management that DOD officials have stated is intended, in part, to address these issues. However, the draft proposal lacks important details, and it is not clear when it is expected to be approved and implemented. Without a standardized process for long-term support planning, uncertainty will persist regarding how the elements will be supported over the long term. DOD's recent efforts to develop operation and support cost estimates for BMDS elements have limitations and are not transparent for DOD and congressional decision makers. DOD and GAO key principles for cost estimating state that complete, credible, and independently verified cost estimates are important to support preparation of budget submissions over the short term as well as for assessing the long-term affordability of a program. DOD has started to develop operation and support cost estimates for the seven elements GAO examined, but those efforts are not yet complete and have limitations. First, the estimates are likely to change since DOD is still determining key assumptions. Second, DOD does not plan to have the estimates independently verified. Furthermore, the Future Years Defense Program, DOD's 6-year spending plan, does not fully reflect BMDS operation and support costs. DOD has not yet clearly identified BMDS operation and support costs because the department has not required that these costs be developed, validated, and reviewed, and it has not specified when this should be done or who is responsible for doing so. Although DOD's draft proposal for managing BMDS contains some funding suggestions, it does not address the operation and support cost limitations GAO identified. 
Without a requirement to develop and validate BMDS operation and support cost estimates, DOD will have difficulty preparing credible budget requests and assessing the affordability of BMDS over the long term.
USPS has a vast mail processing network consisting of multiple facilities with different functions, as shown in figure 2. In fiscal year 2011, according to USPS, it had a nationwide mail processing network that included 461 facilities, 154,325 full-time employees, and about 8,000 pieces of mail processing equipment. This network transports mail from where it is entered into USPS’s network, sorts it for carriers to deliver, and distributes it to a location near its destination in accordance with specific delivery standards. USPS receives mail into its processing network from different sources such as mail carriers, post offices, and commercial entities. Once USPS receives mail from the public and commercial entities, it uses automated equipment to sort and prepare mail for distribution. The mail is then transported between processing facilities where it will be further processed for mail carriers to pick up for delivery. Trends in mail use underscore the need for fundamental changes to USPS’s business model. First-Class Mail volume peaked in fiscal year 2001 at nearly 104 billion pieces and has fallen about 29 percent, or 30 billion pieces, as of fiscal year 2011. Although First-Class Mail volume accounted for 44 percent of total mail volume in fiscal year 2011, it generated about 49 percent of USPS’s revenue. In comparison, Standard Mail (primarily advertising) accounted for 51 percent of total mail volume but generated only about 27 percent of USPS’s revenue. Further, it takes about three pieces of Standard Mail, on average, to equal the financial contribution from one piece of First-Class Mail. Looking forward, USPS projects that First-Class Mail will decline significantly between now and 2020. For the first time, in 2010, less than 50 percent of all bills were paid by mail as consumers continue to switch to electronic alternatives. 
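The volume and revenue shares above can be checked with quick arithmetic. The sketch below is illustrative only: it normalizes total volume and revenue to 1.0 and computes the gross revenue generated per piece for each class; note that the report's "about three pieces" figure refers to financial contribution (revenue net of attributable cost), an assumption of this sketch, which is why it differs from the gross-revenue ratio computed here.

```python
# Back-of-the-envelope check of the fiscal year 2011 mail mix figures cited above.
# Shares are taken from the report; total volume and revenue are normalized to 1.0.
first_class_volume_share = 0.44   # First-Class Mail share of total volume
first_class_revenue_share = 0.49  # First-Class Mail share of total revenue
standard_volume_share = 0.51      # Standard Mail share of total volume
standard_revenue_share = 0.27     # Standard Mail share of total revenue

# Revenue per piece as an index (total revenue / total volume = 1.0).
first_class_per_piece = first_class_revenue_share / first_class_volume_share
standard_per_piece = standard_revenue_share / standard_volume_share

ratio = first_class_per_piece / standard_per_piece
print(f"Gross revenue per First-Class piece is about {ratio:.1f}x a Standard piece")
# The ratio of roughly three pieces cited in the report reflects contribution
# (revenue minus attributable cost), not gross revenue, so it exceeds this figure.
```

Running this yields a gross-revenue ratio of roughly 2.1, consistent with (and lower than) the contribution-based three-to-one comparison in the text.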
USPS projects that Standard Mail volume will remain roughly flat between now and 2020, thereby increasing its share of revenues generated. Almost 60 percent of mail received by households in 2010 was advertising. USPS has said that its mail processing network is configured primarily so that it can meet the First-Class Mail delivery standards within a 1- to 5-day window, depending on where the mail is entered into the postal system and where it will be delivered. Most First-Class Mail is to be delivered in 1 day when it is sent within the local area served by the destinating mail processing center; 2 days when it is sent within reasonable driving distance, which USPS considers within a 12-hour drive time; 3 days for other mail, such as mail transported over long distances by air; and 4 to 5 days if delivery is from the 48 contiguous states to the noncontiguous states, Puerto Rico, the U.S. Virgin Islands, or Guam. Delivery service standards within the contiguous 48 states generally range from 1 to 10 days for other types of mail. Delivery service standards help USPS, mailers, and customers set realistic expectations for the number of days mail takes to be delivered, and to plan their activities accordingly. USPS requires a certain level of facilities, staff, equipment, and transportation resources to consistently meet First-Class Mail and other delivery service standards as expected by its customers. The USPS processing and transportation networks were developed during a time of growing mail volume, largely to achieve service standards for First-Class Mail and Periodicals, particularly the overnight service standards. To revise service standards, USPS can propose changes, such as elimination of overnight delivery for First-Class Mail, through a regulatory proceeding that includes the consideration of public comments. 
Further, whenever USPS proposes a change in the nature of postal services that affects service on a nationwide basis, USPS must request an advisory opinion on the change from PRC. In addition, USPS annual appropriations have mandated 6-day delivery and rural mail delivery at certain levels. USPS has asked Congress to allow it to change the delivery standard from 6- to 5-day-a-week delivery. USPS and other stakeholders have long recognized the need for USPS to reduce excess capacity in its mail processing network. In 2002, USPS released a Transformation Plan that provided a comprehensive strategy to adapt the mail processing and delivery networks to changing customer demands, eroding mail volumes, and rising costs. One key goal cited in the plan was for USPS to become more efficient by standardizing operations and reducing excess capacity in its mail processing and distribution infrastructure. In 2003, a presidential commission examining USPS’s future issued a report recommending several actions that would facilitate USPS efforts to consolidate its mail processing network. The commission determined that USPS had far more facilities than it needed and those facilities that it did require often were not used in the most efficient manner. The commission recommended that Congress create a Postal Network Optimization Commission modeled in part on the Department of Defense’s Base Realignment and Closure (BRAC) Commission, to make recommendations relating to the consolidation and rationalization of USPS’s mail processing and distribution infrastructure. We reported in 2010 that Congress has considered BRAC-type approaches to assist in restructuring organizations that are facing key financial challenges. These commissions have gained consensus and developed proposed legislative or other changes to address difficult public policy issues. 
The 2003 presidential commission also recommended that USPS exercise discipline in its hiring practices to “rightsize” and realign its workforce with minimal displacement. The Postal Accountability and Enhancement Act (PAEA), enacted in 2006, encouraged USPS to expeditiously move forward in its streamlining efforts and required USPS to develop a network plan describing its long-term vision for rationalizing its infrastructure and workforce. The plan was to include a strategy to consolidate its mail processing network by eliminating excess capacity and identifying cost savings. In June 2008, USPS provided its Network Plan to Congress, which we describe in more detail later in the report. In 2009, GAO added USPS to its list of high-risk areas needing attention by Congress and the executive branch to achieve broad-based transformation. Given the decline in mail volume and revenue, we suggested that USPS develop and implement a broad restructuring plan—with input from PRC and other stakeholders and approval by Congress and the administration. We added that this plan should address how USPS plans to realign postal services (such as delivery frequency and delivery standards); better align its costs and revenues; optimize its operations, network, and workforce; increase mail volume and revenue; and retain earnings so that it can finance needed capital investments and repay its growing debt. In 2009, we testified that maintaining USPS’s financial viability as the provider of affordable, high-quality universal postal services would require actions in a number of areas, such as rightsizing its retail and mail processing networks by consolidating operations and closing unnecessary facilities. Furthermore, in 2010 we provided strategies and options that Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS’s ability to reduce costs and improve efficiency. 
We reported that, as options for reducing network costs, USPS could close major mail processing facilities and relax delivery standards to facilitate consolidations and closures of mail processing facilities. From fiscal years 2006 through 2011, USPS data showed that it reduced mail processing and transportation costs by $2.4 billion—or 16 percent—by reducing the number of mail processing work hours, facilities, and employees, as shown in table 1. Specifically, USPS data show that it eliminated about 35 percent of its total mail processing work hours, 32 percent of its mail processing facilities, and 20 percent of its full-time mail processing employees. USPS’s OIG report determined that a valid business case existed for 31 of the 32 implemented AMP studies (97 percent) reviewed, and that those cases were supported by adequate capacity, increased efficiency, reduced work hours and mail processing costs, and improved service standards. (United States Postal Service, Office of Inspector General, U.S. Postal Service Past Network Optimization Initiatives, CI-AR-12-003 (Arlington, VA: Jan. 9, 2012).) When a study does not demonstrate a valid business case, USPS decides not to approve the AMP study. According to USPS data, it achieved savings of $167 million from AMP consolidations in fiscal years 2010 and 2011.

Transformed the Bulk Mail Center network: In the past, mailers dropped their bulk mail at a network of 21 Bulk Mail Centers. USPS would then process and transport the bulk mail to its final destinations. By 2007, however, a significant portion of this mail bypassed the Bulk Mail Center network and was dropped at a processing plant closer to its final delivery point. In fiscal year 2009, USPS reported that it had begun transforming its 21 Bulk Mail Centers into Network Distribution Centers and completed the transformation in fiscal year 2010. According to USPS, this was designed to better align work hours with workload and improve transportation utilization, resulting in cost savings of $129 million. 
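The reported $2.4 billion (16 percent) reduction implies the underlying cost levels, which can be derived as a rough sketch (the fiscal year 2006 and 2011 baselines are inferred from the two reported figures, not stated in this passage):

```python
# Derive the cost levels implied by the reported $2.4 billion (16 percent)
# reduction in mail processing and transportation costs, FY2006-FY2011.
# Only the two reported figures are used; the baselines are inferred.
savings_billion = 2.4   # reported 5-year savings
pct_reduction = 0.16    # reported percentage reduction

implied_fy2006 = savings_billion / pct_reduction     # ~ $15.0 billion
implied_fy2011 = implied_fy2006 - savings_billion    # ~ $12.6 billion

print(f"Implied FY2006 costs: ${implied_fy2006:.1f} billion")
print(f"Implied FY2011 costs: ${implied_fy2011:.1f} billion")
```

This is consistent with a network whose processing and transportation costs fell from roughly $15 billion to roughly $12.6 billion over the period.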
Even after taking these actions to reduce excess capacity, USPS stated that excess capacity continues and structural changes are necessary to eliminate it. Three major reasons for continued excess capacity include the following:

Accelerating declines in mail volume: Since 2006, declines in mail volume have continued to worsen. For example, single-piece First-Class Mail has dropped by almost 19 billion pieces. Furthermore, USPS’s volume forecasts to 2020 indicate that the decline in First-Class Mail volume will not abate going forward but instead will continue—from 73 billion pieces in 2011 to 39 billion pieces in 2020—further exacerbating the problem of costly excess capacity (see fig. 3). Declining First-Class Mail volume is primarily attributed to the increasing number of electronic communications and transactions. The recent recession and other economic difficulties have further accelerated mail volume decline.

Continuing automation improvements: These improvements have enabled USPS to sort mail faster and more efficiently. For example, USPS’s Flats Sequencing System machines automatically sort larger mail pieces (e.g., magazines and catalogs) into the order that they will be delivered. At the end of fiscal year 2011, USPS reported that it had deployed 100 flats sequencing machines to 46 sites and the Flats Sequencing System covered nearly 43,000 delivery routes and processed an average of almost 60 percent of flats at more than half of those sites.

Increasing mail preparation and transportation by mailers: While most First-Class Mail goes through USPS’s entire mail processing network, around 83 percent of Standard Mail is destination entered—that is, business mailers enter mail within a local area where it will be delivered, bypassing most of USPS’s mail processing network and long-distance transportation. The percentage of Standard Mail that is destination entered has increased 16 percent over the last decade. 
On December 15, 2011, USPS asked PRC to review and provide an advisory opinion on its proposal to change its delivery service standards, primarily by changing its delivery standards to eliminate overnight delivery service for most First-Class Mail and Periodicals. USPS has stated that these changes in delivery service standards are a necessary part of its plan to consolidate its mail processing operations, workforce, and facilities. Under this plan, the 42 percent of First-Class Mail that is currently delivered within 1 day would be delivered within 2 to 3 days. See table 2 for the percentage of First-Class Mail volume that is intended to be delivered within the current and proposed delivery service standards. USPS’s plan included details on facilities, staff, equipment, and transportation that USPS would eliminate as a result of the change in delivery service standards and the estimated cost savings from these changes. On the basis of an analysis of fiscal year 2010 costs, USPS estimated that service standard changes centered on eliminating overnight service for significant portions of First-Class Mail and Periodicals could save approximately $2 billion annually when fully implemented. To save this amount, USPS stated that it plans to use the already established AMP study process, which was designed to achieve cost savings through the consolidation of operations and facilities with excess capacity. USPS has stated that the AMP process provides opportunities for USPS to reduce costs, improve service, and operate as a leaner, more efficient organization by making better use of resources, space, staffing, processing equipment, and transportation. In a February 2012 press release, USPS announced that it would begin consolidating or closing 223 processing facilities during the summer and fall of 2012—contingent on a final decision to change service standards, which it said it expects to complete sometime in March. 
USPS added that it will not close any facilities prior to May 15, 2012, as agreed upon with some Members of Congress. PRC is currently reviewing the details of USPS’s proposal to revise service standards, the estimated cost savings, the potential impacts on both senders and recipients, and USPS’s justification for the change to advise USPS and Congress on the merits of USPS’s proposal. PRC procedures enable interested stakeholders, including the public, to file questions and comments to PRC regarding USPS’s proposal. PRC expects to issue its advisory opinion on USPS’s proposal after the time for obtaining public input is concluded in July 2012. USPS has stated that consolidating its networks is unachievable without relaxing delivery service standards. The Postmaster General testified last September that such a change would allow for a longer operating window to process mail, which would enable USPS to reduce unneeded facilities, work hours, workforce positions, and equipment. USPS identified scenarios looking at how constraints within the mail processing network affected excess capacity and found that if the current standard for overnight First-Class Mail service was relaxed, plant consolidation could occur, making fuller use of facilities, labor, and equipment. USPS estimates of excess capacity it wants to eliminate based on proposed changes to its overnight delivery service standards are shown in table 3. USPS estimated that it could consolidate, all or in part, 223 processing facilities based on its proposed changes in First-Class and Periodical delivery service standards. USPS has also specified that changing delivery service standards would enable it to remove up to 35,000 mail processing positions as it consolidates operations into fewer facilities. The number of employees per facility ranges from 50 to 2,000. 
Reducing work hours and the size and cost of its workforce will be key for USPS, since its workforce generates about 80 percent of its costs. In addition, USPS entered into a collective bargaining agreement with the American Postal Workers Union in April 2011 that established a two-tier career pay schedule for new employees that is 10.2 percent lower than the existing pay schedule. This labor agreement also allowed USPS to increase its use of noncareer employees from 5.9 percent to 20 percent, thereby enabling USPS to hire more lower-paid noncareer employees when replacing full-time career employees. USPS has also pointed out that it has about 8,000 pieces of equipment used for processing mail, but could function with as few as 5,000 pieces if it adopts the proposed delivery service standards. Declining mail volume has reduced the need for machines that sort mail using Delivery Point Sequencing (DPS) programs, on a national level, by approximately one-half. According to USPS, a reduction in Delivery Point Sequencing machinery use would allow for greater reliance on machinery that incurs lower maintenance costs. In addition, much of this equipment is currently used to sort mail only 4 to 6 hours per day. USPS plans to optimize the use of its remaining equipment to sort mail by increasing its maximum usage up to 20 hours per day. USPS estimates that it makes more transportation trips than are currently necessary. USPS’s transportation network includes the movement of mail between origin and destination processing plants. USPS, however, has estimated that changing its delivery service standards as proposed in December 2011 would enable it to reduce these facility-to-facility trips by about 25 percent, or 376 million trips. Relaxing delivery standards and consolidating its mail processing network is just one part of USPS’s overall strategy to achieve financial stability. 
On the revenue side, USPS has noted that it cannot increase mail prices beyond the Consumer Price Index cap, and price increases cannot remedy the revenue loss resulting from First-Class Mail volume loss. USPS has also reported that it faces restrictions on entering new lines of business and does not see any revenue growth solution to its current financial problems. In February 2012, USPS announced a 5-year business plan to achieve financial stability that included a goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016. USPS’s proposed changes in its mail processing and transportation networks are included in its 5-year business plan, as are initiatives to save

1. $9 billion in network operations, of which $4 billion would come from consolidating its mail processing and transportation networks;
2. $5 billion in compensation and benefits; and
3. $8.5 billion through legislative changes, such as moving to a 5-day delivery schedule and resolving funding issues associated with USPS’s retiree health benefits.

At the same time, USPS’s 5-year plan would also reduce the overall size of the postal workforce by roughly 155,000 career employees, of which up to 35,000 would come from consolidating the mail processing network, with many of those reductions expected to result from attrition. According to the 5-year plan, half of USPS’s career employees—283,000 employees—will be retirement eligible by 2016. In March 2010, USPS presented a detailed proposal to PRC to move from a 6-day to a 5-day delivery schedule to achieve its workforce and cost savings reduction goals. USPS projected that its proposal to move to 5-day delivery by ending Saturday delivery would save about $3 billion annually and would reduce mail volume by less than 1 percent. However, on the basis of its review, PRC estimated a lower annual net savings—about $1.7 billion after a 3-year phase-in period—as it noted that higher revenue losses were possible. 
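The three savings initiatives in the 5-year plan account for the full $22.5 billion annual goal, as a simple tally confirms (a sketch using only the dollar figures reported in the plan):

```python
# Tally of the savings initiatives in USPS's 5-year business plan,
# using only the dollar figures reported above (billions of dollars).
initiatives = {
    "network operations": 9.0,        # includes $4B from network consolidation
    "compensation and benefits": 5.0,
    "legislative changes": 8.5,       # e.g., 5-day delivery, retiree health funding
}

total = sum(initiatives.values())
print(f"Total annual savings goal: ${total} billion")  # matches the stated $22.5 billion
```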
In February 2012, USPS updated its projected net savings to $2.7 billion after a 3-year implementation period. Implementing 5-day delivery would require USPS to realign its operations network to increase efficiency, maintain service, and address operational issues. Some business mailers have expressed concern that reducing processing facilities as a result of eliminating overnight delivery service could increase costs for business mailers who will have to travel farther to drop off their mail. In addition, business mailers have expressed concern that service could decline as USPS plans to close an unprecedented number of processing facilities in a short period. USPS employee associations have said that the proposed changes would reduce mail volume and revenue, thus making USPS’s financial condition worse. Business mailers have commented that such a change in delivery service standards and postal facility locations could shift mail processing costs to them and reduce the value of mail for their businesses. While many of USPS’s customers who are business mailers indicated they would be willing to accept the service standard changes and understood the need for such a change, several mailers noted that it is never good when an organization reduces services. As a result of USPS’s plan, businesses using bulk First-Class Mail, Standard Mail, or Periodicals may have fewer locations where mail can be entered and may therefore need to transport it to locations different from those now in use. Furthermore, businesses using Standard Mail may have to transport their bulk mail to other locations to take advantage of discounts. USPS officials told business mailers in February 2012, when it announced the facilities it planned to close, that it did not plan immediate changes to the locations where business mailers drop off their mail or to the associated discounts. 
USPS officials told us that they plan to retain business mail entry points at their current locations or in close proximity. Additionally, businesses that publish Periodicals, like daily or weekly news magazines, have expressed concern that eliminating overnight delivery would result in deliveries not being made in a timely fashion. Delivery delays could result in customers canceling their subscriptions, thereby reducing the value of mail. These business mailers have indicated that they will most likely accelerate shifting their hard copy mail to electronic communications or otherwise stop using USPS if it is unable to provide reliable service as a result of these changes. Business mailers have also stated their concern that service could be significantly disrupted as a result of closing an unprecedented number of processing facilities by 2016. If service declines, mail users stated they are likely to lose confidence in the medium and choose to move volume and revenue from the mail to other media. Business mailers have stressed the need for USPS to put forward and share with stakeholders a comprehensive, detailed plan for consolidating its network and changing service standards that explains to mail users what it intends to do, what changes will occur, and the milestones and timelines for measuring progress in achieving its plans. In sum, a key message from USPS customers is that while many support efforts to consolidate the mail processing network, it is imperative for USPS to provide consistent mail delivery and work with mailers to keep their costs down. Employee associations have expressed concern that USPS’s proposed changes may result in even greater losses in mail volume and revenue, which would further harm USPS financially. The National Association of Letter Carriers commented that downgrading service would serve only to drive customers away, reduce revenue, and compromise potential growth. 
Further, the American Postal Workers Union and the National Rural Letter Carriers’ Association commented that USPS’s proposal would degrade existing USPS products, limit USPS’s ability to introduce new products, place the USPS at a distinct competitive disadvantage, and severely hamper its ability to accommodate growth. USPS responded to these comments by acknowledging that its proposal would, to some degree, reduce the value of the mail to customers, but on balance is in the long-term interests of USPS to help maintain its viability for all customers into the future. USPS estimated that its proposal would result in additional volume decline of almost 2 percent and revenue decline of about $1.3 billion, with a net annual benefit of about $2 billion. USPS faces major challenges in two areas related to consolidating its mail processing network and has told Congress that it needs legislative action to address them. Specifically, these challenges include the following:

Lack of flexibility to consolidate its workforce: USPS stated it must be able to reduce the size of its workforce in order to ensure that its costs are less than revenue. Action in this area is important since USPS’s workforce accounts for about 80 percent of its costs. The Postmaster General testified last September, however, that current collective bargaining agreements prevent USPS from moving swiftly enough to achieve its planned workforce reductions. In addition, USPS has requested legislative action to eliminate the layoff protections in its collective bargaining agreements. The key challenges in this area include the following:

No-layoff clauses: About 85 percent of USPS’s 557,000 employees are covered by collective bargaining agreements that contain, among other provisions, employment protections such as no-layoff provisions. 
Currently, USPS’s collective bargaining agreements with three of its major unions contain a provision stating that postal bargaining unit employees who were employed as of September 15, 1978, or, if hired after that date, have completed 6 years of continuous service are protected against any involuntary layoff or reduction in force. Furthermore, USPS’s memorandum of understanding with the American Postal Workers Union extends this no-layoff protection to cover those employed as of November 20, 2010—even if those employees were not otherwise eligible for no-layoff protection. The collective bargaining agreement with its fourth major union—the National Rural Letter Carriers’ Association—states that no bargaining unit employees employed in the career workforce will be laid off on an involuntary basis during the period of the agreement. The no-layoff clauses will be a challenge to USPS primarily if it cannot achieve its workforce reductions through attrition. With the large number of employees eligible for retirement, USPS has a window of opportunity to avoid layoffs of non-bargaining unit employees who are not eligible for no-layoff protection.

Fifty-mile limits on employee transfers: In 2011, the American Postal Workers Union (which represents USPS clerks, maintenance employees, and motor vehicle service workers) and USPS management negotiated a 4-year agreement that limits transferring employees of an installation or craft to no more than 50 miles away. If USPS management cannot place employees within 50 miles, the parties are to jointly determine what steps may be taken, which includes putting postal employees on “standby,” which occurs when workers are idled but paid their full salary because of reassignments and reorganization efforts. 
USPS may face challenges in capturing cost savings as a result of its initiatives to reduce excess capacity because of its limited ability to move mail processing clerks from a facility where workloads no longer support the number of clerk positions needed to facilities with vacant positions. Collective bargaining agreements have expired for three of the four major postal unions, and because of impasses in negotiations, USPS has moved to arbitration with these unions. In 2011, USPS reported that it had no assurance that it would be able to negotiate collective bargaining agreements with its unions that would result in a cost structure that is sustainable within current and projected future mail revenue levels. It noted that there is no current mandate requiring an arbitrator to consider the financial health of USPS in its decision and that an unfavorable arbitration decision could have significant adverse consequences on its ability to meet future financial obligations.

Resistance to facility closures: USPS is facing resistance to its plans to consolidate or close postal facilities from Members of Congress, affected communities, and its employees and has requested congressional action to enable it to consolidate and close facilities. We reviewed numerous comments from Members of Congress, affected communities, and employee organizations that have expressed opposition to closing facilities. Such concerns are particularly heightened for postal facilities identified for closure that may consolidate functions to another state, causing political leaders to oppose and potentially prevent such consolidations. For example, Members of Congress have resisted a recent proposal to move certain processing functions from its Rockford, Illinois, Processing and Distribution Center to a processing facility in Madison, Wisconsin. 
This proposal would eliminate the need for 82 employees (77 bargaining unit and 5 management staff) in Rockford that USPS would need to transfer into new roles or to another facility. The president of the Springfield Chamber of Commerce sent a letter to PRC to protest USPS’s planned consolidation of the Springfield, Illinois, processing facility into St. Louis, Missouri, stating that this move would reduce service quality and increase costs, affecting its members’ profitability and operations. He added that Springfield would lose up to 300 jobs in an area of the community that qualifies as an “Area of Greatest Need,” according to the U.S. Department of Housing and Urban Development. In contrast, other business mailers and Members of Congress have expressed support for consolidating the mail processing network to reduce costs. Some business mailers have stated that USPS needs to take cost-saving action to reduce the need for significant postal rate increases. A significant rate increase would have a detrimental financial impact on mailers by decreasing mail’s return on investment and may also accelerate mailers’ shift toward electronic communication. In addition, as we discuss below, some Members of Congress have proposed legislation supporting USPS efforts to consolidate its mail processing network. Other stakeholders, including USPS’s employee associations, have questioned whether USPS needs to make drastic changes by reducing service and the size of its networks and workforce, since they believe that USPS’s financial crisis is, at least in part, artificial. They point out that most of USPS’s losses since fiscal year 2006 are due to the requirement to prefund its future retiree health benefits. In 2006, PAEA established a 10-year schedule of USPS payments into a fund (the Postal Service Retiree Health Benefits Fund) that averaged $5.6 billion per year through fiscal year 2016. 
Employee associations have stated that such a requirement is exceptional and unfair, since no other federal agency is forced to prefund its employees’ health benefits at this level and no company has such a mandate. They have suggested that instead of reducing costs, Congress should eliminate the prefunding requirements, return surpluses in its retirement accounts, and allow USPS to earn additional revenue by offering new services. USPS responded that given the multibillion-dollar deficits that it has experienced in each of the last 5 years, and given the over $14 billion loss it expects in fiscal year 2012, capturing cost savings wherever possible will be vital to USPS’s financial viability. If USPS cannot increase revenues enough to eliminate its net losses, it will have to do more to reduce costs. To address USPS prefunding issues, we testified that deferring some prefunding of USPS’s retiree health benefits would serve as short-term fiscal relief. However, deferrals also increase the risk that USPS will not be able to make future payments as its core business declines. Therefore, we concluded that it is important for USPS to continue funding its retiree health benefit obligations—including prefunding these obligations—to the maximum extent that its finances permit. USPS has stated that it needs action from Congress to address restrictions that limit its ability to consolidate its mail processing network, including annual appropriations provisions that mandate 6-day delivery, and to grant USPS authority to determine delivery frequency. Some Members have asked USPS to postpone actions to consolidate mail processing facilities so it would not preempt Congress on postal reform. In response to the Members’ request, USPS agreed last December to place a moratorium on closing facilities until May 15, 2012. As of April 2012, the House of Representatives and Senate committees with USPS oversight responsibility have passed bills to help USPS achieve financial viability. 
These bills, as well as other postal reform bills, include provisions that could affect USPS’s ability to consolidate its mail processing network. Table 4 summarizes the key provisions of the House of Representatives bill—H.R. 2309, the Postal Reform Act of 2011—and Senate bill—S. 1789, the 21st Century Postal Service Act. Pending legislation originating in the Senate (S. 1789) includes provisions that would affect USPS’s ability to consolidate its networks by delaying USPS’s move to 5-day delivery by 2 years and requiring USPS to consider downsizing rather than closing facilities. Delaying USPS’s move to a 5-day delivery schedule could make it difficult for USPS to save $22.5 billion by 2016. On the other hand, the Senate bill includes a requirement for arbitrators to consider USPS’s financial condition and could facilitate attrition by allowing USPS to use surplus pension funds to pay for employee buyouts of up to $25,000 for as many as 100,000 eligible postal workers. Such buyouts may make it easier to reduce USPS’s workforce in facilities targeted for closure. Another legislative proposal originating in the House of Representatives (H.R. 2309) includes provisions that would enhance USPS’s ability to consolidate its mail processing network by allowing changes in service standards and using a BRAC framework to approve a consolidation plan, address some of the political resistance to closing postal facilities, and potentially reform the collective bargaining process. The proposed Commission on Postal Reorganization could broaden the current focus on individual facility closures—which are often contentious, time-consuming, and inefficient—to a broader networkwide restructuring, similar to the BRAC approach. In other restructuring efforts where this approach has been used, expert panels have successfully informed and permitted difficult restructuring decisions, helping to provide consensus on intractable decisions. 
As previously noted, the 2003 Report of the President’s Commission on the USPS also recommended such an approach for the consolidation and rationalization of USPS’s mail processing and distribution infrastructure. We also reported in 2010 that Congress may want to consider this approach to assist in restructuring organizations that are facing key financial challenges. In addition, the House bill would authorize USPS to declare up to 12 non-mail-delivery days annually for as long as USPS is required to deliver mail 6 days per week, and it would reform the collective bargaining process, including by requiring arbitrators to consider USPS’s financial condition.

Developing an optimal mail processing network will require both congressional support and USPS leadership. Moreover, we have previously reported that Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS’s financial viability. In these previous reports, we provided strategies and options that Congress could consider to better align USPS’s costs with revenues and to address constraints and legal restrictions that limit USPS’s ability to reduce costs and improve efficiency. Consequently, we are not making new recommendations or presenting a matter for Congress to consider at this time.

Without congressional action to help USPS address its financial problems, USPS will be limited in the amount of rate increases it may seek and may fall even further into debt; USPS had $2 billion remaining on its $15 billion statutory borrowing limit at the end of fiscal year 2011. It is now abundantly clear that the postal business model must be fixed, given the dramatic decline in volume, both actual and projected, particularly for First-Class Mail. If Congress prefers to retain the current delivery service standards and associated network, decisions will be needed about how USPS’s costs for providing these services will be paid, including through additional cost reductions or revenue sources.
We provided a draft of this report to USPS for review and comment. USPS had no comments but provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix II.

This report addresses (1) past actions the U.S. Postal Service (USPS) has taken to reduce excess capacity, (2) USPS’s plans to consolidate its mail processing network, and (3) key stakeholder issues and challenges USPS faces in consolidating its mail processing network. To describe what actions USPS has taken to reduce excess capacity, we obtained data from USPS related to changes in its mail processing network, workforce, and costs, as well as an updated 10-year volume forecast for First-Class Mail. To calculate the 5-year cost savings that USPS achieved, we took the difference between the network costs for fiscal years 2006 and 2011 that USPS reported to us. We also obtained data from USPS and USPS Office of Inspector General (OIG) reports regarding cost savings related to USPS initiatives to reduce excess capacity. Further, we reviewed USPS’s annual reports to Congress and the network plans that section 302 of the Postal Accountability and Enhancement Act of 2006 requires USPS to submit, as well as related GAO and USPS OIG reports and other relevant studies on reducing excess capacity in USPS’s mail processing network.
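The 5-year savings figure described in the methodology is a simple difference of reported network costs between the two fiscal years. The sketch below illustrates that calculation; the dollar figures are hypothetical placeholders, not USPS's actual reported network costs.

```python
# Minimal sketch of the 5-year cost-savings calculation described above:
# savings equal the network costs in the base fiscal year minus the network
# costs in the end fiscal year. The cost figures below are hypothetical
# placeholders, not USPS's reported data.

def five_year_savings(cost_base_year: float, cost_end_year: float) -> float:
    """Return the decline in network costs between two fiscal years."""
    return cost_base_year - cost_end_year

# Hypothetical network costs, in billions of dollars
fy2006_cost = 10.0  # hypothetical
fy2011_cost = 7.6   # hypothetical
print(round(five_year_savings(fy2006_cost, fy2011_cost), 1))  # prints 2.4
```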
To examine USPS’s future plans to consolidate its mail processing network, we reviewed USPS’s December 2011 proposal to change delivery service standards and its plan to consolidate its mail processing network by reducing facilities, staff, equipment, and transportation resources. We also reviewed USPS’s 5-year business plan to profitability, issued in February 2012. We interviewed USPS senior management and local facility managers in Illinois about the current processing network and future plans for that network. We also reviewed documents in the ongoing Postal Regulatory Commission (PRC) review of USPS’s proposed changes in service standards and its plan for consolidating its mail processing network. PRC is reviewing USPS’s estimated cost savings, service impacts, and public input on the proposed service standard changes and expects to complete its review sometime after July 2012.

To determine key issues and challenges USPS faces in consolidating its mail processing network, we reviewed and summarized concerns from postal stakeholders responding to USPS’s September 2011 Federal Register notice on its proposed changes to service standards for First-Class Mail, Periodicals, and Standard Mail. We also interviewed USPS officials and reviewed stakeholder testimonies and published letters from Members of Congress commenting on USPS’s plans to change delivery service standards and close facilities. We further reviewed pending legislative proposals that could affect USPS’s efforts to address excess capacity and consolidate its mail processing network.

We conducted this performance audit from April 2011 through April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Teresa Anderson (Assistant Director), Samer Abbas, Joshua Bartzen, Erin R. Cohen, Sara Ann Moessbauer, Amy Rosewarne, and Crystal Wesco made key contributions to this report.
Since 2006, the U.S. Postal Service has taken actions to reduce its excess capacity. These actions have made progress toward consolidating the mail processing network to increase efficiency and reduce costs while meeting delivery standards. However, since 2006, the gap between USPS expenses and revenues has grown significantly. In February 2012, USPS projected that its net losses would reach $21 billion by 2016. As requested, this report addresses (1) actions USPS has taken since 2006 to reduce excess capacity in facilities, staff, equipment, and transportation; (2) USPS’s plans to consolidate its mail processing network; and (3) key stakeholder issues and challenges related to USPS’s plans. GAO reviewed relevant documents and data, interviewed USPS officials, reviewed proposed legislation, and reviewed stakeholder comments on USPS’s plans for changing delivery service standards.

Since 2006, the U.S. Postal Service (USPS) has closed redundant facilities and consolidated mail processing operations and transportation to reduce excess capacity in its network, resulting in reported cost savings of about $2.4 billion. Excess capacity remains, however, because of continuing and accelerating declines in First-Class Mail volume, automation improvements that sort mail faster and more efficiently, and increasing mail preparation and transportation by business mailers, much of whose mail now bypasses most of USPS’s processing network. In December 2011, USPS issued a proposal for consolidating its mail processing network, which is based on proposed changes to overnight delivery service standards for First-Class Mail and Periodicals. Consolidating its network is one of several initiatives, including moving from a 6-day to a 5-day delivery schedule and reducing compensation and benefits, that USPS has proposed to meet a savings goal of $22.5 billion by 2016.
This goal includes saving $4 billion by consolidating its mail processing and transportation network and reducing excess capacity as indicated in the table below. The Postal Regulatory Commission is currently reviewing USPS’s proposal to change delivery service standards. Stakeholder issues and other challenges could prevent USPS from implementing its plan for consolidating its mail processing network or achieving its cost savings goals. Although some business mailers and Members of Congress have expressed support for consolidating mail processing facilities, other mailers, Members of Congress, affected communities, and employee organizations have raised issues. Key issues raised by business mailers are that closing facilities could increase their transportation costs and decrease service. Employee associations are concerned that reducing service could result in a greater loss of mail volume and revenue that could worsen USPS’s financial condition. USPS has said that given its huge deficits, capturing cost savings wherever possible will be vital. USPS has asked Congress to address its challenges, and Congress is considering legislation that would include different approaches to addressing USPS’s financial problems. A bill originating in the Senate provides for employee buyouts but delays moving to 5-day delivery, while a House bill creates a commission to make operational decisions such as facility closures and permits USPS to reduce delivery days. If Congress prefers to retain the current delivery service standards and associated network, decisions will need to be made about how USPS’s costs for providing these services will be paid, including additional cost reductions or revenue sources. GAO is not making new recommendations in this report, as it has previously reported to Congress on the urgent need for a comprehensive package of actions to improve USPS’s financial viability and has provided Congress with strategies and options to consider. 
USPS had no comments on a draft of this report.
The Treasury began its technical assistance program in 1990 in Central Europe and later expanded it to include the countries of the former Soviet Union. The Treasury receives funds from USAID to pay for the cost of advisors. For fiscal years 1990-98, USAID transferred about $134.2 million to the Treasury to support the Treasury program. As of September 1998, the Treasury had 39 resident advisors covering 13 countries in Central Europe and the former Soviet Union. Regional advisors in Budapest, Hungary, are technical experts who assist where they are needed. In addition, the Treasury uses short-term advisors based in the United States.

The overall strategy of the Treasury advisor program is to provide direct, technical advice to senior host-government officials on the development of laws, administrative procedures, and institutions to promote fiscal stability, efficient resource allocation, transparent and democratic processes, and private-sector growth. The Treasury tries to obtain advisors who are experts in each of its five program areas. The program areas are (1) tax policy and administration, which is to help countries establish tax systems that are fair and objective and that generate necessary revenues for government operations; (2) financial institutions, policy, and regulation, which is to develop policies and activities relating to the privatization of state-owned commercial banks and to improve their management; (3) budget policy, formulation, and execution, which is intended to help strengthen ministries of finance by introducing modern budget processes; (4) government debt issuance and management, which is to provide advice to host-government officials on developing markets for the sale of government securities; and (5) law enforcement, which is to enhance the government’s enforcement capabilities to address crimes that can undermine privatization, developing financial systems, and other economic reforms.
The first four areas are each managed by an associate director in Washington, D.C., and a senior advisor in Budapest. The fifth program area is managed by a senior advisor based in Washington, D.C. The Treasury plans worldwide expansion of its advisor program. In the Omnibus Appropriations Act of 1999 (P.L. 105-277), Congress provided $1.5 million to the Treasury for its own advisor programs in countries outside Central Europe and the former Soviet Union. The programs in Central Europe and the former Soviet Union will continue to be funded through transfers from USAID. Treasury advisors’ contributions have ranged from the provision of policy advice on major initiatives, such as reform of a tax code or a budget system, to discrete projects such as the development of comparative analyses of how selected countries restructured insolvent banks, to the completion of a model to forecast gross domestic product (GDP). For the most part, Russian and Romanian officials complimented the Treasury advisors’ work. The following paragraphs contain a description, by country, of the Treasury advisors’ activities in Russia and Romania, including the Treasury program areas and specific activities and tasks. In Russia, OTA is providing technical assistance in three program areas—tax policy and administration, financial institutions, and government budgets. The development of a sound tax system to support economic initiatives and reforms in Russia is a high U.S. government priority. According to OTA’s Associate Director, Tax Advisory Program, the advisors’ most concerted efforts were in the preparation of draft tax legislation to reform the Russian tax code. 
In the tax policy and administration area, the Treasury advisors have helped analyze a proposed Russian tax code and have prepared memos for Russian officials on issues that included tax compliance, revenue forecasting estimates, property and business taxation, depreciation and investment issues, and the formulation of a Russian-Cyprus tax treaty. As a general rule, advisors are not physically located within government ministries in Russia. Although one advisor told us that he did not have direct, daily contact with host-government officials and had received little feedback on how useful his work had been, the Russian Vice Minister of Finance told us that overall the Treasury’s tax advisors have been helpful to the State Tax Service and were completing a tax model for the Ministry. In addition, the Deputy Minister of Taxation noted that U.S. advisors from the Treasury and USAID were helpful in providing advice and assistance in all areas of tax aid. At the time of our work, the Russian government had not approved major tax reforms. OTA’s Associate Director, Tax Advisory Program, told us he believed that the advisors should continue providing tax assistance because tax reform is critical to Russia’s economic development.

The financial institutions advisor in Russia was providing assistance on several banking issues, especially on how to deal with banking insolvencies, a widespread problem in Russia. Since the first financial institutions advisor was sent to Russia in December 1997, the objectives have evolved. Initially, the advisor was to work with the bank rehabilitation department of the Central Bank and focus on (1) identifying problems in the banking system, (2) providing recommendations for an early warning system regarding banks in financial trouble, and (3) making recommendations for dealing with banking problems before they reached a crisis stage.
His Russian counterpart’s superior at the Central Bank was removed from office 2 months later, and the replacement official wanted assistance in other areas. Under the new official, the advisor was to (1) advise/prepare drafts of federal laws, programs, and regulations on bank restructuring, rehabilitation of problem banks, and liquidation of insolvent banks; (2) advise on establishing and managing a deposit guaranty system; and (3) advise and assist on training staff in bank rehabilitation and restructuring. At the time of our visit, the advisor was completing a comparative analysis of how 12 countries restructured their insolvent banks, to identify “lessons learned” for the Russian Central Bank. A Russian Central Bank official said he was satisfied with the assistance being provided by the Treasury advisor. This official told us that the Bank recognizes that the Treasury had a wealth of information on banking in other countries, that relations were good with the Treasury advisor, and the Bank has benefited from Treasury seminars, including one on the U.S. Resolution Trust Corporation. He said he and the advisor normally meet once a week and that there have been no Bank requests for assistance to which the Treasury advisor has not responded. The three budget advisors in Russia have primarily focused on devising statistical measures that provide data for budget formulation. A resident advisor who is a macroeconomist had spent most of his time developing a model to forecast GDP. Another advisor developed a consumer sentiment index to measure the attitudes and expectations of the Russian consumer, which provides indicators of potential economic growth and is relevant to budget planning. This particular budget advisor’s task at the time of our fieldwork was to analyze and monitor the savings behavior of the population for use by the Ministry of Finance. 
A third budget advisor prepared a study on standards of living in Russia that was discussed in hearings by the budget committee of the upper house of the parliament. The advisor is now completing a study for the budget committee on the relationship between standards of living and the allocation of federal funds among the regions. The Russian Vice Minister of Finance praised the assistance of the macroeconomist and told us that he uses the GDP model on a daily basis. The Vice Minister was highly complimentary of the advisors’ work on the consumer sentiment index. Also, the Chief of Staff of the parliament budget committee said the advisor’s report on standards of living in Russia was appreciated by the committee and by regional authorities throughout Russia. In Romania, OTA is providing assistance in four program areas—tax policy and administration, budgets, government debt issuance, and law enforcement. OTA suspended advisors’ work on financial institutions because it concluded that the government was not ready to move forward on reforms. In Romania, the tax advisor has provided assistance on ways to improve the Romanian tax system. The tax advisor had prepared a plan for a new national tax administration system that included recommendations for restructuring the lines of authority over tax administration at both the central government and local levels; identified the need for standard manuals, procedures, and processes for tax administration activities; and suggested the creation of a taxpayer service section to assist and educate the public and a training academy that would use a standard curriculum to teach procedures and methodologies of functions such as investigation and collection to tax administration staff. According to a cognizant official, the Ministry was undecided whether certain components of the proposed plan would be fully implemented. The tax advisor also has been helping Ministry staff prepare manuals for large audits and collections. 
In addition, she has provided comments on a proposed income tax law. The advisor told us that she spends about half of her time on her main assignment—the tax administration reorganization project—with the remaining time spent providing advice and assistance in other areas. For example, she told us that she has both arranged and taught courses for the Ministry’s tax controls department, presented a general management practices seminar, and was assisting in the development of a forms design workshop. She also organized study tours for Ministry staff to observe how tax administration is done in other countries, facilitated the participation of Ministry of Finance tax police in some of the Treasury’s law enforcement assistance training, and advised Ministry staff on assistance they could seek from other sources. The tax advisor, along with the Treasury’s budget advisor, was asked by the Finance Minister in 1998 to help prepare a reorganization plan for the entire Ministry of Finance. Their advice was used to develop a plan to streamline the management structure and eliminate some management positions that had been filled by political appointees. However, before more of the plan could be implemented, the Minister was replaced and the reorganization plan was suspended. The advisor told us that the new Minister did not accept the proposed changes and asked her to develop a new reorganization plan. A Ministry of Finance official, who was the liaison between the Treasury advisors and the Ministry officials, said he was very pleased with the advisor’s work. Also, the Romanian Ministry of Finance official in charge of information systems told us he was very pleased with the management seminar that the tax advisor had prepared for his staff. In addition, he noted that the Treasury advisors have been working with his staff for a longer period of time than advisors from other organizations and that there was continuity to their work. 
Furthermore, he said the advice provided by the Treasury on discrete projects, such as specific training courses and assistance with adaptation of new technology for tax administration, was very beneficial to the Ministry. In Romania, the budget advisor is assisting in the phasing in of a new budget system that was adopted by the parliament and is expected to be fully implemented by the year 2000. The advisor helped develop a new budget format, which presents program objectives, desired outcomes, and program costs and provides a clearer view of government spending and program results. The budget advisor told us that he has given formal presentations and informal consultations to budget officials in ministries throughout the Romanian government, explaining the details of the new budget process. The advisor has also reviewed drafts of a local public finance law that could extend performance budgeting to local jurisdictions. The advisor has also been consulted on issues such as the impact of debt on the budget; alternative funding sources for public education, health, and cultural programs; reorganization of the Ministry of Finance; and a proposed food stamp program. In addition, he has participated in budget-related training sessions, coordinated a study tour of Romanian officials to the United States, and obtained short-term assistance from other U.S. experts on performance budgets and alternative funding sources to reduce the need for future government expenditures such as financing for cultural programs. Romanian Ministry of Finance officials told us that budget reform is a very high priority for the Ministry and the budget advisor has played an integral role in its efforts. A Ministry official who worked on the education budget said that it was very helpful to have the budget advisor accompany her on visits to the Ministry of Education to discuss the new budget format. 
The General Director for the State Budget said that the advisor was also helpful in providing officials at ministries with information on how health and education programs were financed in other countries, which was of interest to officials at ministries who are seeking alternatives to government funding of programs in light of anticipated future budget shortfalls. The Executive Secretary for the State Budget said that the budget advisor, along with the other Treasury advisors, has provided valuable comments on a pending local public finance law. The Treasury advisor in government debt issuance has worked in Romania since 1996. The advisor coordinated the work of several Romanian agencies involved in issuing government securities, helped develop securities markets, and provided advice on the legal framework that would be required for further development. The Treasury advisor helped facilitate cooperation between the Ministry of Finance and the Central Bank to develop procedures for the issuance of government securities. At the start of the advisor’s tenure, Romania had a rudimentary primary market, no secondary market, and a poorly functioning auction system. The advisor’s technical assistance facilitated a host-government decision that resulted in a strengthened primary market and the ancillary auction system. He also advised the Central Bank on regulations that would be needed to establish a secondary securities market and conducted training for Ministry and Bank staff on the functioning of markets and auctions. The resident and regional advisors provided advice to the Romanian government on the timing of its entry into international financial markets and helped Romania enter the Eurobond market for the first time. In addition, a short-term Treasury advisor helped the government improve its central clearing house and registration system for government securities. 
The Romanian Ministry of Finance official in charge of the domestic public debt said that the resident advisor in Romania and the regional advisor from Budapest, an expert on international financing who came to Bucharest to provide advice on Eurobonds, taught her step by step how to issue an international bond. She also credited the resident advisor with helping build a communication bridge between the Central Bank and the Ministry of Finance, a relationship that had not previously been strong but is crucial since both play a role in the issuance of government securities. She emphasized that reforms in government debt issuance benefited extensively from having the resident advisor located in the Ministry and available to answer questions and provide advice as needed.

During 1998, temporary advisors helped reform the bank fraud enforcement efforts of the Romanian General Prosecutor’s office by helping to coordinate its activities with the work of the Ministry of Interior. The advisors created workshops for reviewing cases and focusing on issues of evidence and the use of technology. The Deputy Director in the General Prosecutor’s office said that many of the bank fraud cases were hard to prosecute, since the prosecutors had little experience with such cases. He also said that the workshops held by the Treasury advisors helped the prosecutors and investigators understand the complexities of each other’s work and would enable them to handle cases more effectively.

At the time of our review, the Treasury’s financial institutions advisory program was suspended in Romania. A previous resident financial institutions advisor in Romania had helped provide technical advice on the privatization of one of the state banks. However, after 2 years of effort, the Romanian government was not ready to undertake privatization of the bank to which he was assigned.
For example, he advised the bank to increase its capitalization and dilute government ownership by selling some bank shares to private investors. The bank did issue certificates of deposit but chose to sell most of its bank shares to other government entities instead of the private sector. The advisor indicated that there did not seem to be a firm commitment by host-government officials to privatization and that some officials were not willing to accept his advice. A second financial institutions advisor was sent to Romania from September 1995 to mid-1996 and was involved in assisting in the privatization of other state banks and reviewing drafts of a bank privatization law. The Romanian government showed little interest in privatizing its state banks, so OTA officials told us they suspended the financial institutions program in Romania. OTA indicated that the Treasury may attempt to send another advisor if OTA is convinced that the government of Romania shows greater commitment to privatizing its state banks. The advisor program has been carried out with little formal structure. OTA has few written policies and procedures specifying oversight requirements. Advisors generally filed monthly reports as required, but the contents of the reports varied in the level of detail on program progress. We noted that other documents that OTA officials told us they used for program oversight, such as country agreements, work plans, and reports of supervisor visits, were not available. For example, initially we requested copies of documents used for oversight for a sample of five countries. OTA told us it would have difficulty in assembling these documents in a timely manner, and we narrowed our request to two countries—Russia and Romania. After 4 months, OTA notified us that it could not locate a significant number of the documents we requested. OTA’s oversight of its advisor financial disclosure reporting requirements was also lax. 
OTA officials told us they frequently used informal means, such as electronic mail and telephone communication, to oversee the work of resident advisors. OTA’s Employee Handbook, issued in March 1997, requires that monthly reports from all full-time employees stationed abroad include project highlights and project accomplishments for each objective in the advisor’s work plan. OTA associate directors told us that these reports are used as the principal means of monitoring the resident advisors’ activities. Although we identified one advisor who failed to file 12 monthly reports, advisors had filed most of the required reports. However, the content of the reports varied. In some cases, reports included detailed discussions that linked the advisor’s work to the objectives and strategies described in the work plan, when one had been completed, and outlined progress made to date along with timetables for ongoing and future work. In contrast, other reports were brief statements of the status of work without reference to the work plan and frequently repeated verbatim the prior month’s reporting. For example, one advisor’s report consisted of nine lines that outlined in very general terms what he did that month.

The Deputy Assistant Secretary for Technical Assistance Policy told us that formal agreements setting forth the understanding between the Treasury and the host government on the specific technical assistance to be provided were to be prepared for each advisor. He said the formalization of these agreements is needed to ensure that the host government is committed to working with the advisor and willing to accept advice. However, we found that few resident advisors in Russia and Romania had signed letters of agreement with their host government. We could not locate signed host-government agreements for 15 of the 19 resident advisors that have been assigned to Russia and Romania since 1992.
OTA officials told us that resident advisor work plans were to be prepared by each advisor. In some cases, the requirement to do work plans was included in the advisor contract. OTA officials said the plans are an important element in overseeing their activities. The work plan serves to (1) lay out the objectives and strategies of what the Treasury intends to accomplish, (2) specify how advisors will accomplish these objectives and strategies, and (3) provide a means to hold advisors accountable for their work. OTA could not locate work plans for 13 of 19 advisors assigned to Russia and Romania since 1992. According to OTA officials, supervisory officials, such as senior advisors and associate directors, are also expected to prepare written reports on program progress and accomplishments and the status of resident advisors’ activities. In particular, supervisors were expected to file written trip reports whenever they visited the resident advisors. We found that supervisory officials were not consistently reporting on their work to senior management. Supervisory reports were prepared for only 9 of 33 trips to Russia and/or Romania over a 2-year period covering May 1996 to May 1998. One senior advisor did not file any reports over a 16-month period during which he made 13 supervisory trips to countries under his responsibility. Further, the content of the available reports did not always parallel the stated purposes of the trips, and few reports discussed progress being made by resident advisors. For example, only two of the nine reports commented in detail on a resident advisor’s work. According to the former OTA Director, the absence of documentation on country agreements, advisor work plans, and supervisory reports did not adversely affect program oversight. 
He believed that information on program activities was communicated to the right people and that senior OTA officials were kept abreast of program status through telephone conversations, electronic messages, and twice-yearly programwide conferences. He also noted that OTA had not established a structured approach to oversight requirements because the program was relatively small. He indicated that, with the pending expansion of the program, OTA should consider establishing more written policies and procedures outlining these requirements. Determining how much formal structure is necessary to assure accountability for funds is not easy. The Office of Management and Budget (OMB) and our guide on internal controls provide some guidance. According to these documents, written policies and procedures, manuals, and other related materials are necessary to describe and communicate the responsibilities and authorities of management and staff, organizational structure, operating procedures, and administrative practices. It is not clear whether OTA’s current approach is consistent with these basic guidelines.

OTA requires most of its advisors to file annual financial disclosure reports by October 31 of each year, detailing their financial interests. The reports are used to help determine whether a potential conflict of interest exists, since advisors may be in a position to render advice in an area where they may have a financial interest. OTA management is required to review the statements for potential conflicts of interest within 60 days of filing and, if necessary, resolve conflicts. For 1997, OTA extended the filing deadline until November 30 because some advisors had received their filing paperwork late. OTA has not enforced compliance with these requirements. For 1997, we found that only 18 of 73 advisors had filed statements by the extended deadline.
In response to our questions on this issue, on March 10 and April 2, 1998, OTA sent memos to the advisors who had not filed reports and requested that the advisors comply with office requirements. Eventually, an additional 40 reports were filed. The Treasury’s Deputy Assistant General Counsel said that, to improve OTA oversight, her office held a training session for OTA in April 1998 on the rules and regulations of federal financial disclosure procedures.

Treasury advisors in Romania and Russia have provided advice on a variety of economic reform efforts. The nature of advice being provided has varied from addressing broad policy and operational issues to handling discrete projects such as devising economic forecasting models. Host-government officials told us they found the Treasury advice to be beneficial. OTA has largely relied upon informal mechanisms to maintain program oversight. However, as OTA expands its assistance to other countries, we believe it should develop a more formal set of policies and procedures for conducting program oversight. It also needs to enforce these requirements. Although we did not identify an adverse effect from the current oversight approach exercised by OTA, we believe that a more structured approach to program oversight and accountability, particularly in light of the program’s pending expansion, is needed to provide reasonable assurance that the government’s interest is protected and to provide an institutional memory that could be the basis for future programmatic decisions.

Thus, we recommend that the Secretary of the Treasury establish formal requirements and procedures that clearly state advisor responsibilities and take steps to ensure compliance with these requirements. In its written comments, the Department of the Treasury agreed with the thrust of this report and cited several actions that will be considered to improve program oversight.
The Treasury did, however, raise concerns about the methodology we used to compare OTA’s advisor costs to USAID’s advisor costs. The Treasury cited a State Department Office of the Inspector General report that concluded that the costs were similar and questioned whether certain OTA costs were appropriately included in our analysis. The Treasury also expressed concern that our draft report did not sufficiently capture its financial disclosure report practices. We believe that our analysis of Treasury and USAID advisor costs is fair and that the cost elements that we included are appropriate. We have added additional information clarifying our methodology. We have also added information regarding the process the Treasury uses to address potential conflicts of interest beyond the annual financial disclosure statements that we analyzed (see app. V for the Treasury’s comments and our response to them).

We are providing copies of this report to the Chairman and Ranking Minority Members of the House and Senate Committees on Appropriations, the Senate Committee on Foreign Relations, and the Ranking Minority Member of the House Committee on International Relations. We are also sending copies to interested congressional committees and to the Secretaries of the Treasury and State; the Administrator, U.S. Agency for International Development; and the Director, Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VII.

The Treasury’s Office of Technical Assistance (OTA) requires general personnel qualifications for the advisors in its program.
These advisor qualifications include a combination of expertise in one of five program areas (tax policy and administration; financial institutions, policy, and regulation; budget policy, formulation, and execution; government debt issuance and management; and law enforcement), significant prior senior-level experience, and relevant educational backgrounds. Additionally, foreign language expertise and prior overseas work experience were indicated as desirable. To determine whether the advisors appear to have the indicated job qualifications, we reviewed the advisors’ application information and a sample of 24 job advertisements. We attempted to compare the application material presented by the applicants in U.S. government Standard Form 171, Optional Form 612, or résumés to the qualifications listed in the job announcements. Since the Treasury did not retain copies of all of the advertisements it used, we were not able to match specific applications with the specific advertisements that the applicants were answering. However, an OTA official said that the advertisements were generally the same. Our analysis shows that the advisors have an average of 19 years of experience in their fields and have held positions of responsibility such as Deputy Assistant Commissioner of the Internal Revenue Service, bank senior vice president, and chief financial officer. In addition, 32 advisors (70 percent) have graduate degrees; 11 (24 percent) have some knowledge of a regional language; and 16 (35 percent) have prior overseas experience.

The average cost for an OTA advisor in Russia and Ukraine is higher than that of the USAID program, in part because OTA’s program costs have included higher wages and greater use of short-term advisors.
In comparing the costs of the Treasury’s Office of Technical Assistance and USAID advisors, we selected Russia and Ukraine as countries for comparison because both the Treasury and USAID have similar technical assistance programs in those countries, about 60 percent of the resident advisors working in the former Soviet Union in fiscal year 1997 were posted to these countries, and the two countries accounted for about 30 percent of the total Treasury-funded program costs in the former Soviet Union that year. To determine the average cost per OTA advisor, we obtained the Treasury’s financial reports and advisor cost data for fiscal years 1995-97. We obtained cost data for a USAID contractor that performs similar activities in Russia and Ukraine to calculate USAID’s average advisor costs. For both OTA and USAID, we compared (1) wages paid to advisors overseas, including salary and benefits, housing, post allowances, and post differential; (2) support costs for the advisors, including office rent and utilities, administrative support in the field such as local-hire staff salaries and benefits, travel, and transportation, as well as the program management and administrative costs of OTA and of the USAID contractor responsible for the USAID program; and (3) short-term advisor costs, including salary and travel expenses for expert assistance on an as-needed basis. We used average advisor cost as a unit of measurement because the Department of the Treasury requests funds for the entire OTA program on a per-resident-advisor basis. We note that support costs are the largest cost element in our analysis of cost per advisor. The average cost per OTA advisor in Russia for fiscal years 1995-97 was $567,000, and the average cost per USAID advisor in Russia was $453,000. The major cost differences between the two programs were due to wages and the use of short-term advisors.
For the 3-year period, OTA paid its advisors an average of $184,300 in wages compared to $153,500 paid by the USAID contractor. OTA officials told us that they employ senior government officials and high-level officials from the private sector who are eligible for higher salaries. In addition, OTA relied more on short-term advisors for technical assistance than did the USAID contractor. OTA spent an average of $91,400 for short-term advisor assistance during the 3 fiscal years, compared to $30,000 for USAID. We note that average advisor costs sometimes can provide a skewed indication of the actual costs of a program. For example, average advisor costs for USAID in Russia for fiscal year 1997 were higher than in the previous 2 fiscal years because, as the USAID program was ending, the number of resident advisors declined and the fixed costs associated with support and short-term advisors were averaged over fewer advisors. Table II.1 depicts the average costs for OTA and USAID advisors in Russia for fiscal years 1995-97 by cost category, including the average cost of short-term advisors. (Notes to table II.1: Totals may not add due to rounding. We used full-time equivalents (FTE) to account for staff not assigned for a full year; average cost per advisor is a weighted average.)

The average cost per OTA advisor in Ukraine for fiscal years 1995-97 was $448,700, and the average cost per USAID advisor in Ukraine was $397,200. The major cost differences between the two programs were due to wages and the use of short-term advisors. For the 3-year period, OTA paid its advisors an average of $142,500 in wages, compared to $125,500 paid by the USAID contractor. Again, the higher wages reflected the advisors’ prior salaries. As in Russia, OTA relied more on short-term technical assistance than did the USAID contractor. OTA spent an average of $89,800 for short-term assistance during the 3 years, compared to $33,400 for USAID.
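The weighted-average advisor cost used in this comparison can be sketched as a short calculation: total program costs (wages, support, and short-term advisor costs) divided by full-time-equivalent (FTE) advisor years. The function and all figures below are illustrative assumptions, not GAO data:

```python
# Sketch of the weighted-average cost per advisor described above.
# All figures are hypothetical (in thousands of dollars), not GAO data.

def avg_cost_per_advisor(years):
    """Weighted average cost per FTE advisor across fiscal years."""
    total_cost = sum(y["wages"] + y["support"] + y["short_term"] for y in years)
    total_fte = sum(y["fte"] for y in years)
    return total_cost / total_fte

# A winding-down program: fixed support and short-term costs are spread
# over fewer FTEs, which inflates the per-advisor average.
steady = [{"wages": 1500, "support": 2000, "short_term": 300, "fte": 10}]
ending = [{"wages": 450, "support": 1800, "short_term": 300, "fte": 3}]

print(avg_cost_per_advisor(steady))  # 380.0
print(avg_cost_per_advisor(ending))  # 850.0
```

Weighting by FTEs, rather than averaging the yearly averages, mirrors the report’s note that staff not assigned for a full year are counted as fractional advisors.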
Table II.2 depicts the average costs for OTA and USAID advisors in Ukraine for fiscal years 1995-97 by cost category, including the average cost of short-term advisors. (Notes to table II.2: Totals may not add due to rounding. We used full-time equivalents (FTE) to account for staff not assigned for a full year; average cost per advisor is a weighted average.)

The Treasury does not receive direct appropriations for the foreign assistance activities of its technical assistance program in Central Europe and the former Soviet Union. Rather, USAID transfers funds appropriated under the Foreign Assistance Act of 1961, as amended (P.L. 87-195), to the Treasury. USAID transfers the funds under agreements authorized by section 632(a) or 632(b) of the act. Responsibility for program monitoring and evaluation is linked to the method USAID uses to transfer the funds. When USAID transfers funds under section 632(a), the transfer agreements are brief documents that do not obligate funds. Instead, these agreements are simply an allocation of funds from USAID to the Treasury for use in activities under the Treasury’s funding obligation process. USAID has minimal responsibility for approving these activities, and program monitoring and evaluation are the responsibility of the Treasury. When USAID transfers funds to the Treasury under section 632(b), USAID essentially retains control over how the funds are used and accounted for. Under a 632(b) transfer, USAID and the Treasury negotiate and agree upon how the funds will be used, and the transfer agreement requires the Treasury to follow USAID’s procurement and reporting rules. The funds are obligated by USAID, which is responsible for program monitoring and evaluation. The Treasury has received funds from USAID since fiscal year 1990 for its technical assistance program in Central Europe and since fiscal year 1992 for its program in the former Soviet Union.
During fiscal years 1991 and 1992, USAID’s transfers for Central Europe and the former Soviet Union were made primarily under 632(a) authority. During fiscal years 1993-96, the transfers took place primarily under 632(b) authority. In 1996, the State Department Coordinator of assistance to the former Soviet Union directed USAID to switch most funding authority from 632(b) to 632(a). The decision was made, according to the Coordinator, because disputes about money and policies between USAID and the Treasury were delaying program implementation. The Coordinator of assistance to Central Europe said he directed USAID to switch funding authority from 632(b) to 632(a) in 1997 after USAID said it could no longer provide adequate monitoring of 632(b) programs, given USAID’s downsizing over the past several years.

From July 1991 to September 1997, the Treasury’s senior advisors for the tax team were based in Paris, and from November 1994 to September 1997, the senior advisor for the financial institutions team was based in London. This appendix discusses the basis and costs of placing these advisors in those locations. In 1991, the Treasury identified Paris as the best location for its senior advisor for tax since it offered good communications and transportation links. It also provided the advantage of on-site coordination with the Organization for Economic Cooperation and Development, which was active in providing tax training for government officials from Central Europe. In 1994, the Treasury decided to post a senior advisor for the financial institutions program in London because it also provided better transportation links and an opportunity for on-site coordination with the European Bank for Reconstruction and Development, which planned to work on bank privatization. Also in 1994, the Treasury based its newly assigned senior advisor for the government debt issuance area in Budapest, Hungary.
The senior advisor for the budget area was likewise placed in Budapest in 1995. In 1997, the senior advisors for tax and financial institutions closed their Paris and London offices, and all senior Treasury advisors were consolidated in Budapest. Expenditure data provided by the Treasury showed that the average annual support costs of basing the senior advisor in Paris from fiscal year 1992 to 1997 were $103,796 in 1997 dollars. These costs included housing, travel, and office support. They did not include compensation and benefits or overall program management costs, since these costs are the same regardless of location. Comparable costs for the advisor in London from fiscal year 1995 to 1997 were $137,675. According to a Treasury official, these costs were lower than anticipated because in Paris the advisors lived in embassy housing, and in both Paris and London the embassies provided logistical support.

The following are GAO’s comments on the Department of the Treasury’s letter dated February 5, 1999.

1. The primary difference between our calculations of advisor costs and those of the Department of State, Office of the Inspector General (OIG), is that we included the cost of short-term advisors, while the OIG’s review did not. We included the short-term advisor costs because they represent a key component of the Treasury program. We discussed our methodology with OTA management and staff on several occasions and have clarified our presentation of the methodology in the body of the report.

2. Our review focused on the annual OTA requirement to review the financial disclosure statements of its advisors. We have clarified the report to include additional information on the other financial disclosure requirements that apply when advisors are hired.

3. Our report is not intended to imply that the Terms of Reference should be considered a principal measure of an advisor’s utility in a particular country.
Department of the Treasury officials told us the Terms of Reference are intended to ensure that the United States and a recipient country have a formal understanding of the advisor’s role and that the recipient government is committed to working with the advisor. As we point out in the report, the Terms of Reference were not available for 15 of the 19 advisors assigned to Russia and Romania since 1992. While we understand that it may not always be possible to have signed Terms of Reference for all countries, we believe that the Treasury should strive to establish such formal agreements to the maximum extent possible.

4. The report text has been modified to reflect this information.

At the request of the Chairman of the House Committee on International Relations and the Chairman of the Committee’s Subcommittee on Asia and the Pacific, we identified the types of technical assistance the Treasury’s program provided to Russia and Romania and the methods that OTA uses to conduct oversight of its advisors. In addition, we provided information on advisor qualifications, program cost, fund transfers between USAID and the Treasury, and the location of the Treasury’s senior advisors to its program. To identify OTA activities, we reviewed the Department of the Treasury’s and the Department of State’s current strategic planning documents; analyzed program planning and reporting documents; interviewed Treasury, State, and USAID officials in Washington, D.C.; and attended the annual Treasury advisors’ conference in Budapest in November 1997, where we interviewed senior, regional, and resident advisors on their background experience, work in host countries, and interaction and coordination with other bilateral and multilateral advisors. In completing our review, we examined in detail advisor activities in Russia and Romania and visited OTA’s advisors working in these countries.
At the time of our fieldwork, programs in Russia and Romania together represented about one quarter of the Treasury’s resident advisors. In Moscow, Russia, and Bucharest, Romania, we reviewed program documents such as reports written by the current advisors for the host-government officials and interviewed all OTA resident advisors, the International Monetary Fund advisor and/or representative, USAID contractors, private sector groups such as the Soros Foundation, and host-government officials at the deputy and vice minister levels and below at the ministries of finance and tax, the central banks, the Ministry of Interior and the General Prosecutor’s office in Bucharest, and the Chamber of Accounts in Moscow. We discussed the assistance being provided by Treasury advisors, host-government use of the assistance, and program achievements.

To address how OTA conducts program oversight, we interviewed Treasury officials, senior and resident advisors, and program officers; reviewed reports on the OTA program by the USAID and Treasury Inspectors General from 1994 to 1997; a 1998 management review by the Treasury’s Office of Organizational Improvement; and the Treasury’s procedures, strategic planning documents, advisor reports, and consultants’ reports. We also reviewed OTA’s procedures for reviewing financial disclosure reports as well as OTA advisors’ financial disclosure reports on file with the Treasury. To address advisors’ qualifications, we surveyed OTA program managers, interviewed advisors, reviewed personnel records of OTA advisors, and compared personnel records and qualifications to position requirements. We compared the applications of all the resident advisors in the OTA program as of November 1, 1997, to a sample of job announcements the Treasury used to advertise the resident advisor positions.
To determine the Treasury’s program costs and USAID contractor costs, we selected Russia and Ukraine because the Treasury and USAID have similar technical assistance programs in the countries, half of the resident advisors working in the former Soviet Union in fiscal year 1997 were posted to these countries, and the two countries accounted for about 30 percent of the total program costs in the former Soviet Union funded by the Treasury that year. To determine the average cost per advisor, we obtained the Treasury’s financial reports and advisor cost data for fiscal years 1995-97 and reviewed average cost analyses prepared by the Treasury, the State Department’s Inspector General, and USAID. Further, we interviewed Treasury officials and reviewed Treasury budget estimates to understand the elements of the program budget. We did not independently verify the validity of the Treasury’s data. We analyzed USAID contractor cost data to calculate average costs. We also discussed our analysis with USAID contractor officials, who agreed that our analysis accurately depicted USAID costs. Primarily because of different contractual arrangements with advisors and different financial reporting systems, USAID and the Treasury do not present cost data in the same way or with the same cost categories. To present comparable costs in this report, we gathered the various costs into three broad categories: compensation paid to resident advisors and contractors overseas, including housing allowances; support costs for the advisors and contractors; and the cost of employing short-term advisors to assist the resident advisors and contractors. To address the location of the Treasury’s senior advisors to its program in Paris and London, we interviewed Treasury and State officials and reviewed cost data and documents justifying the placement of advisors in those capital cities.
To determine the cost of basing senior advisors in Paris and London, we obtained support costs for these locations since 1991, including housing, travel, and transportation by senior advisors, and office support. As agreed with our requesters, we did not include compensation and benefits or program management costs because these costs are constant regardless of country. To compute an overall average support cost per advisor and location, we calculated an average advisor cost for each fiscal year and in total. Furthermore, to determine overall average support costs for each location, we summed the costs for all years and divided by the number of years that senior advisors were based at each location. Our analysis of costs for Paris was limited to fiscal years 1992-97 because the initial posting of an advisor came at the end of fiscal year 1991. To address the issue of fund transfers from USAID, we interviewed Treasury, State, and USAID officials and reviewed prior GAO reports on fund transfers from USAID to other agencies. As part of our work, we initially reviewed aspects of OTA’s logistical support contracts. During the course of this review, we became aware of a Treasury Inspector General investigation of the contracts and issues surrounding them. Because of this investigation, we terminated this aspect of our work and provided appropriate documents to the Inspector General. We performed our work from October 1997 to December 1998 in accordance with generally accepted government auditing standards.

Major contributors to this report: Jess T. Ford, Ronald A. Kushner, Maria Z. Oliver, Bruce L. Kutnick, Rona H. Mendelsohn, Janice V. Morrison, and James M. Strus.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Department of the Treasury's Office of Technical Assistance (OTA) foreign aid program, focusing on: (1) the types of technical assistance OTA advisors have provided to Russia and Romania; (2) oversight by OTA of advisors' activities; (3) advisor qualifications, program cost, and fund transfers between the Agency for International Development (USAID) and the Treasury; and (4) the location of the Treasury's senior advisors to its program. GAO noted that: (1) the Treasury's technical assistance advisors in Russia and Romania have assisted in efforts to reform tax and budget systems, improve banking and debt management policies, and enhance law enforcement; (2) in support of these initiatives, Treasury advisors have helped countries devise new systems and approaches to management of their finances, drafted legislation and procedures, and developed economic models; (3) host-government officials for the most part indicated that the advice received from the Treasury's advisors was beneficial to their reform efforts; (4) the advisor program has been carried out with little formal structure; (5) the only clear mandatory requirement is the filing of monthly reports, and reports of varying content were generally filed on a regular basis; (6) however, other documents OTA officials say they use in their oversight, such as host-country agreements, work plans, and the results of supervisory trips, were not available; (7) OTA has also been lax in enforcing advisors' financial disclosure requirements; and (8) in response to GAO's observations, OTA officials said that they use informal means such as electronic mail and telephone calls to carry out their oversight of advisor activities.
OPM’s mission and responsibilities are found in Title 5 of the U.S. Code, which provides for the effective implementation of civil service laws, rules, and regulations. OPM also evaluates the effectiveness of personnel policies, agency compliance with laws, rules, regulations and office directives, and agency personnel management evaluation systems. Overall, OPM manages the federal government’s human capital and is charged with helping agencies shape their human capital management systems and holding them accountable for effective human capital management practices. OPM does this in such a way as to help ensure that: (1) the federal government acquires, develops, manages, and retains employees with the knowledge, skills, and abilities needed to deliver services that the American public wants and deserves; and (2) agencies consistently uphold governmentwide values, such as merit system principles, veterans’ preference, and workforce diversity. OPM is also responsible for administering retirement, health benefits, and other insurance services to government employees, annuitants, and beneficiaries. In January 2001, we added strategic human capital management to our list of federal programs and operations identified as high risk. In a July 2001 report, we evaluated OPM’s goals and measures for assessing the state of human capital at federal departments and agencies, found weaknesses in OPM’s measures of workforce skills and employee accountability, and made recommendations to help address these issues, among other things. OPM has since taken action on our recommendations. In a January 2003 report, we examined OPM’s progress towards its own transformation, as OPM shifts its role from less of a rule maker and enforcer to more of a consultant and strategic partner in leading and supporting agencies’ human capital initiatives. We concluded that OPM should exert greater leadership to prepare the way for human capital reform.
In June 2006, we testified before the Subcommittee that OPM has made commendable efforts towards transforming itself into a more effective leader of governmentwide human capital reform. We noted, however, that it could build upon that progress by addressing challenges that remain in four key areas: (1) leadership; (2) talent and resources; (3) customer focus, communication, and collaboration; and (4) performance culture and accountability. First, in the area of leadership, we reported that our analysis of the 2004 Federal Human Capital Survey (FHCS) suggests that information from OPM’s top leadership does not cascade effectively throughout the organization and that many employees do not feel their senior leaders generate a high level of motivation and commitment in the workforce. These views on leadership were more strongly expressed by employees in OPM’s Human Capital Leadership and Merit System Accountability (HCLMSA) division—one of OPM’s key divisions and a unit responsible for partnering with agencies and vital to successful human capital reform efforts. In May 2006, OPM developed a series of action plans to address issues raised in the 2004 FHCS, including a number of planned actions to improve overall and cross-divisional communication and employee views of senior management. Second, we reported that in the area of talent and resources, OPM has made progress in assessing current workforce needs and developing leadership succession plans; however, if OPM is to lead governmentwide human capital reform, it can do more to identify the skills and competencies of the new OPM, determine any skill and competency gaps, and develop specific steps to fill such gaps. Third, we reported that the views of agency CHCOs and HR directors as well as OPM employees show that OPM can improve its customer service and communication with agencies and that guidance to agencies is not always clear and timely.
Executive branch agency officials also pointed to OPM’s Human Capital Officer (HCO) structure as a frequent barrier to efficient customer response and felt there are greater opportunities for OPM to engage in dialogue and collaboration with CHCOs and HR directors. Fourth, with respect to performance culture and accountability, we reported that OPM has made progress in creating a “line of sight” or alignment and accountability across its leaders’ expectations and organizational goals in its strategic and operational plan; however, success in achieving governmentwide reform objectives will rest, in part, on OPM’s ability to align performance and consistently support mission accomplishment for all employees of the organization. As Congress and other stakeholders have recognized the importance of strategic human capital management, several legislative changes have occurred. In November 2002, Congress passed the Homeland Security Act of 2002, which created DHS and provided the department with significant flexibility to design, in consultation with OPM, a modern human capital management system affecting approximately 180,000 personnel. Specifically, the legislation granted DHS certain exemptions from the laws governing federal civilian personnel management in Title 5 of the U.S. Code—providing DHS with certain human capital flexibilities to establish a contemporary human capital system that will enable it to attract, retain, and reward a workforce able to meet its critical mission. To address governmentwide human capital management challenges, Title XIII of the Homeland Security Act, also cited as the Chief Human Capital Officers Act of 2002, established CHCO positions in 23 agencies to advise and assist the heads of agencies and other executive branch agency officials in their strategic human capital management efforts.
The act also created the CHCO Council to advise and coordinate the activities of members’ agencies on such matters as the modernization of human resources systems, improved quality of human resources information, and legislation affecting human resources operations and organizations. The act also included significant provisions related to direct hire authority, the use of categorical ranking in the hiring of applicants instead of the “rule of three,” expansion of voluntary early retirement and “buy-out” authority, a requirement to discuss human capital approaches in Government Performance and Results Act reports and plans, and a provision raising the total annual compensation limit for senior executives and other senior professionals in agencies with performance appraisal systems that have been certified by OPM and OMB as making meaningful distinctions in relative performance. In November 2003, the National Defense Authorization Act for Fiscal Year 2004 provided DOD—the largest federal employer—with authority, in conjunction with OPM, to establish a flexible and contemporary human resources system, including a new (1) pay and performance management system, (2) appeals process, and (3) labor relations system—which together comprise the National Security Personnel System (NSPS). Like the Homeland Security Act, this legislation granted DOD certain exemptions from Title 5 of the U.S. Code and provided significant flexibility for designing NSPS, allowing for a new framework of rules, regulations, and processes to govern how defense civilian employees are hired, compensated, promoted, and disciplined. The NSPS would cover approximately 700,000 employees. Also, in the National Defense Authorization Act for Fiscal Year 2004, Congress authorized a new performance-based pay system for members of the SES. Under the new system, which took effect in January 2004, senior executives no longer receive annual across-the-board or locality pay adjustments. 
Executive branch agencies must now base pay adjustments for senior executives on individual performance and contributions to agency performance through an evaluation of their unique skills, qualifications, or competencies, as well as the individual’s current responsibilities. The new pay system raises the cap on base pay and total compensation. For 2006, the caps are $152,000 for base pay (Level III of the Executive Schedule), with a senior executive’s total compensation not to exceed $183,500 (Level I of the Executive Schedule). If an agency’s senior executive performance appraisal system is certified by OPM and OMB concurs, the caps are increased to $165,200 for base pay (Level II of the Executive Schedule) and $212,100 for total compensation (the total annual compensation payable to the Vice President). In addition to SES employees, many agencies use senior employees with scientific, technical, and professional expertise, commonly known as senior-level (SL) and scientific or professional (ST) positions. SL/ST positions have a lower maximum rate of basic pay than SES positions, and unlike those of SES members, SL/ST employees’ individual rates of pay do not necessarily have to be based on individual or agency performance. However, an agency may apply to OPM and OMB for certification of its SL/ST performance appraisal system, and if the system is certified as making meaningful distinctions in relative performance, the agency may raise the total annual compensation maximum for SL/ST employees to the salary of the Vice President. Certification does not, however, affect the maximum rate of basic pay of SL/ST employees. To qualify for these pay flexibilities, an agency’s senior executive performance appraisal system must be certified by OPM, with OMB concurrence, as meeting certification criteria jointly developed by OPM and OMB. Two levels of performance appraisal system certification are available to agencies: full and provisional.
To receive full certification, which lasts for 2 calendar years, the design of agency systems must meet nine certification criteria and agencies must provide documentation of prior performance ratings to demonstrate compliance with the criteria. Agencies can receive provisional certification, which lasts for 1 calendar year, if they have designed but not yet fully implemented a senior executive performance appraisal system, or do not have a history of performance ratings that meets the certification criteria. In September 2006, we testified before the Subcommittee that the certification criteria are generally consistent with our body of work identifying key practices for effective performance management systems. In addition, we testified that these senior executive and senior-level employee performance-based pay systems serve as an important step for agencies in creating alignment or “line of sight” between executives’ performance and organizational results. A detailed description of the certification criteria and process is provided in appendix II. The congressionally authorized senior executive performance-based pay system, implemented in 2004, as well as OPM’s implementation of other governmentwide human capital initiatives, provides an opportunity to learn from experiences gained and apply those lessons to the implementation of future human capital reforms. As OPM is likely to play a similar leadership and oversight role in future reforms, the following lessons learned may also assist OPM as it moves forward in the design and implementation of other human capital reforms and initiatives. To successfully transform or implement a large-scale change initiative such as governmentwide human capital reform, an organization must fundamentally reexamine its processes, organizational structures, and management approaches—including its workforce capacity. 
Strategic workforce planning addresses two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals, and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. As mentioned previously, in 2003, we reported that OPM was undergoing its own transformation—from less of a rulemaker to more of a consultant in leading and supporting executive agencies’ human capital management systems. As the organization transforms and OPM works to balance rules and tools and change its organizational culture, it is critical that OPM examine its internal capacity to ensure its workforce has the competencies to meet the multiple demands of the future and successfully implement human capital reforms. In particular, we have reported that a one-size-fits-all approach to human capital management is not appropriate for the challenges and demands government faces and that there should be a governmentwide framework to guide human capital reform. Thus, it is particularly important that OPM’s workforce have the knowledge, skills, and abilities to understand how to balance the need for consistency across the federal government with the desire for flexibility, so that its staff can assist individual agencies in tailoring their human capital systems to best meet their needs. Striking this balance will not be easy, but it is necessary to maintain a governmentwide system that is responsive enough to adapt to agencies’ diverse missions, cultures, and workforces. Executive branch agency experiences with implementing the senior executive performance-based pay systems and other human capital efforts point to a lack of knowledge and experience among OPM staff.
Several executive branch agency officials commented that OPM took a “we’ll know it when we see it” approach and was thus unable to effectively communicate its expectations to agencies regarding the senior executive performance appraisal system certification process. In addition, executive branch agency officials told us they believe the DOD and DHS human capital reform efforts severely taxed OPM’s technical resources, specifically its pay and compensation staff. One CHCO surmised that OPM’s capacity is dependent upon a few key employees skilled in these areas, particularly innovative pay and compensation approaches. An OPM senior executive confirmed this, telling us that turnover and retirement were problematic for pay and compensation experts at OPM. Also, a majority of agency CHCOs, HR directors, and their staffs expressed concern about whether OPM generally has the technical expertise needed to provide timely and accurate human capital guidance and advice both now and in the future. We previously reported that problems arose for many agencies when technical questions had to be communicated via OPM HCOs to the policy experts at OPM. This issue may have been magnified for some agencies by the frequent turnover or reassignment of HCOs. The HCO position was established at OPM in 2003 as part of its transformation efforts to help improve customer service to agencies. An executive branch agency official told us that her agency was assigned four different HCOs in the last 18 months.
According to OPM’s most recent strategic human capital plan, OPM recognizes that HCO staff will need to develop greater familiarity with areas beyond each individual’s technical expertise and plans for its staff to gain “cross-functional knowledge” through means such as staff participation on cross-functional work groups that address various initiatives, training opportunities, and other developmental assignments that lend themselves to professional growth and development. Further, our analysis of OPM’s agency results from the 2004 FHCS and 2005 follow-up focus group data suggests that OPM employees may not be receiving sufficient training to enhance their skills and competencies. OPM employees were less likely than employees in the rest of the government to agree with the statement “I receive the training I need to perform my job”: 53 percent of OPM employees agreed with this statement, compared with 60 percent of employees from the rest of government. Focus group participants selected this item as one of the most important issues for OPM to address—expressing the view that OPM’s culture does not support training, employees do not have time to attend training classes, and managers are not given sufficient and timely training budgets. An OPM executive supported these views, stating that it can be a struggle to convince managers that people should attend training. A former senior OPM official told us that he did not have an overall budget, including training, for his department while at OPM. OPM has begun to align its workforce skills and competencies to meet additional requirements stemming from future reforms and other environmental changes. For example, OPM conducted agencywide skills and competencies assessments in 2001 and 2003, and has conducted skills assessments for certain targeted occupations—information technology, human resource management, and selected mission-critical positions.
Validating skills and competencies is important because the workforce skills and competencies needed to be a strategic partner, toolmaker, or consultant may be different from those needed in the past to be a rulemaker or enforcer of regulations. Importantly, OPM has also updated several of its key strategic management documents. First, in March 2006, OPM issued its Strategic and Operational Plan, 2006-2010—the starting point and basic underpinning for transformation. The plan’s strength is in its definition of clear, tangible goals and deliverables. However, the plan does not include a description of the relationship between the long-term goals and annual goals. Second, in August 2006, OPM updated its Corporate Leadership Succession Management Plan to include all of its supervisory, management, and executive positions with succession planning profiles that contain a list of specific and general technical competency requirements for each position. This is important because the problem of a lack of knowledge and experience at OPM may be compounded by the potential loss of institutional knowledge. In June 2006, we testified that without careful planning, employee attrition, including senior executives, could pose the threat of an eventual loss in institutional knowledge, expertise, and leadership continuity at OPM. OPM’s succession planning data show that as of July 2006, nearly half of its 376 supervisors, managers, and executives were eligible for either early or regular retirement. Based on historical trend data, OPM projects an overall loss (including retirements) of roughly 65 to 75 supervisory, managerial, and executive positions per year. Even more recently, at the end of September 2006, OPM issued its Plan for the Strategic Management of OPM’s Human Capital for fiscal years 2006-2007. 
According to OPM’s strategic human capital plan, voluntary attrition among employees overall at OPM has averaged approximately 11 percent over a 3-year period and voluntary retirements comprised approximately 25 percent of separations from 2003 to 2006. OPM has developed strategies to help support its succession planning objectives, such as providing resources to improve and develop the competence of internal candidate pools to develop deep “bench strength.” In addition, OPM plans to target recruitment efforts around the critical and core competencies it has identified for each position and to use recruitment incentives and flexibilities to attract the most desirable candidates. These succession planning efforts are important because leading organizations engage in broad, integrated succession planning efforts that focus on strengthening both current and future organizational capacity. OPM’s ability to lead and oversee human capital management policy changes that result from potential human capital reform could be affected by its internal capacity and ability to maintain the right skills and competencies of its workforce, as well as an effective leadership team. The steps taken by OPM demonstrate progress in achieving its transformation and it must continue on this path by closely monitoring its actions to align its workforce to meet current and emerging demands. A new agencywide skills assessment would enable OPM to better align its workforce with needed resources to meet such demands. Building and maintaining expertise in areas that will be critical to future reforms, such as classification and pay and compensation policy, and ensuring that OPM employees receive opportunities for training and development that will help them in assisting agencies with the implementation of reforms, are critical for future reform success. 
These workforce and training goals and objectives also should be included in the means and strategies developed in OPM’s strategic planning process. Moving forward, OPM can continue to monitor implementation of long-term strategies to better prepare its workforce for change and continue to build its workforce capacity to meet the demands of the future. We have reported that the federal government should follow a phased approach toward human capital reforms that meets a “show me” test. That is, each agency should be authorized to implement a reform only after it has shown it has met certain conditions, including having in place the institutional infrastructure necessary for success. This infrastructure includes, at a minimum, a modern, effective, credible, and validated performance management system that provides a clear linkage between institutional, unit, and individual performance-oriented outcomes, as well as adequate internal and external safeguards to ensure fairness, prevent abuse, and guard against discrimination. The absolutely critical role that a solid infrastructure plays has been amply demonstrated by our own and other organizations’ experiences in shifting to market-based and more performance-oriented pay. These experiences have shown that market-based and performance-oriented pay reforms cannot be simply overlaid on existing ineffective performance management systems, but must be part of a broader strategy of change management and performance improvement initiatives. As the leader of the federal government’s human capital strategies, OPM plays a key role in fostering and guiding improvements in all areas of strategic human capital management across the executive branch. As part of its key leadership role, OPM can assist—and as appropriate, require—the building of the infrastructures within agencies needed to successfully implement and sustain human capital reforms and related initiatives.
OPM can do this in part by encouraging continuous improvement and providing appropriate assistance to support agencies’ efforts. As we testified in September 2006, overall, the regulations that OPM and OMB developed to administer a performance-based pay system for executives serve as an important step for agencies in creating an alignment or “line of sight” between executives’ performance and organizational results. However, OPM’s approach to certifying agencies’ senior executive performance appraisal systems could more fully promote the building of the institutional infrastructures needed to effectively implement the senior executive performance and pay reforms. Under OPM and OMB regulations, agencies that are authorized to implement the new pay flexibilities receive either a provisional or full certification. Provisionally certified agencies receive the same pay flexibilities as those with fully certified systems, even though agencies with provisional certification do not meet all nine of the certification criteria. In essence, the provisional category of certification constitutes a phased approach to implementing performance-based pay systems by allowing agencies to work toward meeting the OPM and OMB full certification requirements as they are implementing the new authorities. Of the 33 performance appraisal systems that were certified in 2006, only the Department of Labor’s system for its senior executives received full certification. The remaining 32 systems received provisional certification, the majority of them for the third straight year. (See app. III for the list of agencies that have received certification of their performance appraisal systems since 2004.) An agency that is provisionally certified must reapply annually, rather than every 2 years as is required of agencies with full certification.
This annual reapplication process for agencies with provisional certification is important in order to help ensure continued progress toward fully meeting congressional intent in authorizing the new performance-based pay system. Moreover, continuing scrutiny from OPM and OMB is important because there is no required time frame under which a provisionally certified agency must demonstrate it meets all the OPM and OMB criteria and thereby receive full certification. In that regard, OPM’s January 2006 guidance required agencies with provisional certification to submit information to OPM and OMB showing improvements the agency has made in response to comments from OPM and OMB. This requirement was underscored in OPM’s October 31, 2006, guidance for calendar year 2007, which asked agencies to highlight in their certification request any description or evidence of improvements made as a result of comments from OPM or OMB in response to the agency’s 2006 certification submission. As noted, OMB and OPM’s efforts represent an important step in fostering “lines of sight” within the agencies. Nonetheless, OPM has opportunities to further strengthen its efforts. More specifically, additional front-end and ongoing OPM involvement appears to be needed to assist agencies in achieving and maintaining full certification. Executive branch agency officials said OPM’s role in the certification process focuses more on enforcing rules regarding applications for certification than on guiding an agency to build the necessary infrastructure for a performance-based pay system. In addition, these executive branch agency officials said OPM has helped them improve their pay systems, but they also said OPM should provide more active assistance during the design and implementation of the system rather than waiting to evaluate the end results.
Further, an agency CHCO said OPM is not prepared to interact with agencies to progressively develop and sustain their senior executive performance-based pay systems over time once they get through the certification process. Since the certification process started in 2004, OPM has raised the bar for certification by placing a greater emphasis on measurable business outcomes. Raising the bar in the spirit of continuous improvement is appropriate, but agencies cannot achieve the higher standards unless they are continually building the foundations essential to support augmented requirements and new improvements. The only two agencies that were fully certified in 2004, the General Services Administration (GSA) and the Pension Benefit Guaranty Corporation, were unable to retain full certification when they reapplied in 2006. An official from one of these agencies said it applied for full certification in 2006, but received provisional certification because OPM had raised the bar for meeting full certification. The agency official stated that after the agency received full certification in 2004, OPM stopped communicating with it about new developments in the certification process. In addition, this official said the agency was “left in the dark” about how OPM’s certification standards were potentially changing, and how the process for certification was evolving. It was not until 4 months after the agency submitted its application to recertify its system that OPM raised concerns regarding “weak” executive performance measures, though the agency believed that it had met the requirement according to OPM’s guidance. The agency opted to accept provisional certification rather than redo its senior executive performance plans and wait for full certification. In general, OPM has recognized that agencies need more assistance and guidance in developing an infrastructure to support performance management systems for executive branch employees below the senior executive level.
OPM developed the Performance Appraisal Assessment Tool (PAAT) and has promoted performance management beta sites to address this need. The PAAT provides agencies with an assessment tool that focuses on the design and implementation of performance management systems, the training and development of supervisors, and the agency’s accountability for the system. The PAAT helps agencies identify weaknesses in their performance management systems and provides them an opportunity to develop a comprehensive strategy for revising their performance management practices to better support a results-focused performance culture. The beta sites give agencies an opportunity to test their nonexecutive performance management systems on a small scale before expanding them agencywide. Agencies and OPM use the PAAT to evaluate the progress of the beta sites. This approach of evaluating and testing allows agencies to build internal capacity, gain experience, and demonstrate that they are prepared to link pay to performance for all employees. However, as one executive branch agency official noted, the PAAT is used more by OPM to ensure accountability than to build agency infrastructure. Similar to concerns expressed about the senior executive system certification process, an agency HR director said OPM does not provide “up-front” implementation plans to agencies that outline the required agency investment and infrastructure needed to successfully meet new human capital requirements. Going forward, OPM can help agencies build this infrastructure by designing its human capital reform efforts to promote and support continuous agency improvement. OPM will need to expand the focus of its efforts to help identify the obstacles that are impeding agencies from achieving desirable human capital outcomes, and then take appropriate measures to address them and set mutually agreed-upon goals for improvement.
These actions will help ensure that agencies continue to make substantive progress toward modernized, credible performance management systems, and that provisional certifications do not become the norm. OPM can also take steps to define what it will take, in terms of fact-based and data-driven analyses, for agencies to demonstrate that they are ready to receive full certification, and then help agencies develop the infrastructure necessary to produce these results. Our prior work has found that high-performing organizations strategically use partnerships and that federal agencies, such as OPM, must effectively manage and influence relationships with organizations outside of their direct control. High-performing organizations strengthen accountability for achieving crosscutting goals by placing greater emphasis on collaboration, interaction, and teamwork across organizational boundaries to achieve results that transcend those boundaries. Communicating with stakeholders is especially crucial in the public sector, where policy making and program management demand transparency and a full range of stakeholders and interested parties are concerned not only with what results are to be achieved, but also with which processes are used to achieve those results. Our prior work has identified a number of opportunities where OPM could improve its collaboration with stakeholders. In 2003, we reported that the lack of coordination between OPM and GSA, the lead agencies for the governmentwide telework initiative, created confusion for federal agencies in implementing their individual telework programs. More recently, our review of oversight of Equal Employment Opportunity (EEO) requirements and guidance found little evidence of OPM coordination with the Equal Employment Opportunity Commission (EEOC).
Insufficient understanding of OPM and EEOC’s mutual roles, authority, and responsibilities resulted in a lost opportunity to realize consistency, efficiency, and public value in federal EEO and workplace diversity human capital management practice. We have also reported that using interagency councils has emerged as an important leadership strategy in both developing policies and gaining consensus and consistent follow-through within the executive branch. With respect to human capital reforms, we have reported that the CHCO Council should be a key vehicle for this needed collaboration and is vital to addressing crosscutting federal government strategic human capital challenges. Executive branch agency officials said the senior executive performance appraisal certification process was a missed opportunity for OPM to better collaborate with the CHCO Council. One agency CHCO said OPM traditionally uses council meetings to present information to the CHCOs, but does not encourage discussions or seek the council’s input. Another agency CHCO said the council has rarely been used to debate new human capital policies. This one-way communication does not provide a forum for agency CHCOs to contribute ideas or discuss their experiences. Some CHCOs and HR directors pointed to OPM’s successful collaborative efforts through the CHCO Council, such as its assistance to agencies in the aftermath of Hurricane Katrina; however, they also told us that OPM misses opportunities to partner more effectively with agencies. An agency CHCO said that more robust policy discussion on the council would promote community building and collaboration among agencies and OPM. According to OPM officials, OPM provided the CHCO Council with opportunities to discuss the certification process. However, some CHCOs wanted more involvement in crafting the fundamental design and applicable issues of the certification process, rather than commenting on draft regulations after the fact. 
While the new interim final regulations were being developed and issued in 2004, OPM provided two presentations to the full CHCO Council on the new requirements for senior executive performance appraisal systems, along with periodic updates. The CHCO Council minutes show that one presentation focused on the design of the new performance appraisal system and the second on the process for obtaining certification. Agency CHCOs were able to ask questions about the proposal and make suggestions. For example, one CHCO suggested that OPM reconsider the timing of the recertification process since it coincided with agencies’ annual performance appraisal cycle, and this has proven to be a key issue for the certification process. Further, CHCOs were given a very short time frame of 24 hours to review and comment on the proposed certification criteria. Executive branch agency officials overwhelmingly reinforced a need for OPM to do more to collaborate and facilitate information sharing with the council and HR directors. More collaboration with the CHCO Council during the design phase of human capital initiatives would enable OPM to incorporate agency suggestions and build a governmentwide consensus for reform. OPM staff involved with the certification process told us that in 2004, OPM sought input on the certification criteria from OMB and members of the CHCO Council. There were also opportunities for agency comments when the draft regulations were released and through the CHCO Council. In addition, the CHCO Council Subcommittee on Performance Management reviewed the process. However, most comments focused on pay flexibilities rather than the certification process. OPM has taken some steps to improve the effectiveness of the council by expanding the membership to include deputy CHCO positions. Some deputy CHCOs are also the agencies’ HR directors, but others perform different deputy roles.
Including deputy CHCOs will bring additional HR expertise and provide more leadership continuity to the council. An agency CHCO said OPM is taking other steps to improve collaboration with agencies, such as promoting more CHCO Academy meetings on the certification process and reinstituting executive resource forums, which help keep agency executive resources staff current on OPM’s certification policies. A recent executive resource forum gave agency executive resource staff an opportunity to discuss common concerns about the certification process. Moving forward, collaboration will be critical as human capital reforms begin to take hold across government. If OPM is to lead reform successfully, it will need to strategically use the partnerships it has available to it, such as the CHCO Council and other key stakeholders. OPM can continue to build upon its expansion of the CHCO Council and promotion of CHCO Academies and executive resource forums. These are important steps toward building a collegial environment for debating and collaborating on future human capital reforms. Our work on high-performing organizations and successful transformations has shown that communication with customers should be a top priority and is central to forming the partnerships needed to develop and implement transformation strategies. This communication is most effective when done early, clearly, and often. Providing agencies with clear and timely guidance is one way of effectively communicating with OPM’s customers. In the past, we have reported concerns with OPM’s communications pertaining to their leadership in implementing governmentwide human capital initiatives and have recommended ways in which OPM could improve its guidance to federal agencies. For example, in 2003 we reported that an initial lack of clarity in telework guidance for federal agencies from OPM led to misleading data being reported on agencies’ telework programs. 
As a result, we recognized the need for OPM to provide agencies with consistent, inclusive, and unambiguous support and guidance. Similarly, an initial lack of clear and timely guidance hindered agency implementation of senior executive performance appraisal systems. When the certification process began in 2004, OPM provided agencies with limited guidance for implementing the new regulations. Officials at a majority of the CHCO Council agencies told us they did not have enough guidance to properly prepare for meeting the certification criteria. With the release of the regulations in 2004, OPM’s initial guidance consisted of a list of documents required for provisional and full certification and a sample cover letter to accompany each application. The lack of more specific guidance created confusion as agencies attempted to understand the broadly defined regulatory criteria and adjust to the requirements for certification. Agencies did not fully understand what the regulations required in order to receive certification, resulting in an inefficient process and unnecessarily increasing the workload of agency human resources staffs. According to executive branch agency officials, when they contacted OPM for clarification or assistance with requirements, they received conflicting answers and advice. Executive branch agency HR directors said that they sometimes received mixed messages on the certification process from OPM, and it appeared that answers would change depending on the individual an agency was working with at OPM. One agency CHCO said that rather than providing agencies with guidance, OPM tends to wait to receive the agency submission and then determine if it meets requirements. While OPM directs agencies to its Web site and online resources, an agency CHCO said that although this information was useful, it did not fulfill all of the agency’s information needs.
OPM officials we spoke with about this agreed that they need to provide clear and consistent guidance to agencies and said they are working to improve this. They said the certification of agency performance appraisal systems has been an iterative, learning process, and OPM is positioning itself to provide more guidance to agencies. For example, OPM has continued to update its annual certification guidance to provide agencies with more assistance when developing their senior executive appraisal systems for certification. The guidance for calendar year 2006 includes explicit examples from executive performance plans that comply with the certification criteria. The continued late issuance of certification guidance in the years since the 2004 regulations were released has plagued the process by delaying the certification of agency systems. Since certification of appraisal systems runs on the calendar year, an agency’s provisional certification expires on December 31st unless the agency submits an application and receives certification for the next calendar year. To avoid a gap in certification between calendar years, applications for appraisal system certification need to be approved before January 1st. However, OPM did not issue guidance for calendar year 2006 until January 5, 2006, causing agencies to lose time in developing their 2006 applications for review and certification. This delay was compounded when OPM clarified its guidance in a January 30, 2006, memorandum telling agencies that senior executive performance appraisal systems would not be certified for calendar year 2006 if the performance plans did not hold executives accountable for achieving measurable business outcomes. Some agencies had to revise their submissions, where necessary, to meet OPM’s additional requirements, causing further delays. 
Untimely guidance has been a recurring problem with OPM’s implementation of the certification process, beginning when OPM initially developed the regulations for certifying appraisal systems. In late November 2003, Congress passed legislation to create the new senior executive performance-based pay system to take effect in January 2004; however, it took OPM 8 months to publish the certification criteria, which were included in the interim regulations jointly released with OMB in July 2004. As a result, agencies that were certified in 2004 were unable to operate under the higher executive pay caps until late in the calendar year. In December 2004, OPM issued guidance for calendar year 2005. The guidance was issued before the start of the calendar year, but only by a few weeks. On November 1, 2006, OPM posted a memorandum to heads of departments and agencies from the Director of OPM, notifying them of guidance for agencies seeking certification for calendar year 2007. These delays and late revisions exacerbate the time crunch agencies face when applying for certification. According to executive branch agency officials, after agencies’ performance cycles end on September 30, they essentially have 90 days until the end of the calendar year, when their current certification expires if they are provisionally certified or in their final year of full certification. Within this time frame, agencies must conduct senior executive performance assessments and reviews, develop performance plans for the next performance year, and compile agency and senior executive performance data for the certification application. The late release of certification guidance adds a level of uncertainty to the process that can delay an agency’s submission of its application until after the start of the calendar year. 
Some agencies delay preparing their certification applications because they do not know when OPM will release its annual guidance or if there will be any changes in requirements from the previous year. This creates a gap in certification after an agency’s current certification expires. Until the agency’s senior executive performance appraisal system is recertified, it must operate under the lower “uncertified” executive pay cap of $152,000 in 2006 ($13,200 less than for certified systems), while the cap on total compensation is $183,500 ($28,600 less than for certified systems). OPM has acknowledged that the pay limitations in this certification gap can negatively impact an agency’s ability to recruit, reassign, and retain qualified senior executives. Executive branch agency officials expressed similar concerns about how the certification gap limits their ability to attract and hire new executives. They also said the certification gap creates an uneven playing field between agencies with certified systems and agencies that are still awaiting recertification. In July 2006, OPM issued regulations to alleviate one of the concerns with the certification gap. The regulations now allow agencies to increase the pay rates of senior executives once the agency is certified, even if it happens after the start of the calendar year. These regulations resolve a symptom of the certification gap, but do not address the underlying causes of the time crunch agencies face when applying for certification. Also, according to OPM officials, the administration has submitted a legislative proposal to Congress to eliminate the calendar year basis for certification. However, such legislation has not been introduced. Moving forward, OPM could alleviate confusion, delays, and inefficiencies by providing agencies with clear and timely guidance for implementing human capital reforms. 
OPM needs to clearly communicate its expectations and provide agencies with adequate time to adjust to any changes in requirements. When designing new human capital initiatives, OPM could work with agencies to identify what guidance agencies will need and develop a timeline for when OPM will release such guidance. A different time frame for certifying performance appraisal systems could also help alleviate the time crunch agencies face when applying for certification. We have reported that leading practices and benchmarking are important in supporting agency transformation efforts, and we often include case illustrations of leading practices in our reports. In May 2003, we recommended that OPM work to more thoroughly research, compile, and analyze information on the effective and innovative use of human capital flexibilities and more fully serve as a clearinghouse in sharing and distributing information. OPM began working with a contractor in the summer of 2005 to review hiring flexibilities and authorities to better determine which ones are used and not used, who is using them, and when and how they are being used; however, it is still unclear whether OPM has created a “clearinghouse” of information to help agencies meet their human capital needs. In 2004, we stated that agencies need to provide OPM with timely and comprehensive information about their experiences in using various approaches and flexibilities to improve their hiring processes, and that OPM could serve as a facilitator in the collection and exchange of information about agencies’ effective practices and successful approaches. Executive branch agency officials told us that OPM could have better facilitated the sharing of best practices for developing and implementing senior executive appraisal systems. According to OPM, in the last 3 years, it has reviewed and certified about 100 applications for appraisal system certification. 
OPM could use this archive of information to identify some best practices for developing certified systems, but OPM has not fully shared this information with agencies. Director Springer said OPM has met with officials from the only agency currently with full certification, the Department of Labor (DOL), to study what they have done right. However, Director Springer did not know if other agencies had taken the initiative to contact DOL to learn from their success. A senior OPM official said OPM did not provide agencies with examples of “best practices” for certification applications because OPM did not want agencies to think there was only one “right way” to get certified. We have reported that a “one size fits all” approach to human capital management is not appropriate, but we also recognized the value of documenting a range of best practices which agencies can tailor to their specific needs. One agency HR director said agencies were anxious to learn about what was going on at other agencies and did not understand why OPM was reluctant to share information. Without sufficient guidance from OPM, agencies relied on each other where possible to develop an understanding of the certification requirements. One CHCO also took the initiative to use CHCO Academy meetings to engender information sharing among agencies about the application process. However, agencies were unable to resolve uncertainties and disagreements about the regulatory requirements without clearer guidance from OPM. Executive branch agency officials said best practices for certification could help them improve the design of their performance appraisal systems. For example, executive branch agency officials said best practices for developing senior executive performance measures would help them make their performance plans more results based, as required for certification. Recently, OPM has taken steps to share information among agencies. 
In September 2006, OPM provided agencies’ executive resource directors with samples of agency senior executive performance plans, though OPM did not specify why these samples were selected or whether they should serve as best practices for other agencies. Moving forward, OPM should facilitate the sharing of best practices for human capital reforms among federal agencies. Director Springer has said she wants the CHCO Council to develop a best practices initiative to collect and share information on the certification process. The CHCO Council could be used to facilitate best practices for other human capital initiatives as well. Providing a forum for agencies to learn from each other’s experiences will allow agencies to share effective strategies and avoid common pitfalls. We have reported that communication during a transformation is not about just “pushing the message out.” Given the uncertainties that performance-based pay systems may generate for agencies and employees accustomed to receiving more standardized pay increases, two-way communication is especially important in an environment of human capital reform. Creating opportunities for employees and customers to communicate concerns and experiences surrounding a transformation allows them to feel that their experiences are important and acknowledged. Once this employee and customer feedback is received, it is important to use it to make any appropriate changes to the implementation of the transformation. For example, OPM uses its FHCS as an important method of gathering its own employee feedback and has used this information to take actions to improve its organization. In addition, OPM recognizes that it is important to notify and involve the employees affected by personnel demonstration projects, which are similar to the senior executive performance-based pay system, though OPM does not require those implementing such demonstration projects to obtain feedback. 
However, in its Demonstration Projects Evaluation Handbook, OPM suggests that a survey is one method that could be used to obtain employees’ views on the impact of a demonstration project to help develop lessons learned that could be shared with the affected agency, as well as governmentwide. We have also reported that high-performing organizations understand they need to continuously review and revise their performance management systems through monitoring their systems, informally and formally, including listening to employees’ and stakeholders’ views. OPM does not actively solicit and act on feedback from agencies on the implementation of the certification process. Executive branch agency HR directors said there was not a formal mechanism, such as a survey instrument, for agencies to provide feedback to OPM on its guidance and assistance to agencies. An OPM executive within the HCLMSA division confirmed that OPM does not have a formal feedback mechanism; however, this executive said OPM converses with agencies regularly and therefore did not see the need to obtain information in this way. Informal feedback from agencies is primarily communicated through the HCOs. OPM holds regular meetings of the HCOs to discuss agency concerns. However, executive branch agency officials said OPM does not always act to address these concerns. OPM also gathers agency feedback through the CHCO Council and executive resource forums. OPM’s current feedback mechanisms are important and valuable, but they could be supplemented, though not replaced, by more formal outreach. Formal feedback mechanisms can ensure that OPM gathers a full range of views by giving everyone an opportunity to comment. Formal feedback also provides a mechanism for collecting the views of clients and employees in one place, allowing OPM to track and report progress over time. 
Also, OPM does not gather feedback from senior executives who are directly affected by the new performance appraisal systems and does not require agencies to survey senior executives, even though agencies are approaching the fourth year of implementation. Director Springer said OPM has not surveyed members of the SES about their attitudes towards the new system. In September 2006, she said it would be premature to conduct a survey before the system takes hold, but she did not say when the timing might be appropriate. Also, the 2006 FHCS, OPM’s most recent survey that gathers employees’ perceptions of federal human capital practices in their agencies, did not include any questions specifically designed to gather feedback on changes to senior executive performance systems. However, Director Springer said OPM plans to analyze a recent survey of SES members conducted by the Senior Executive Association to obtain the experience and views of SES members on the new executive systems. Going forward, OPM should recognize the usefulness of agencies’ and senior executive employees’ views on the certification process and identify a systematic approach to obtain feedback on this and future human capital reforms. Feedback mechanisms, such as surveys or focus groups, could help OPM identify what its customers think it is doing well and where it needs to improve. Once obtained, feedback should be considered in developing new agency guidance, and OPM should take steps to address any specific agency concerns, as appropriate. High-performing organizations understand they need to continuously review and revise their performance management systems to achieve results and accelerate change. These organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the environment. 
We have reported that agencies seeking human capital reform should consider doing evaluations that are broadly modeled on the evaluation requirements of the OPM demonstration projects. Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, costs and benefits, impacts on veterans and other equal employment opportunity groups, adherence to merit system principles, and the extent to which the lessons from the project can be applied governmentwide. Such an evaluation could ensure accountability, facilitate congressional oversight, allow for any midcourse corrections, and assist the agency in benchmarking its progress with other efforts. Also, monitoring the implementation of new pay systems is important because unintended consequences may arise. Organizations have found they should be open to refining their systems. For example, we have reported that, in order to spread pay increases among as many employees as possible, managers at the Federal Deposit Insurance Corporation (FDIC) tended not to award merit pay increases to top-performing employees who were about to be promoted in the career ladder; as a result, these high-performing employees were not getting the merit pay increases they deserved. FDIC recognized that this unintended consequence needed to be corrected in future iterations of the pay system and that managers needed help in learning how to make the necessary distinctions in employees’ contributions. As we noted in our September 2006 testimony, OPM needs to carefully monitor the implementation of agencies’ senior executive performance management systems, especially those that have provisional certification. This is because, as also noted earlier in this report, agencies with provisional certification have only met four of nine required criteria for certification and can still receive the pay flexibilities of the new system. 
In other words, agencies can receive the benefits of the new pay-for-performance system without meeting all of its requirements and having safeguards in place. We testified in October 2005 that in our view such provisional certifications should not be an option under any broad-based classification and compensation reform proposal. Although OPM does not have an evaluation strategy, it is taking steps to monitor how agencies are making meaningful distinctions in senior executive performance. Such distinctions are required by statute and are one of the nine criteria for certifying agencies’ senior executive performance appraisal systems (as shown in app. II). Once agencies have provisional or full certification, OPM monitors this criterion by measuring the distributions of agencies’ performance ratings and pay. This information helps OPM determine if agencies are making meaningful distinctions among the performance of their senior executives. Such distinctions as part of an effective performance management system are important because they allow the organization’s leadership to appropriately reward those who perform at the highest level. In its Report on Senior Executive Pay for Performance for Fiscal Year 2005, OPM stated that the data indicate that federal agencies are taking seriously the requirement to develop rigorous appraisal systems and to make meaningful distinctions in performance ratings and pay. All reporting agencies have moved away from pass/fail appraisal systems and now have at least one performance level above “fully successful.” In fiscal year 2005, 43 percent of career SES governmentwide were rated at the highest performance level, compared to 75 percent in 2003 prior to the implementation of the SES pay-for-performance system. Further, OPM reported for fiscal year 2005 that the percentage of SES rated at the highest performance level declined 16 percent from the prior year. 
OPM also reported that the largest increases in salary went to SES rated at the highest performance level. Although SES pay and performance award amounts vary by agency based on factors such as compensation strategy, funding, and agency performance levels, OPM believes these general trends suggest a further refinement may be occurring in the process of distinguishing outstanding performers. Developing an evaluation strategy that works within OPM’s existing required systems—such as the Human Capital Assessment and Accountability Framework (HCAAF)—is one approach that OPM can take to track agencies’ progress in implementing their senior executive performance systems as well as hold them accountable for meeting OPM’s certification criteria. For example, DOD officials suggested that OPM could work with agencies to develop metrics under the HCAAF to determine whether agency performance management systems were making meaningful distinctions based on relative performance or other such important criteria. These metrics could be reported in current systems, such as the President’s Management Agenda (PMA). Because OPM carries out its role in a decentralized environment where the results of its efforts largely take place at federal agencies outside its direct control, it is particularly important that OPM develop a strategy to track agencies’ progress in meeting its human capital reform goals. OPM could require evaluations that are broadly modeled on the evaluation requirements of the OPM demonstration projects. It can work within its currently required systems to make reporting requirements less onerous and part of agencies’ routines. As we testified in September 2006, in the future, OPM should maintain a focus on continuous improvement of agency systems by monitoring the certification process, determining whether any obstacles are impeding agencies from receiving full certification, and taking appropriate measures to address them. 
Significant reforms are already underway to modernize the federal government’s human capital management systems to better position agencies to meet the challenges of the 21st century. OPM is taking steps to better prepare itself and agencies for governmentwide human capital reform through the implementation of the senior executive performance appraisal system certification process, other performance management initiatives, such as its PAAT and beta sites, and other governmentwide human capital initiatives. These reform efforts present an opportunity for OPM to evaluate and learn from its approach to implementing these initiatives—lessons that can be applied to ongoing and future human capital reforms. OPM’s workforce and succession planning efforts are also vital to ensuring it has the internal capacity to lead and implement reforms. This includes building and maintaining the needed skills and competencies for OPM’s evolving role in assisting agencies. While OPM has taken steps through its planning efforts to assess its workforce needs, it can better prepare its workforce by reexamining its competencies in light of its updated strategic management framework in order to meet future demands. Agencies have raised concerns with OPM’s workforce capacity in general, and more specific concerns with OPM’s implementation of the senior executive performance appraisal system. These include the lack of clear and timely guidance, the need for more sharing of best practices, and the year-end time crunch agencies face gathering the required information for OPM to certify their systems. Further, OPM does not obtain formal feedback from agencies on the implementation of the executive systems to assist OPM in better understanding agency concerns and the difficulties they face with implementation. 
Although OPM recognizes the value of obtaining employees’ views on reform efforts, as it encouraged with past demonstration projects, it has not encouraged obtaining such feedback for the executive performance system. In addition, having an evaluation strategy to monitor agencies’ overall results of the senior executive performance system could help ensure accountability and provide transparency for Congress, other agencies, and stakeholders.

To better align OPM’s workforce skills and competencies for future human capital reform efforts, we recommend that the Director of OPM:

Reexamine OPM’s agencywide skills and competency assessment in light of its updated strategic management documents.

To assist executive branch agencies in meeting the requirements for the certification of their senior executive performance appraisal systems, we recommend that the Director of OPM:

Develop and publish a timeline for the issuance of certification guidance. This timeline should be developed with the input of the CHCO Council and provide agencies with adequate time to adjust to any changes in guidance.

Evaluate alternatives that could remedy the year-end time compression that agencies face when trying to meet OPM application requirements and avoid a gap in certification.

Work with the CHCO Council to develop a formal mechanism for sharing leading practices for implementing human capital initiatives, such as the senior executive performance appraisal certification and other performance management reform initiatives. This forum should include an adequate range of examples and best practices so as not to promote one-size-fits-all solutions.

Develop a formal feedback mechanism to obtain agencies’ views on OPM’s implementation of the certification process. OPM should utilize this feedback to identify common agency concerns and develop action plans to address these concerns. 
Work with executive branch agencies to develop a systematic approach for obtaining employee attitudes towards human capital reforms.

Develop a strategy to allow OPM, other executive agencies, and Congress to monitor the progress of implementation of the senior executive performance-based pay system.

We provided a draft of this report to the Director of OPM for review and comment. We received a written response from the Director, which is reprinted in appendix IV. The Director stated that OPM has made progress toward achieving its operational and strategic goals, but neither agreed nor disagreed with our recommendations. Director Springer provided a number of informative comments describing progress OPM has made towards achieving its planned goals, and initiatives undertaken to assist federal agencies with meeting their hiring demands of the future. Director Springer said OPM has made progress towards achieving its operational and strategic goals since she became Director of OPM. The Director provided information that while beyond the scope of the report, nonetheless is helpful in understanding the context in which OPM is operating. Specifically, she commented that OPM associates have worked together and with agencies to achieve the objectives that are tied to OPM’s Strategic and Operational Plan, 2006-2010, and since March 2006, OPM has achieved its plan’s objectives, on time or ahead of schedule. Also, OPM provided a number of technical comments and, where appropriate, we have made changes to the report language to reflect these comments. We are sending copies of this report to the Director of OPM, the Director of OMB, and other interested parties. Copies will be made available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-6806. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To identify lessons learned to inform the Office of Personnel Management’s (OPM) capacity to lead and implement human capital reform, we reviewed OPM’s implementation of the senior executive performance appraisal system certification process. We reviewed and analyzed key documents including the legislation that authorized the new senior executive performance-based pay system and the regulations for the appraisal system certification process that were jointly issued by OPM and the Office of Management and Budget (OMB). We also reviewed and analyzed the subsequent guidance issued by OPM to agencies to prepare their certification applications, policy memos from OPM to agencies, and other documentation related to the certification process. To gain an agency perspective of the certification process and to a limited degree on other performance management initiatives, such as the Performance Appraisal Assessment Tool (PAAT) and the performance management beta sites, we interviewed 22 of the 23 members of the Chief Human Capital Officers Council and/or their corresponding agency HR directors. The one agency that was not available for an interview provided us with written responses to our questions. In addition, we conducted interviews with OPM’s five associate directors and other senior-level staff, such as the Chief Financial Officer and Chief Human Capital Officer, to obtain their views of OPM’s management practices. We were briefed by the OPM Director and other OPM officials on the OPM Strategic and Operational Plan, 2006-2010 and aspects of OPM’s human capital strategies and initiatives. We also interviewed staff from OMB related to their role in the performance appraisal system certification process. 
To evaluate OPM’s workforce capacity, we interviewed OPM’s former and current Chief Human Capital Officers and analyzed the OPM Strategic and Operational Plan, 2006-2010. To understand how OPM’s workforce is aligned to support the implementation of potential reforms, we analyzed a number of internal OPM documents such as its August 2006 Corporate Leadership Succession Management Plan and A Plan for the Strategic Management of OPM’s Human Capital fiscal years 2004-2007. As the Plan for the Strategic Management of OPM’s Human Capital fiscal years 2006-2007 was issued at the conclusion of our review, we were not able to analyze this document. To evaluate OPM’s efforts to build agency infrastructure, we reviewed documents related to OPM’s PAAT and the performance management beta site initiatives. We selected these initiatives because of similarities to the certification process and their likelihood to yield tangible lessons related to OPM’s capacity to lead future reforms. To evaluate OPM’s feedback mechanisms, we reviewed survey questions included in the 2004 and 2006 Federal Human Capital Survey (FHCS). The 2006 survey was launched in June 2006 and results are not yet available. To assess OPM’s measures for tracking progress, we analyzed operational goals in the OPM Strategic and Operational Plan, 2006-2010. We also reviewed OPM’s measures of senior executive performance ratings and pay in its Report on Senior Executive Service Pay for Performance for Fiscal Year 2005. We leveraged our work that resulted in our June 2006 testimony on OPM’s internal capacity. We used the 2004 FHCS, the latest available survey data, and summaries of OPM’s 2005 focus groups to assess employee views of OPM’s organizational capacity. We reviewed OPM’s analysis of its 2004 FHCS results and conducted our own analyses of survey results using 2002 and 2004 FHCS data sets provided to us by OPM. 
On the basis of our examination of the data and discussions with OPM officials concerning survey design, administration, and processing, we determined that the data were sufficiently reliable for the purpose of our review. We analyzed summaries of OPM employee focus groups that OPM conducted in fall 2005 to understand factors contributing to employees’ responses on the 2004 FHCS. We used the participant comments from these focus groups to illustrate employee perspectives. We also analyzed the May 2006 action plans developed by OPM to address issues identified in the focus groups. Other documents reviewed included our previous work related to OPM, high-performing organizations, organizational transformation, and human capital management reforms. We also reviewed GAO’s previous recommendations on a range of issues related to OPM’s human capital leadership role and internal management challenges. We conducted our work from October 2005 to September 2006 in accordance with generally accepted government auditing standards. The new senior executive pay system raises the cap on base pay and total compensation. For 2006, the caps are $152,000 for base pay (Level III of the Executive Schedule) with a senior executive’s total compensation not to exceed $183,500 (Level I of the Executive Schedule). If an agency’s senior executive performance appraisal system is certified by the Office of Personnel Management (OPM) and the Office of Management and Budget (OMB) concurs, the caps are increased to $165,200 for base pay (Level II of the Executive Schedule) and $212,100 for total compensation (the total annual compensation payable to the Vice President). To qualify for senior executive pay flexibilities, agencies’ performance appraisal systems are evaluated against nine certification criteria. 
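The 2006 pay caps described above are consistent with the certification-gap differences cited earlier in this report ($13,200 for base pay and $28,600 for total compensation). A quick calculation confirms the arithmetic; this is an illustrative sketch only, with the dollar figures taken from the report and the variable names invented for clarity:

```python
# 2006 SES pay caps, in dollars, as stated in the report
# (variable names are illustrative, not official terminology)
uncertified_base_cap = 152_000   # Executive Schedule Level III
uncertified_total_cap = 183_500  # Executive Schedule Level I
certified_base_cap = 165_200     # Executive Schedule Level II
certified_total_cap = 212_100    # Vice President's total annual compensation

# Differences an agency forgoes while operating with an uncertified system
base_gap = certified_base_cap - uncertified_base_cap
total_gap = certified_total_cap - uncertified_total_cap

print(base_gap)   # 13200
print(total_gap)  # 28600
```

These differences match the figures the report cites for agencies caught in a certification gap at the start of a calendar year.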
As shown in table 2, the certification criteria jointly developed by OPM and OMB are broad principles that position agencies to use their pay systems strategically to support the development of a stronger performance culture and the attainment of the agency’s mission, goals, and objectives. There are two levels of performance appraisal system certification available to agencies: full and provisional. To receive full certification, the design of the systems must meet the nine certification criteria and agencies must provide documentation of prior performance ratings to demonstrate compliance with the criteria. Full certification lasts for 2 calendar years. Agencies can receive provisional certification if they have designed but not yet fully implemented a senior executive performance appraisal system, or do not have a history of performance ratings that meets the certification criteria. Provisional certification lasts for 1 calendar year. OPM’s role in the certification process begins when an agency submits a certification application to OPM. If fully certified, the certification is good for the remainder of the calendar year in which the agency applied, as well as all of the following calendar year. If provisionally certified, an agency’s certification is only good for the calendar year in which it applied. For example, if an agency is provisionally certified in October 2005, its certification would expire in December 2005. To ensure the agency’s submission is complete, the agency’s OPM contact—the Human Capital Officer (HCO)—first verifies that the application contains the required materials and documents. If complete, the HCO sends copies to the two OPM divisions responsible for reviewing the application, the Human Capital Leadership and Merit System Accountability (HCLMSA) division and the Strategic Human Resources Policy (SHRP) division, and an additional copy to OMB. 
The agency’s submission is reviewed independently by representatives within HCLMSA and SHRP to bring different perspectives to the review. The submissions are evaluated against the nine certification criteria, but each review team has its own method for analyzing the application. After an initial review, the reviewers from HCLMSA and SHRP hold an informal meeting to discuss the submission. After a more thorough review, the reviewers meet again in a formal panel along with the agency’s HCO and decide whether they have enough information to reach a certification decision about the agency. If the panel concludes there is not enough information to reach a decision, the HCO will request that the agency provide any missing or additional supporting information. If the panel decides there is sufficient information to reach a decision, it will either certify or reject the application. When an application is rejected, the HCO works with the agency to help modify its performance appraisal system so that it meets the criteria. If the application is approved by OPM, the HCO contacts OMB for concurrence. OMB uses the same nine criteria to evaluate agency applications, but primarily focuses on measures of agency performance. If OMB concurrence is not achieved, the HCO works with the agency to address OMB’s concerns until resolution is reached. Once OMB concurs, the Director of OPM certifies the agency’s performance appraisal system and the agency is formally notified with a letter. The HCO also provides additional comments to the agency on its system and identifies any improvement needs. For example, these comments may direct the agency to focus more on making meaningful distinctions in performance. Figure 1 provides an overview of the certification process. 
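The review-and-concurrence flow described above can be loosely sketched as a decision procedure. This is only an illustration of the logic laid out in the text, not an OPM system: the function names and boolean inputs are hypothetical, and the sketch collapses the iterative back-and-forth between the HCO, the agency, and OMB into a single pass.

```python
# Illustrative sketch of the certification flow described above.
# All names are hypothetical; OPM's actual process is a manual review.

FULL, PROVISIONAL, REJECTED = "full", "provisional", "rejected"

def review_application(meets_nine_criteria, has_rating_history, omb_concurs):
    """Return the certification outcome for an agency submission.

    meets_nine_criteria: system design satisfies all nine joint OPM/OMB criteria
    has_rating_history: agency documented prior ratings demonstrating compliance
    omb_concurs: OMB concurs with OPM's favorable review
    """
    if not meets_nine_criteria:
        return REJECTED  # HCO works with the agency to modify its system
    if not omb_concurs:
        return REJECTED  # HCO works with the agency to address OMB's concerns
    # Full certification requires a documented ratings history;
    # otherwise the agency receives 1-year provisional certification.
    return FULL if has_rating_history else PROVISIONAL

def expiration_year(level, application_year):
    """Full certification covers the application year plus the following
    calendar year; provisional covers only the application year."""
    return application_year + 1 if level == FULL else application_year

# Example from the text: provisional certification granted in October 2005
# expires at the end of calendar year 2005.
level = review_application(True, False, True)
assert level == PROVISIONAL
assert expiration_year(level, 2005) == 2005
```

The two expiration rules mirror the text's example: a fully certified agency applying in October 2005 would be covered through December 2006, while a provisionally certified agency's coverage would lapse in December 2005.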
Appendix III: Agency Certification Status for Calendar Years 2004, 2005, and 2006 as of October 2006. [Table: for each agency and executive type (SES or SL/ST), certification status is shown as full (F), with the 2 calendar years covered (e.g., F (2004/2005)), or 1-year provisional (P); a blank entry indicates the agency did not submit an appraisal system application, submitted an application but was not approved, or withdrew an application for OPM’s review. Agencies listed include the Department of Health and Human Services, the Department of Housing and Urban Development (HUD), and the National Aeronautics and Space Administration (NASA).] Brenda S. Farrell, (202) 512-6806 or farrellb@gao.gov. In addition to the contact named above, Trina Lewis, Assistant Director; Thomas Beall; Carole J. Cimitile; William Colvin; Elizabeth Curda; S. Mike Davis; William Doherty; Charlene Johnson; Jeffrey McDermott; Michael Volpe; Katherine H. Walker; and Gregory H. Wilmoth made major contributions to this report.
As the agency responsible for the federal government's human capital initiatives, the Office of Personnel Management (OPM) must have the capacity to successfully guide human capital transformations necessary to meet the governance challenges of the 21st century. Given this key role, GAO was asked to assess OPM's capacity to lead further reforms. In June 2006, GAO testified on several management challenges that OPM faces. This report, the second in a series, supplements that testimony and, using the new senior executive performance-based pay system as a model for understanding OPM's capacity to lead and implement reform, identifies lessons learned that can inform future reforms. GAO analyzed relevant laws and documents, and obtained views from the Chief Human Capital Officers (CHCO) Council and human resource directors, the Office of Management and Budget (OMB) staff, and OPM officials. The congressionally authorized senior executive performance-based pay system, implemented in 2004, provides an opportunity to learn from experiences gained and apply those lessons to the design and implementation of future human capital reforms. Under the performance-based system, before an agency can receive the new pay flexibilities, OPM, with concurrence from OMB, must certify that the agency's appraisal system meets certain criteria. OPM is likely to play a similar leadership and oversight role for future reforms.
In general, SCHIP funds are targeted to uninsured children in families whose incomes are too high to qualify for Medicaid but are at or below 200 percent of FPL. Recognizing the variability in state Medicaid programs, federal SCHIP law allows a state to cover children in families with incomes up to 200 percent of FPL or 50 percentage points above its existing Medicaid eligibility standard as of March 31, 1997. States have additional latitude in setting eligibility levels, however, because both Medicaid and SCHIP allow flexibility in how a state defines income for purposes of eligibility determinations. Congress appropriated approximately $40 billion over 10 years (from fiscal years 1998 through 2007) for distribution among states with approved SCHIP plans. Allocations to states are based on a formula that takes into account the number of low-income children in a state. In general, states that choose to expand Medicaid to enroll eligible children under SCHIP must follow Medicaid rules, while separate child health programs have additional flexibilities in benefits, cost-sharing, and other program elements. Under certain circumstances, states may also cover adults under SCHIP. SCHIP allotments to states are based on an allocation formula that uses (1) the number of children, which is expressed as a combination of two estimates—the number of low-income children without health insurance and the number of all low-income children, and (2) a factor representing state variation in health care costs. Under federal SCHIP law and subject to certain exceptions, states have 3 years to use each fiscal year’s allocation, after which any remaining funds are redistributed among the states that had used all of that fiscal year’s allocation. Federal law does not specify a redistribution formula but leaves it to the Secretary of Health and Human Services (HHS) to determine an appropriate procedure for redistribution of unused allocations. 
Absent congressional action, states are generally provided 1 year to spend any redistributed funds, after which time funds may revert to the U.S. Treasury. Each state’s SCHIP allotment is available as a federal match based on state expenditures. SCHIP offers a strong incentive for states to participate by providing an enhanced federal matching rate that is based on the federal matching rate for a state’s Medicaid program—for example, the federal government will reimburse at a 65 percent match under SCHIP for a state receiving a 50 percent match under Medicaid. There are different formulas for allocating funds to states, depending on the fiscal year. For fiscal years 1998 and 1999, the formula used estimates of the number of low-income uninsured children to allocate funds to states. For fiscal year 2000, the formula changed to include estimates of the total number of low-income children as well. SCHIP gives the states the choice of three design approaches: (1) a Medicaid expansion program, (2) a separate child health program with more flexible rules and increased financial control over expenditures, or (3) a combination program, which has both a Medicaid expansion program and a separate child health program. Initially, states had until September 30, 1998, to select a design approach, submit their SCHIP plans, and obtain HHS approval in order to qualify for their fiscal year 1998 allotment. With an approved state child health plan, a state could begin to enroll children and draw down its SCHIP funds. The design approach a state chooses has important financial and programmatic consequences, as shown below. Expenditures. In separate child health programs, federal matching funds cease after a state expends its allotment, and non-benefit-related expenses (for administration, direct services, and outreach) are limited to 10 percent of claims for services delivered to beneficiaries. 
In contrast, Medicaid expansion programs may continue to receive federal funds for benefits and for non-benefit-related expenses at the Medicaid matching rate after states exhaust their SCHIP allotments. Enrollment. Separate child health programs may establish separate eligibility rules and establish enrollment caps. In addition, a separate child health program may limit its own annual contribution, create waiting lists, or stop enrollment once the funds it budgeted for SCHIP are exhausted. A Medicaid expansion must follow Medicaid eligibility rules regarding income, residency, and disability status, and thus generally cannot limit enrollment. Benefits. Separate child health programs must use, for example, benchmark benefit standards that use specified private or public insurance plans as the basis for coverage. However, Medicaid—and therefore a Medicaid expansion—must provide coverage of all benefits available to the Medicaid population, including certain services for children. In particular, the Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit requires states to cover treatment for conditions diagnosed during routine screenings—regardless of whether the benefit would otherwise be covered under the state’s Medicaid program. A separate child health program does not require EPSDT coverage. Beneficiary cost-sharing. Separate child health programs may impose limited cost-sharing—through premiums, copayments, or enrollment fees—for children in families with incomes above 150 percent of FPL up to 5 percent of family income annually. Since the Medicaid program did not previously allow cost-sharing for children, a Medicaid expansion program under SCHIP would have followed this rule. In general, states may cover adults under the SCHIP program under two key approaches. 
First, federal SCHIP law allows the purchase of coverage for adults in families with children eligible for SCHIP under a waiver if a state can show that it is cost-effective to do so and demonstrates that such coverage does not result in “crowd-out”—a phenomenon in which new public programs or expansions of existing public programs designed to extend coverage to the uninsured prompt some privately insured persons to drop their private coverage and take advantage of the expanded public subsidy. The cost-effectiveness test requires the states to demonstrate that covering both adults and children in a family under SCHIP is no more expensive than covering only the children. The states may also elect to cover children whose parents have access to employer-based or private health insurance coverage by using SCHIP funding to subsidize the cost. Second, under section 1115 of the Social Security Act, states may receive approval to waive certain Medicaid or SCHIP requirements or authorize Medicaid or SCHIP expenditures. The Secretary of Health and Human Services may approve waivers of statutory requirements or authorize expenditures in the case of experimental, pilot, or demonstration projects that are likely to promote program objectives. In August 2001, HHS indicated that it would allow states greater latitude in using section 1115 demonstration projects (or waivers) to modify their Medicaid and SCHIP programs and that it would expedite consideration of state proposals. One initiative, the Health Insurance Flexibility and Accountability Initiative (HIFA), focuses on proposals for covering more uninsured people while at the same time not raising program costs. States have received approval of section 1115 waivers that provide coverage of adults using SCHIP funding. SCHIP enrollment increased rapidly over the first years of the program, and has stabilized for the past several years. 
In 2005, the most recent year for which data are available, 4.0 million individuals were enrolled during the month of June, while the total enrollment count—which represents a cumulative count of individuals enrolled at any time during fiscal year 2005—was 6.1 million. Of these 6.1 million enrollees, 639,000 were adults. Because SCHIP requires that applicants are first screened for Medicaid eligibility, some states have experienced increases in their Medicaid programs as well, further contributing to public health insurance coverage of low-income children during this same period. Based on a 3-year average of 2003 through 2005 Current Population Survey (CPS) data, the percentage of uninsured children varied considerably by state, with a national average of 11.7 percent. SCHIP annual enrollment grew quickly from program inception through 2002 and then stabilized at about 4 million from 2003 through 2005, on the basis of a point-in-time enrollment count. Total enrollment, which counts individuals enrolled at any time during a particular fiscal year, showed a similar pattern of growth and was over 6 million as of June 2005 (see fig. 1). Generally, point-in-time enrollment is a subset of total enrollment, as it represents the number of individuals enrolled during a particular month. In contrast, total enrollment includes an unduplicated count of any individual enrolled at any time during the fiscal year; thus the data are cumulative, with new enrollments occurring monthly. Our prior work has shown that certain obstacles can prevent low-income families from enrolling their children into public programs such as Medicaid or SCHIP. Primary obstacles included families’ lack of knowledge about program availability and, even when children were eligible to participate, complex eligibility rules and documentation requirements that complicated the application process. 
During the early years of SCHIP program operation, we found that many states developed and deployed outreach strategies in an effort to overcome these enrollment barriers. Many states adopted innovative outreach strategies and simplified and streamlined their enrollment processes in order to reach as many eligible children as possible. Examples follow. States launched ambitious public education campaigns that included multimedia campaigns, direct mailings, and the widespread distribution of applications. To overcome the barrier of a long, complicated SCHIP eligibility determination process, states reduced verification and documentation requirements that exceeded federal requirements, shortened the length of applications, and used joint SCHIP-Medicaid applications. States also located eligibility workers in places other than welfare offices—schools, child care centers, churches, local tribal organizations, and Social Security offices—to help families with the initial processing of applications. States eased the process by which applicants reapplied for SCHIP at the end of their coverage period. For example, one state mailed families a summary of the information on their last application, and asked families to update any changes to the information. Because states must also screen for Medicaid eligibility before enrolling children into SCHIP, some states have noted increased enrollment in Medicaid as a result of SCHIP. For example, Alabama reported a net increase of approximately 121,000 children in Medicaid since its SCHIP program began in 1998. New York reported that, for fiscal year 2005, approximately 204,000 children were enrolled in Medicaid as a result of outreach activities, compared with 618,973 children enrolled in SCHIP. In contrast, not all states found that their Medicaid enrollment was significantly affected by SCHIP. 
For example, Idaho reported that a negligible number of children were found eligible for Medicaid as a result of outreach related to its SCHIP program. Maryland identified an increase of 0.2 percent between June 2004 and June 2005. Based on a 3-year average of 2003 through 2005 CPS data, the percentage of uninsured children varied considerably by state and had a national average of 11.7 percent. The percentage of uninsured children ranged from 5.6 percent in Vermont to 20.4 percent in Texas (see fig. 2). According to the Congressional Research Service (CRS) analysis of 2005 CPS data, the percentage of uninsured children was higher in the southern (13.7 percent) and western (13.8 percent) regions of the United States compared with children living in northeastern (8.5 percent) and midwestern (8.2 percent) regions. Nearly 40 percent of the nation’s uninsured children lived in three of the most populous states—California, Florida, and Texas—each of which had percentages of uninsured children above the national average. Variations across states in rates of uninsured children may be linked to a number of factors, including the availability of employer-sponsored coverage. We have previously reported that certain types of workers were less likely to have had access to employer-sponsored insurance and thus were more likely to be uninsured. In particular, those working part- time, for small firms, or in certain industries such as agriculture or construction, were among the most likely to be uninsured. Additionally, states with high uninsured rates and those with low rates often were distinct with regard to several characteristics. For example, states with higher than average uninsured rates tended to have higher unemployment and proportionally fewer employers offering coverage to their workers. Small employers—those with fewer than 10 employees—were much less likely to offer health insurance to their employees than larger employers. 
States’ SCHIP programs reflect the flexibility allowed in structuring approaches to providing health care coverage, including their choice among three program designs—Medicaid expansions, separate child health programs, and combination programs, which have both a Medicaid expansion and a separate child health program component. As of fiscal year 2005, 41 state SCHIP programs covered children in families whose incomes are up to 200 percent of FPL or higher, with 7 of the 41 states covering children in families whose incomes are at 300 percent of FPL or higher. States generally imposed some type of cost-sharing in their programs, with 39 states charging some combination of premiums, copayments, or enrollment fees, compared with 11 states that did not charge cost-sharing. Nine states reported operating premium assistance programs that use SCHIP funding to subsidize the cost of premiums for private health insurance coverage. As of February 2007, we identified 14 states with approved section 1115 waivers to cover adults, including parents and caretaker relatives, pregnant women, and, in some cases, childless adults. As of July 2006, of the 50 states currently operating SCHIP programs, 11 states had Medicaid expansion programs, 18 states had separate child health programs, and 21 states had a combination of both approaches (see fig. 3). When the states initially designed their SCHIP programs, 27 states opted for expansions to their Medicaid programs. Many of these initial Medicaid expansion programs served as “placeholders” for the state—that is, minimal expansions in Medicaid eligibility were used to guarantee the 1998 fiscal year SCHIP allocation while allowing time for the state to plan a separate child health program. 
Other initial Medicaid expansions—whether placeholders or part of a combination program—also accelerated the expansion of coverage for children aged 14 to 18 up to 100 percent of FPL, which states are already required to cover under federal Medicaid law. A state’s starting point for SCHIP eligibility depends on the eligibility levels previously established in its Medicaid program. Under federal Medicaid law, all state Medicaid programs must cover children aged 5 and under if their family incomes are at or below 133 percent of FPL and children aged 6 through 18 if their family incomes are at or below 100 percent of FPL. Some states have chosen to cover children in families with higher income levels in their Medicaid programs. Each state’s starting point essentially creates a “corridor”—generally, SCHIP coverage begins where Medicaid ends and then continues upward, depending on each state’s eligibility policy. In fiscal year 2005, 41 states used SCHIP funding to cover children in families with incomes up to 200 percent of FPL or higher, including 7 states that covered children in families with incomes up to 300 percent of FPL or higher. In total, 27 states provided SCHIP coverage for children in families with incomes up to 200 percent of FPL, which was $38,700 for a family of four in 2005. Another 14 states covered children in families with incomes above 200 percent of FPL, with New Jersey reaching as high as 350 percent of FPL in its separate child health program. Finally, 9 states set SCHIP eligibility levels for children in families with incomes below 200 percent of FPL. For example, North Dakota covered children in its separate child health program up to 140 percent of FPL. (See fig. 4.) (See app. I for the SCHIP upper income eligibility levels by state, as a percentage of FPL.) Under federal SCHIP law, states with separate child health programs have the option of using different bases for establishing their benefit packages. 
Separate child health programs can choose to base their benefit packages on (1) one of several benchmarks specified in federal SCHIP law, such as the Federal Employees Health Benefits Program (FEHBP) or state employee coverage; (2) a benchmark-equivalent set of services, as defined under federal law; (3) coverage equivalent to state-funded child health programs in Florida, New York, or Pennsylvania; or (4) a benefit package approved by the Secretary of Health and Human Services (see table 1). In some cases, separate child health programs have changed their benefit packages, adding and removing benefits over time, as follows. In 2003, Texas discontinued dental services, hospice services, skilled nursing facilities coverage, tobacco cessation programs, vision services, and chiropractic services. In 2005, the state added many of these services (chiropractic services, hospice services, skilled nursing facilities, tobacco cessation services, and vision care) back into the SCHIP benefit package and increased coverage of mental health and substance abuse services. In January 2002, Utah changed its benefit structure for dental services, reducing coverage for preventive (cleanings, examinations, and x-rays) and emergency dental services in order to cover as many children as possible with limited funding. In September 2002, the dental benefit package was further restructured to include dental coverage for accidents, as well as fluoride treatments and sealants. In 2005, most states’ SCHIP programs required families to contribute to the cost of care with some kind of cost-sharing requirement. The two major types of cost-sharing—premiums and copayments—can have different behavioral effects on an individual’s participation in a health plan. Generally, premiums are seen as restricting entry into a program, whereas copayments affect the use of services within the program. 
There is research indicating that if cost-sharing is too high, or imposed on families whose income is too low, it can impede access to care and create financial burdens for families. In 2005, states’ annual SCHIP reports showed that 39 states had some type of cost-sharing—premiums, copayments, or enrollment fees—while 11 states reported no cost-sharing in their SCHIP programs. Overall, 16 states charged premiums and copayments, 14 states charged premiums only, and 9 states charged copayments only (see fig. 5). Cost-sharing occurred more frequently in the separate child health programs than in Medicaid expansion programs. For example, 8 states with Medicaid expansion programs had cost-sharing requirements, compared with 34 states operating separate child health program components. The amount of premiums charged varied considerably among the states that charged cost-sharing. For example, premiums ranged from $5.00 per family per month for children in families with incomes from 150 to 200 percent of FPL in Michigan to $117 per family per month for children in families with incomes from 300 to 350 percent of FPL in New Jersey. Federal SCHIP law prohibits states from imposing cost-sharing on SCHIP-eligible children that totals more than 5 percent of family income annually. In addition, the amount of cost-sharing imposed for children may vary on the basis of family income. For example, we earlier reported that in 2003, Virginia SCHIP copayments for children in families with incomes from 133 percent to below 150 percent of FPL were $2 per physician visit or per prescription and $5 for services for children in families with higher incomes. In fiscal year 2005, nine states reported operating premium assistance programs (see table 2), but implementation remains a challenge. Enrollment in these programs varied across the states. For example, Louisiana reported having under 200 enrollees and Oregon reported having nearly 6,000 enrollees. 
To be eligible for SCHIP, a child must not be covered under any other health coverage program or have private health insurance. However, some uninsured children may live in families with access to employer-sponsored health insurance coverage. Therefore, states may choose to establish premium assistance programs, where the state uses SCHIP funds to contribute to health insurance premium payments. To the extent that such coverage is not equivalent to the states’ Medicaid or SCHIP level of benefits, including limited cost-sharing, states are required to pay for supplemental benefits and cost-sharing to make up this difference. Under certain section 1115 waivers, however, states have not been required to provide this supplemental coverage to participants. Several states reported facing challenges implementing their premium assistance programs. Louisiana, Massachusetts, New Jersey, and Virginia cited administration of the program as labor intensive. For example, Massachusetts noted that it is a challenge to maintain current information on program participants’ employment status, choice of health plan, and employer contributions, but such information is needed to ensure accurate premium payments. Two states—Rhode Island and Wisconsin—noted the challenges of operating premium assistance programs, given changes in employer-sponsored health plans and accompanying costs. For example, Rhode Island indicated that increases in premiums are being passed to employees, which makes it more difficult to meet cost-effectiveness tests applicable to the purchase of family coverage. States opting to cover adult populations using SCHIP funding may do so under an approved section 1115 waiver. As of February 2007, we identified 14 states with approved waivers to cover at least one of three categories of adults: parents of eligible Medicaid and SCHIP children, pregnant women, and childless adults. (See table 3.) 
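The cost-effectiveness test that premium assistance programs must satisfy (described earlier for family coverage waivers) can be sketched as a simple comparison: the state's premium subsidy plus any required supplemental benefits and cost-sharing wraparound must cost the program no more than covering the eligible children directly under SCHIP. The function name and all dollar figures below are hypothetical illustrations, not actual state data.

```python
# Hedged sketch of the family-coverage cost-effectiveness test described
# in the text. Figures are hypothetical.

def is_cost_effective(family_premium_subsidy, wraparound_cost,
                      children_only_cost):
    """True if subsidizing family coverage (premium subsidy plus any
    supplemental benefits/cost-sharing wraparound) costs the program no
    more than direct SCHIP coverage of the eligible children alone."""
    return family_premium_subsidy + wraparound_cost <= children_only_cost

# Hypothetical state: direct children-only coverage costs $2,400 per year.
# An employer-plan subsidy of $1,800 plus $400 of wraparound passes.
assert is_cost_effective(1800, 400, 2400)

# If rising employer premiums push the required subsidy to $2,200 (the kind
# of pressure Rhode Island reported), the test fails.
assert not is_cost_effective(2200, 400, 2400)
```

This illustrates why the states quoted above found the test harder to meet over time: as employers pass premium increases to employees, the subsidy needed to keep family coverage affordable grows while the children-only benchmark does not grow as fast.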
The Deficit Reduction Act of 2005 (DRA), however, has prohibited the use of SCHIP funds to cover nonpregnant childless adults. Effective October 1, 2005, the Secretary of Health and Human Services may not approve new section 1115 waivers that use SCHIP funds for covering nonpregnant childless adults. However, waivers for covering these adults that were approved prior to this date are allowed to continue until the end of the waiver. Additionally, the Secretary may continue to approve section 1115 waivers that extend SCHIP coverage to pregnant adults, as well as parents and other caretaker relatives of children eligible for Medicaid or SCHIP. SCHIP program spending was low initially, as many states did not implement their programs or report expenditures until 1999 or later, but spending was much higher in the program’s later years and now threatens to exceed available funding. Beginning in fiscal year 2002, states together spent more federal dollars than they were allotted for the year and thus relied on the 3-year availability of SCHIP allotments or on redistributed SCHIP funds to cover additional expenditures. But as spending has grown, the pool of funds available for redistribution has shrunk. Some states consistently spent more than their allotted funds, while other states consistently spent less. Overall, 18 states were projected to have shortfalls—that is, they were expected to exhaust available funds, including current and prior-year allotments—in at least 1 year from 2005 through 2007. To cover projected shortfalls that several states faced, Congress appropriated an additional $283 million for fiscal year 2006. As of January 2007, 14 states are projected to exhaust their allotments in fiscal year 2007. SCHIP program spending began low, but by fiscal year 2002, states’ aggregate annual spending from their federal allotments exceeded their annual allotments. 
Spending was low in the program’s first 2 years because many states did not implement their programs or report expenditures until fiscal year 1999 or later. Combined federal and state spending was $180 million in 1998 and $1.3 billion in 1999. However, by the end of the program’s third fiscal year (2000), all 50 states and the District of Columbia had implemented their programs and were drawing down their federal allotments. Since fiscal year 2002, SCHIP spending has grown by an average of about 10 percent per year. (See fig. 6.) From fiscal year 1998 through 2001, annual federal SCHIP expenditures were well below annual allotments, ranging from 3 percent of allotments in fiscal year 1998 to 63 percent in fiscal year 2001. In fiscal year 2002, the states together spent more federal dollars than they were allotted for the year, in part because total allotments dropped from $4.25 billion in fiscal year 2001 to $3.12 billion in fiscal year 2002, marking the beginning of the so-called “SCHIP dip.” However, even after annual SCHIP appropriations increased in fiscal year 2005, expenditures continued to exceed allotments (see fig. 7). Generally, states were able to draw on unused funds from prior years’ allotments to cover expenditures incurred in a given year that were in excess of their allotment for that year, because, as discussed earlier, the federal SCHIP law gave states 3 years to spend each annual allotment. In certain circumstances, states also retained a portion of unused allotments. States that have outspent their annual allotments over the 3-year period of availability have also relied on redistributed SCHIP funds to cover excess expenditures. But as overall spending has grown, the pool of funds available for redistribution has shrunk from a high of $2.82 billion in unused funds from fiscal year 1999 to $0.17 billion in unused funds from fiscal year 2003. 
Meanwhile, the number of states eligible for redistributions has grown from 12 states in fiscal year 2001 to 40 states in fiscal year 2006. (See fig. 8.) Congress has acted on several occasions to change the way SCHIP funds are redistributed. In fiscal years 2000 and 2003, Congress amended statutory provisions for the redistribution and availability of unused SCHIP allotments from fiscal years 1998 through 2001, reducing the amounts available for redistribution and allowing states that had not exhausted their allotments by the end of the 3-year period of availability to retain some of these funds for additional years. Despite these steps, $1.4 billion in unused SCHIP funds reverted to the U.S. Treasury by the end of fiscal year 2005. Congress has also appropriated additional funds to cover states’ projected SCHIP program shortfalls. The DRA included a $283 million appropriation to cover projected shortfalls for fiscal year 2006. CMS divided these funds among 12 states as well as the territories. In the beginning of fiscal year 2007, Congress acted to redistribute unused SCHIP allotments from fiscal year 2004 to states projected to face shortfalls in fiscal year 2007. The National Institutes of Health Reform Act of 2006 makes these funds available to states in the order in which they experience shortfalls. In January 2007, CRS projected that although 14 states would face shortfalls, the $147 million in unused fiscal year 2004 allotments would be redistributed to the five states expected to experience shortfalls first. The NIH Reform Act also created a redistribution pool of funds by redirecting fiscal year 2005 allotments from states that at midyear (March 31, 2007) have more than twice the SCHIP funds they are projected to need for the year.

Some states consistently spent more than their allotted funds, while other states consistently spent less.
From fiscal years 2001 through 2006, 40 states spent their entire allotments at least once, thereby qualifying for redistributions of other states’ unused allotments; 11 states spent their entire allotments in at least 5 of the 6 years that funds were redistributed. Moreover, 18 states were projected to face shortfalls—that is, they were expected to exhaust available funds, including current and prior-year allotments—in at least 1 of the final 3 years of the program. (See fig. 9.) As of January 2007, 14 states were projected to exhaust their allotments in fiscal year 2007. When we compared the 18 states that were projected to have shortfalls with the 32 states that were not, we found that the shortfall states were more likely to have a Medicaid component to their SCHIP program, to have a SCHIP eligibility corridor broader than the median, and to cover adults in SCHIP under section 1115 waivers (see table 4). It is unclear, however, to what extent these characteristics contributed to states’ overall spending experiences with the program, as many other factors have also affected states’ program balances, including prior coverage of children under Medicaid as well as SCHIP eligibility criteria, benefit packages, enrollment policies, outreach efforts, and payment rates to providers. In addition, we and others have noted that the formula for allocating funds to states has flaws that led to underestimates of the number of eligible children in some states and thus underfunding.

Fifteen of the 18 shortfall states (83 percent) had Medicaid expansion programs or combination programs that included Medicaid expansions, which generally follow Medicaid rules, such as providing the full Medicaid benefit package and continuing to provide coverage to all eligible individuals even after the states’ SCHIP allotments are exhausted.
The shortfall states tended to have a broader eligibility corridor in their SCHIP programs, indicating that, on average, the shortfall states covered children in SCHIP from lower income levels, from higher income levels, or both. For example, 33 percent of the shortfall states covered children in their SCHIP programs above 200 percent of FPL, compared with 25 percent of the nonshortfall states. Finally, 6 of the 18 shortfall states (33 percent) were covering adults in SCHIP under section 1115 waivers by the end of fiscal year 2006, compared with 6 of the 32 nonshortfall states (19 percent). On average, the shortfall states that covered adults began covering them earlier than nonshortfall states and enrolled a higher proportion of adults. At the end of fiscal year 2006, 12 states covered adults under section 1115 waivers using SCHIP funds. Five of these 12 states began covering adults before fiscal year 2003, and all 5 states faced shortfalls in at least 1 of the final 3 years of the program. In contrast, none of the 4 states that began covering adults with SCHIP funds in the period from fiscal year 2004 through 2006 faced shortfalls. On average, the shortfall states covered adults more than twice as long as nonshortfall states (5.1 years compared with 2.3 years by the end of fiscal year 2006). Shortfall states also enrolled a higher proportion of adults. Nine states, including six shortfall states, covered adults using SCHIP funds throughout fiscal year 2005. In these nine states, adults accounted for an average of 45 percent of total enrollment. However, in the shortfall states, the average proportion was more than twice as high as in nonshortfall states. Adults accounted for an average of 55 percent of enrollees in the shortfall states, compared with 24 percent in the nonshortfall states. (See table 5.) 
While analyses of states as a group reveal some broad characteristics of states’ programs, examining the experiences of individual states offers insights into other factors that have influenced states’ program balances. States themselves have offered a variety of reasons for shortfalls and surpluses. These examples, while not exhaustive, highlight additional factors that have shaped states’ financial circumstances under SCHIP.

Inaccuracies in the CPS-based estimates on which states’ allotments were based. North Carolina, a shortfall state, offers a case in point. In 2004, the state had more low-income children enrolled in the program than CPS estimates indicated were eligible. To curb spending, North Carolina shifted children through age 5 from the state’s separate child health program to a Medicaid expansion, reduced provider payments, and limited enrollment growth.

Annual funding levels that did not reflect enrollment growth. Iowa, another shortfall state, noted that annual allocations provided too much funding in the early years of the program and too little in the later years. Iowa did not use all its allocations in the first 4 years, and thus the state’s funds were redistributed to other states. Subsequently, however, the state has faced shortfalls as its program matured.

Impact of policies designed to curb or expand program growth. Some states have attempted to manage program growth through ongoing adjustments to program parameters and outreach efforts. For example, when Florida’s enrollment exceeded a predetermined target in 2003, the state implemented a waiting list and eliminated outreach funding. When enrollment began to decline, the state reinstituted open enrollment and outreach. Similarly, Texas, commensurate with its budget constraints and projected surpluses, has tightened and loosened eligibility requirements and limited and expanded benefits over time in order to manage enrollment and spending.
Children without health insurance are at increased risk of forgoing routine medical and dental care, immunizations, treatment for injuries, and treatment for chronic illnesses. Yet the states and the federal government face challenges in their efforts to continue to finance health care coverage for children. As health care consumes a growing share of state general fund or operating budgets, slowdowns in economic growth can affect states’ abilities—and efforts—to address the demand for public financing of health services. Moreover, without substantive programmatic or revenue changes, the federal government faces near- and long-term fiscal challenges as the U.S. population ages because spending for retirement and health care programs will grow dramatically. Given these circumstances, we would like to suggest several issues for consideration as Congress addresses the reauthorization of SCHIP. These include the following:

Maintaining flexibility without compromising the goals of SCHIP. The federal-state SCHIP partnership has provided an important opportunity for innovation on the part of states for the overall benefit of children’s health. Providing three design choices for states—Medicaid expansions, separate child health programs, or a combination of both approaches—affords them the opportunity to focus on their own unique and specific priorities. For example, expansions of Medicaid offer Medicaid’s comprehensive benefits and administrative structures and ensure children’s coverage if states exhaust their SCHIP allotments. However, this entitlement status also increases financial risk to states. In contrast, SCHIP separate child health programs offer a “block grant” approach to covering children. As long as the states meet statutory requirements, they have the flexibility to structure coverage on an employer-based health plan model and can better control program spending than they can with a Medicaid expansion.
However, flexibility within the SCHIP program, such as that available through section 1115 waivers, may also result in consequences that can run counter to SCHIP’s goal—covering children. For example, we identified 15 states that have authority to cover adults with their federal SCHIP funds, with several states covering more adults than children. States’ rationale is that covering low-income parents in public programs such as SCHIP and Medicaid increases the enrollment of eligible children as well, with the result that fewer children go uninsured. Federal SCHIP law provides that families may be covered only if such coverage is cost-effective; that is, covering families costs no more than covering the SCHIP-eligible children. We earlier reported that HHS had approved state proposals for section 1115 waivers to use SCHIP funds to cover parents of SCHIP- and Medicaid-eligible children without regard to cost-effectiveness. We also reported that HHS approved state proposals for section 1115 waivers to use SCHIP funds to cover childless adults, which in our view was inconsistent with federal SCHIP law and allowed SCHIP funds to be diverted from the needs of low-income children. We suggested that Congress consider amending the SCHIP statute to specify that SCHIP funds were not available to provide health insurance coverage for childless adults. Under the DRA, Congress prohibited the Secretary of Health and Human Services from approving any new section 1115 waivers to cover nonpregnant childless adults after October 1, 2005, but allowed waivers approved prior to that date to continue. It is important to consider the implications of states’ use of allowable flexibility for other aspects of their programs. For example, what assurances exist that SCHIP funds are being spent in the most cost-effective manner, as required under federal law? In view of current federal fiscal constraints, to what extent should SCHIP funds be available for adult coverage?
How has states’ use of available flexibility to establish expanded financial eligibility categories and covered populations affected their ability to operate their SCHIP programs within the original allotments provided to them?

Considering the federal financing strategy, including the financial sustainability of public commitments. As SCHIP programs have matured, states’ spending experience can help inform future federal financing decisions. CRS testified in July 2006 that 40 states were now spending more annually than they received in their original annual SCHIP allotments. While many of them did not face shortfalls in 2006 because of available prior-year balances, redistributed funds, and the supplemental DRA appropriation, 14 states are currently projected to face shortfalls in 2007. With the pool of funds available for redistribution virtually exhausted, the continued potential for funding shortfalls for many states raises some fundamental questions about SCHIP financing. If SCHIP is indeed a capped grant program, to what extent does the federal government have a responsibility to address shortfalls in individual states, especially those that have chosen to expand their programs beyond certain parameters? In contrast, if the policy goal is to ensure that states do not exhaust their federal SCHIP allotments, by providing for the continuing redistribution of funds or additional federal appropriations, does the program begin to take on the characteristics of an entitlement similar to Medicaid? What overall implications does this have for the federal budget?

Assessing issues associated with equity. The 10 years of SCHIP experience that states now have could help inform any policy decisions with respect to equity as part of the SCHIP reauthorization process.
Although SCHIP generally targets children in families with incomes at or below 200 percent of FPL, 9 states are relatively more restrictive with their eligibility levels, while 14 states are more expansive, ranging as high as 350 percent of FPL. Given the policy goal of reducing the rate of uninsured among the nation’s children, to what extent should SCHIP funds be targeted to those states that have not yet achieved certain minimum coverage levels? Given current and future federal fiscal constraints, to what extent should the federal government provide federal financial participation above certain thresholds? What broader implications might this have for flexibility, choice, and equity across state programs? Another consideration is whether the formulas used in SCHIP—both the formula to determine the federal matching rate and the formula to allocate funds to states—could be refined to better target funding to certain states for the benefit of covering uninsured children. Because the SCHIP formula is based on the Medicaid formula for federal matching funds, it has some inherent shortcomings that are likely beyond the scope of consideration for SCHIP reauthorization. For the allocation formula that determines the amount of funds a state will receive each year, several analysts, including CRS, have noted alternatives that could be considered. These include altering the methods for estimating the number of children at the state level, adjusting the extent to which the SCHIP formula for allocating funds to states includes the number of uninsured versus low-income children, and incorporating states’ actual spending experiences to date into the formula. Considering the effects of any one or combination of these—or other—policy options would likely entail iterative analysis and thoughtful consideration of relevant trade-offs. Mr. Chairman, this concludes my prepared remarks. 
I would be pleased to respond to any questions that you or other members of the Subcommittee may have. For future contacts regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or at allenk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Carolyn L. Yocom, Assistant Director; Nancy Fasciano; Kaycee M. Glavich; Paul B. Gold; JoAnn Martinez-Shriver; and Elizabeth T. Morrison made key contributions to this statement.

Appendix I: SCHIP Upper Income Eligibility by State, Fiscal Year 2005 (expressed as a percentage of FPL). While Tennessee has not had a SCHIP program since October 2002, in January 2007, CMS approved Tennessee’s SCHIP plan, which covers pregnant women and children in families with incomes up to 250 percent of FPL. According to state information, the program will be implemented in early 2007.

Children’s Health Insurance: State Experiences in Implementing SCHIP and Considerations for Reauthorization. GAO-07-447T. Washington, D.C.: February 1, 2007.
Children’s Health Insurance: Recent HHS-OIG Reviews Inform the Congress on Improper Enrollment and Reductions in Low-Income, Uninsured Children. GAO-06-457R. Washington, D.C.: March 9, 2006.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.
Medicaid and SCHIP: States’ Premium and Cost Sharing Requirements for Beneficiaries. GAO-04-491. Washington, D.C.: March 31, 2004.
SCHIP: HHS Continues to Approve Waivers That Are Inconsistent with Program Goals. GAO-04-166R. Washington, D.C.: January 5, 2004.
Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003.
Medicaid and SCHIP: States Use Varying Approaches to Monitor Children’s Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003.
Health Insurance: States’ Protections and Programs Benefit Some Unemployed Individuals. GAO-03-191. Washington, D.C.: October 25, 2002.
Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. Washington, D.C.: July 12, 2002.
Children’s Health Insurance: Inspector General Reviews Should Be Expanded to Further Inform the Congress. GAO-02-512. Washington, D.C.: March 29, 2002.
Long-Term Care: Aging Baby Boom Generation Will Increase Demand and Burden on Federal and State Budgets. GAO-02-544T. Washington, D.C.: March 21, 2002.
Medicaid and SCHIP: States’ Enrollment and Payment Policies Can Affect Children’s Access to Care. GAO-01-883. Washington, D.C.: September 10, 2001.
Children’s Health Insurance: SCHIP Enrollment and Expenditure Information. GAO-01-993R. Washington, D.C.: July 25, 2001.
Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.
Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and Benefits. GAO/HEHS-00-86. Washington, D.C.: April 14, 2000.
Children’s Health Insurance Program: State Implementation Approaches Are Evolving. GAO/HEHS-99-65. Washington, D.C.: May 14, 1999.
Medicaid: Demographics of Nonenrolled Children Suggest State Outreach Strategies. GAO/HEHS-98-93. Washington, D.C.: March 20, 1998.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In August 1997, Congress created the State Children's Health Insurance Program (SCHIP) with the goal of significantly reducing the number of low-income uninsured children, especially those who lived in families with incomes exceeding Medicaid eligibility requirements. Unlike Medicaid, SCHIP is not an entitlement to services for beneficiaries but a capped allotment to states. Congress provided a fixed amount--approximately $40 billion from fiscal years 1998 through 2007--to states with approved SCHIP plans. Funds are allocated to states annually. Subject to certain exceptions, states have 3 years to use each year's allocation, after which unspent funds may be redistributed to states that have already spent all of that year's allocation. GAO's testimony addresses trends in SCHIP enrollment and the current composition of SCHIP programs across the states, states' spending experiences under SCHIP, and considerations GAO has identified for SCHIP reauthorization. GAO's testimony is based on its prior work, particularly testimony before the Senate Finance Committee on February 1, 2007 (see GAO-07-447T). GAO updated this work with the Centers for Medicare & Medicaid Services' (CMS) January 2007 approval of Tennessee's SCHIP program. SCHIP enrollment increased rapidly during the program's early years but has stabilized over the past several years. As of fiscal year 2005, the latest year for which data are available, SCHIP covered approximately 6 million enrollees, including about 639,000 adults, with about 4 million enrollees in June of that year. Many states adopted innovative outreach strategies and simplified and streamlined their enrollment processes in order to reach as many eligible children as possible. States' SCHIP programs reflect the flexibility federal law allows in structuring approaches to providing health care coverage. 
As of July 2006, states had adopted one of the three program structures allowed: a separate child health program (18 states), an expansion of the state's Medicaid program (11), or a combination of the two (21). In addition, 41 states opted to cover children in families with incomes at 200 percent of the federal poverty level (FPL) or higher, with 7 of these states covering children in families with incomes at 300 percent of FPL or higher. Thirty-nine states required families to contribute to the cost of their children's care in SCHIP programs through a cost-sharing requirement, such as a premium or copayment; 11 states charged no cost-sharing. As of February 2007, GAO identified 14 states that had waivers in place to cover adults in their programs; these included parents and caretaker relatives of eligible Medicaid and SCHIP children, pregnant women, and childless adults. SCHIP spending was initially low but now threatens to exceed available funding. Since 1998, some states have consistently spent more than their allotments, while others consistently spent less. States that outspent their annual allotments over the 3-year period of availability could rely on redistributions of a portion of other states' unspent SCHIP funds to cover their excess expenditures. By fiscal year 2002, however, states' aggregate annual spending began to exceed annual allotments. As spending has grown, the pool of funds available for redistribution has shrunk. As a result, 18 states were projected to have "shortfalls" of SCHIP funds--meaning they would exhaust all available funds--in at least one of the final 3 years of the program. To cover projected shortfalls faced by several states, Congress appropriated an additional $283 million for fiscal year 2006.
SCHIP reauthorization occurs in the context of debate on broader national health care reform and competing budgetary priorities, highlighting the tension between the desire to provide affordable health insurance coverage to uninsured individuals, including low-income children, and the recognition of the growing strain of health care coverage on federal and state budgets. As Congress addresses reauthorization, issues to consider include (1) maintaining flexibility within the program without compromising the primary goal to cover children, (2) considering the program's financing strategy, including the financial sustainability of public commitments, and (3) assessing issues associated with equity, including better targeting SCHIP funds to achieve certain policy goals more consistently nationwide.
In conducting our review, we compared LSSC’s progress in planning and managing its Year 2000 project to our Year 2000 Assessment Guide. We also reviewed DOD’s Year 2000 Management Plan, Department of the Army and Army Materiel Command (AMC) Year 2000 guidance, and private industry Year 2000 guidance. We focused our review on Year 2000 work performed by (1) LSSC—the designer, developer, and maintainer of CCSS, and (2) AMC—the Army major command responsible for promulgating Year 2000 policy and guidance and providing assistance to its major subordinate commands and central design activities. To determine the status of LSSC’s Year 2000 project and the appropriateness of its strategy and actions for ensuring successful completion, we interviewed LSSC’s Year 2000 Project Manager, Project Officer, and Focal Point who are responsible for project management, direction, and reporting. We also interviewed the AMC Year 2000 project team and the AMC Year 2000 Logistics Systems Chair. We obtained and analyzed Year 2000 guidance as well as documentation from the CCSS Configuration Control Board, AMC quarterly progress reviews, and AMC Year 2000 Logistics Task Force meetings to determine Year 2000 plans, strategy, and status for each of the five phases. We obtained and discussed software change schedules, workload statistics, and testing procedures and testing resource information with LSSC’s Quality Assurance Division Chief. We discussed software change procedures with the Technical Data Systems Division Chief and impact and workload issues with the Asset Management Systems Chief. To compare LSSC’s workload with its available staff resources, we obtained and discussed staffing information with the agency’s Resources Management Director, Budget and Manpower Division Chief, and the Business Information Systems Director. To determine cost estimates for the project, we interviewed the Year 2000 Project Manager and obtained and analyzed LSSC documents pertaining to cost. 
We also discussed LSSC’s software maturity capability and efforts to improve its maturity level with the Year 2000 Project Manager and the computer specialist tasked with improving LSSC’s software maturity capability. We conducted our work primarily at the Logistics Systems Support Center in St. Louis, Missouri, and at the U.S. Army Materiel Command in Alexandria, Virginia. Our audit work was performed from December 1996 through August 1997 in accordance with generally accepted government auditing standards. The Department of Defense provided written comments on a draft of this report. These comments are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I.

LSSC is one of several central design activities (CDAs) for the Army Materiel Command (AMC). LSSC’s major responsibility is to design, develop, deploy, integrate, and maintain the Commodity Command Standard System (CCSS), a standard automated wholesale logistics system supporting AMC and other Army and DOD organizations. CCSS performs stock control, supply management, cataloging, provisioning, procurement, maintenance, security assistance, and financial management over an inventory of supply items for these organizations. It is the business automation core of AMC’s commodity commands and is linked to other Army logistics systems, such as the Continuing Balance System-Expanded (CBSX). CCSS’ financial module also provides general accounting, inventory accounting, billing support, and general ledger and financial reports for both reimbursable and non-reimbursable issues. As one of the world’s largest integrated business systems, CCSS comprises 561 separate subsystems that contain 10.2 million lines of program code in about 5,000 programs. These subsystems work collectively to process an annual procurement budget for supplies and equipment of over $23 billion. The Year 2000 problem is rooted in the way dates are recorded and computed in automated information systems.
For the past several decades, systems have typically used two digits to represent the year, such as “97” representing 1997, in order to conserve electronic data storage and reduce operating costs. However, with this two-digit format, the year 2000 is indistinguishable from 1900, just as 2001 is indistinguishable from 1901. As a result of this ambiguity, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. LSSC staff recognized the significance of this anomaly in 1991 and, at an estimated cost of $5.9 million, recommended making all CCSS date fields Year 2000 compliant in accordance with Army Regulation 25-9. The regulation specified that date fields were to be 8 positions in length (YYYYMMDD) including two century positions to be populated with normal values of “19” or “20.” LSSC has requested funding for the recommended changes every year since 1991; however, funding was denied because CCSS was a legacy system designated for replacement by other systems under the DOD Corporate Information Management (CIM) initiative. By 1994, however, it became apparent that emergency system changes would be necessary to allow certain CCSS subsystems to continue forecasting requirements beyond 1999. LSSC reports that, since 1994, it has renovated at least 3.8 million lines of code to accommodate the year 2000. However, initial funding for completing the CCSS Year 2000 effort, now estimated at over $12 million, was not approved until January 1997. The impact of the year 2000 on CCSS is substantial since CCSS is heavily date dependent. Date fields are used in nearly all CCSS subsystems, files, databases, and data used for status accounting, computations, forecasting, financial accounting, and requisition processing.
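The calculation and sorting failures described above can be illustrated with a short, generic sketch (not CCSS code, which is procedural mainframe code):

```python
# Elapsed-time arithmetic on two-digit years goes wrong at the century
# boundary: "00" (2000) appears to come 99 years *before* "99" (1999).
assert int("99") - int("97") == 2     # 1997 -> 1999: correct
assert int("00") - int("99") == -99   # 1999 -> 2000: nonsense result

# Chronological sorting of YYMMDD-stamped records fails the same way:
orders = ["991230", "000102"]         # Dec. 30, 1999, and Jan. 2, 2000
assert sorted(orders) == ["000102", "991230"]  # the 2000 order sorts as oldest
```

It is exactly this kind of misordering that could cause a record dated in 2000 to be treated as decades-old data, as in the excess-inventory scenario the report describes next.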
Consequently, faulty turn-of-the-century date processing would significantly impair the Army’s ability to order, manage, sell, and account for commodities such as ammunition, communications, and electronics. In turn, through its other logistics systems connections, it could also impair the Army’s ability to track and manage major end items such as aircraft, missiles, and tanks, as well as the many thousands of repair parts that support them. Because CCSS is the Army’s wholesale logistics system, a loss of CCSS operational support to AMC and other DOD agencies poses a serious threat to overall mission capability. For example, if dates are not processed accurately in CCSS applications that support inventory management and requisition processing, items ordered on or after January 1, 2000, could be identified as 99-year-old excess inventory and become candidates for disposal. The cost of such faulty date processing would be great considering the (1) cost of the inventory item, (2) administrative costs involved in requisitioning, shipping, handling, and accounting for the item in the various financial, inventory, and transportation subsystems, and (3) costs associated with designating the item as excess inventory for disposal and the subsequent physical disposal of the item. Such an occurrence could severely impair overall military readiness since the necessary items would not be available for the soldier in the field. More importantly, soldiers and military civilians may not be able to properly maintain or replace weapon systems components, which could result in injury or death. Also, military equipment maintenance and overhaul facilities could be temporarily closed for lack of spare parts. In February 1997, we published the Year 2000 Computing Crisis: An Assessment Guide, which addresses common issues affecting most federal agencies and presents a structured approach and checklist to aid in the planning, managing, and implementing of Year 2000 projects.
The guide describes five phases—supported by program and project management activities—with each phase representing a major Year 2000 program activity or segment. The guidance draws heavily on the work of the Best Practices Subcommittee of the Interagency Year 2000 Committee and incorporates guidance and practices identified by leading organizations in the information technology industry. The five phases are consistent with those prescribed by DOD in its Year 2000 Management Plan. The phases and a description of each phase follow:

Awareness—Define the Year 2000 problem and gain executive-level support and sponsorship. Establish a Year 2000 program team and develop an overall strategy. Ensure that everyone in the organization is fully aware of the issue.

Assessment—Assess the Year 2000 impact on the enterprise. Identify core business areas and processes, inventory and analyze systems supporting the core business areas, and rank their conversion or replacement. Develop contingency plans to handle data exchange issues, lack of data, and bad data. Identify and secure the necessary resources.

Renovation—Convert, replace, or eliminate selected platforms, applications, databases, and utilities. Modify interfaces.

Validation—Test, verify, and validate converted or replaced platforms, applications, databases, and utilities. Test the performance, functionality, and integration of converted or replaced platforms, applications, databases, utilities, and interfaces in an operational environment.

Implementation—Implement converted or replaced platforms, applications, databases, utilities, and interfaces. Implement data exchange contingency plans, if necessary.

In addition to following the five phases described above, the Year 2000 program should also be planned and managed as a single large information system development effort. Agencies should promulgate and enforce good management practices on the program and project levels.
LSSC is the Army component responsible for applying the Year 2000 five-phased resolution process to CCSS. As such, in July 1996, LSSC initiated a project to address CCSS Year 2000 processing issues. As of July 1997, LSSC has completed a number of activities associated with the awareness and assessment phases of the process, including identifying its inventory, establishing a Year 2000 project team, and assessing the date impact on CCSS’ 10.2 million lines of code. LSSC has determined that as much as 54 percent of the code, or 5.5 million lines, may be affected by the year 2000 because entire applications may need to be corrected to accommodate the date change. LSSC officials stated that they still need to determine how specific code will be changed in affected applications. LSSC also reported that an additional 3.8 million lines of code have already been renovated but still need to undergo integrated and regression testing. LSSC plans to implement the Year 2000-compliant CCSS by November 1998 at a cost of over $12 million. Prior to receiving funding in January 1997, the Year 2000 project remained in the awareness phase. During the awareness phase, LSSC completed tasks such as assembling technical and functional representatives into a Year 2000 task force, evaluating automated software assessment tools, and identifying the number of software lines of code. Once the project was officially funded and entered the assessment phase, LSSC officials appointed the project manager and management staff. Also, the Year 2000 project team prepared a project charter and schedule, secured contractor support to assist with assessment tasks, and began to determine the date impact on CCSS program code. As project activity proceeded, project staff routinely reported Year 2000 progress to the AMC Deputy Commanding General, AMC Year 2000 Logistics Task Force, Communications-Electronics Command (CECOM) Year 2000 Project Office, and the CCSS Configuration Control Board. 
To support project management, LSSC’s Year 2000 project manager drafted a plan that initially did not conform to DOD’s recommended Year 2000 five-phased approach, although the plan did identify some tasks typically associated with Year 2000 projects. For example, the plan included such tasks as beginning risk assessment and contingency plan development, providing assessment tool training, conducting an inventory of CCSS applications, and obtaining contractor support for date impact assessment. As a result of our concerns that the plan did not clearly specify or identify key Year 2000 phases and associated tasks, LSSC’s Year 2000 project manager later revised the plan in an attempt to better identify phases and tasks in accordance with DOD’s five-phased approach. In addition to assessing the lines of code for CCSS, LSSC reported that it had cataloged the applications, modules, functional areas served, and languages used. LSSC had also determined that all the source code for CCSS was available and matched production code. In addition, LSSC had acquired automated assessment tools to help identify affected and obsolete code and trained LSSC staff to use these tools for the assessment, renovation, and validation phases. Since the exchange of data with other systems through external interfaces creates the potential to introduce or propagate errors from one system to another, LSSC identified 57 other systems that interface with CCSS and is in the process of confirming data exchange requirements with the external system owners. LSSC also developed a standard memorandum of agreement (MOA) to document coordination of data exchange requirements. Since CCSS and its interfacing partners plan to use procedural code and sliding window techniques to correct the Year 2000 problem, dates exchanged between systems would be interpreted correctly through internal program coding changes rather than changes to the exchanged date formats. 
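A sliding window is one of the more common procedural Year 2000 fixes: a two-digit year is expanded to four digits by interpreting it relative to a window that moves with the current year, so stored and exchanged date formats never change. The sketch below is purely illustrative (Python, with an assumed 50-year forward window; neither the language nor the parameters reflect CCSS’ actual implementation):

```python
def expand_year(yy, current_year=1998, window=50):
    # Expand a two-digit year using a window that slides with the
    # current year: values up to `window` years ahead of the current
    # year are treated as future dates; everything else as past dates.
    # (Illustrative parameters, not CCSS's actual choices.)
    century = current_year - current_year % 100   # e.g., 1900
    candidate = century + yy
    if candidate > current_year + window:
        candidate -= 100      # too far ahead: previous century
    elif candidate <= current_year + window - 100:
        candidate += 100      # too far behind: next century
    return candidate

expand_year(99)   # 1999: within the window, same century
expand_year(5)    # 2005: "05" read as a near-future date, not 1905
```

Because the window moves with the current year, the same routine keeps working after 2000; a fixed-window variant hard-codes a single pivot year and eventually needs revisiting.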
As part of its assessment of the level of date impact on CCSS, LSSC assessed the risk of not preparing CCSS for the year 2000. LSSC reported that CCSS, as a whole, is not Year 2000 compliant and that a catastrophic failure of the Army wholesale logistics mission would occur if CCSS is not made compliant. LSSC further reported that no known commercial or government replacements exist for CCSS functionality and that renovation of existing CCSS code was essential to mitigate the risk of failure. In May 1997, LSSC was still addressing the assessment phase activities of identifying a renovation strategy and developing a validation strategy and schedule for testing. According to DOD’s Year 2000 Management Plan and AMC’s Year 2000 Action Plan, the validation strategy should identify the general time frames for the validation of all information resources and include consideration of hardware concerns such as availability of processing cycles and storage as well as human resource issues. In addition, efforts were ongoing to contract with a vendor to perform automated code correction on some CCSS subsystems. To its credit, LSSC recognized the problems inherent in the century date change and began seeking funding to address Year 2000 issues years ago. However, although some progress has been made, several key project management actions associated with the assessment phase have not been completed. As a result, LSSC is not presently well-positioned to move forward to the more difficult phases of renovation, validation, and implementation in the Year 2000 process—phases that industry experts estimate could consume as much as three-fourths of Year 2000 project time and resources. LSSC still needs to take a number of actions to increase its chances of success, including (1) managing competing workload priorities, (2) planning for testing, (3) clarifying and coordinating written systems interface agreements, and (4) developing a contingency plan. 
To increase its chances of successfully managing its Year 2000 program, LSSC will also need to institutionalize a repeatable software change process that can be used from project to project. If these areas are not addressed soon, LSSC could find itself limited in its ability to meet the turn-of-the-century date. Given the prominence of date processing in CCSS and its central mission of sustaining the soldier in the field, LSSC cannot afford to delay any longer and needs to demonstrate that it will perform all the key actions associated with sound Year 2000 planning and management. In 1991, the Software Engineering Institute (SEI) introduced the Capability Maturity Model (CMM) to assist organizations in assessing the maturity level of their software development and maintenance processes. In general, software process maturity serves as an indicator of the likely range of cost, schedule, and quality of results that can be expected to be achieved by projects within a software organization. Our Year 2000 Assessment Guide points out that few activities within federal agencies operate above CMM level 1, and as a result, organizations lack the basic policies, tools, and practices necessary to successfully manage large-scale efforts. CMM level 1 is the lowest level and is characterized by a software process that is ad hoc, and occasionally even chaotic. Few processes are defined and success depends on individual effort. We have recommended that federal agency information technology organizations be at least a CMM level 2, which is characterized by an established software development process discipline that is repeatable from project to project. In 1994, LSSC’s software development process was assessed by a team of LSSC and SEI-licensed contract staff. Using the CMM methodology, the team determined that LSSC should be ranked at a level 1 maturity. 
The assessment concluded that LSSC lacked the basic software management practices necessary for repeatable software project success. The team also indicated that level 2 maturity could be attainable with a modest effort. Accordingly, the assessment team made recommendations that, if implemented, could provide the basis for LSSC’s attainment of a level 2 maturity. Based on the team’s assessment, LSSC developed an action plan to address the identified deficiencies. However, according to LSSC officials, the action plan was never implemented due to the reassignment of LSSC assessment staff, agency staff reductions, and lack of funding. After a period of nearly 3 years, LSSC resurrected the CMM assessment project in December 1996 to, once again, review the assessment team’s findings and recommendations and to propose follow-on actions to address the deficiencies. The review concluded that a project management system that would allow LSSC to better plan, estimate, and track software projects on an enterprise-wide basis was essential for LSSC to mature to a CMM level 2. While LSSC has an automated project management system under development, a member of the LSSC review team said the system is inadequate because it is unable to track all software projects and may not address all level 2 requirements. This information was presented to LSSC’s Executive Steering Council in March 1997. However, at the time of our review, LSSC had made little progress in correcting the software process deficiencies and was still ranked at CMM level 1. Until LSSC moves on to the next CMM level, its ability to contend with the later stages of the Year 2000 effort will be constrained. Recently, LSSC officials informed us of their intent to obtain a CMM level 2 certification following completion of the Year 2000 project. 
In addition to lacking a mature software development and maintenance process, LSSC now has 42 percent fewer staff available to make the needed renovations to CCSS than it had in fiscal year 1990. Moreover, since fiscal year 1990, LSSC’s workload has increased, showing a notable jump in fiscal years 1997 and 1998—the 2 years when the majority of Year 2000 actions need to be performed to enable agencies to have a realistic chance of meeting the turn of century date time frame. At the same time that its staff is decreasing and its workload is increasing, LSSC continues to be tasked with other software projects by the Lead AMC Integration Support Office (LAISO). Despite these indicators of potential problems, LSSC has only recently begun to take the steps necessary to augment its staff with contract support for the renovation phase and has yet to fully resolve staffing issues concerning the development of test plans. In addition, LSSC has not prioritized its software project schedule to provide the structure needed to keep the Year 2000 project on schedule and within cost estimates. Until these issues are addressed, they pose unnecessary risk to the success of LSSC’s Year 2000 project. As of June 1997, LSSC reported that it had devoted 7 of its 315 total staff to the Year 2000 project full-time. While four contract support staff had been retained to train LSSC staff to use the automated software assessment tools and help with impact assessment, these contractor staff have since been released. As of August 1997, no contract staff were on board to augment LSSC staff during the renovation phase, although steps were underway to obtain additional contract support and to obtain an automated code correction solution. Also, LSSC reported that staff would be tasked to work exclusively on the Year 2000 renovation phase after completing an ongoing major systems change project related to a Base Realignment and Closure (BRAC) decision. 
LSSC officials stated that as the BRAC-related renovation begins to diminish in September 1997, both LSSC and contract staff would be transferred to the Year 2000 project. While we do not question the appropriateness of performing the BRAC-related work prior to Year 2000 work, we are concerned that LSSC’s Year 2000 project approach does not provide for alternatives should the BRAC target completion schedule slip and the LSSC staff and contractors consequently not become available. Further, an examination of CCSS software release schedules since fiscal year 1990 shows that the number of projects has increased as much as five-fold. At the same time as the majority of CCSS Year 2000 actions are to be performed, LSSC’s schedule calls for 10 software change projects to be fielded in fiscal year 1997 and 8 in fiscal year 1998. These projects range in terms of complexity and magnitude from routine systems maintenance, which may require minimal effort, to the Year 2000 and BRAC projects, which call for comprehensive changes in many of CCSS’ subsystems. In past years, LSSC routinely accomplished two to four software change projects a year. This significant increase in workload will undoubtedly impact the CCSS Year 2000 project schedule for several reasons. First, LAISO, the workload manager for CCSS, has not ensured that competing projects do not adversely affect LSSC’s ability to complete the Year 2000 effort. Prioritization of projects could result in the postponement or cancellation of some of the competing projects. Second, LSSC has little historical experience dealing with a workload of this magnitude, which is compounded by a workforce that has diminished significantly in recent years. Prior to commencing the validation (testing) phase of its Year 2000 effort, LSSC needs to fully address two key issues regarding its testing requirements and capabilities. 
Specifically, LSSC should be planning now to (1) assure that enough staff with the appropriate background and experience are available to develop Year 2000 test data and transactions and to review test results and (2) assess whether enough time has been scheduled to perform Year 2000 testing. Without planning how it will address these issues now, LSSC is increasing the risk that CCSS will not be fully validated in time for the change of century date. According to AMC’s Year 2000 Action Plan, many agencies will need to establish test environments which are specific to future date testing and which have no possibility of corrupting or destroying production data. Since the current CCSS test files do not contain the necessary Year 2000 test conditions and data, LSSC will need to establish Year 2000-specific test files to certify that CCSS is Year 2000 compliant. Such test data and transactions are typically designed by functional staff knowledgeable of the CCSS business processes. These staff review the testing results to ensure that Year 2000 software changes have processed data correctly and that other data and processes have not been inadvertently changed during testing. According to an LSSC official and LSSC staffing statistics, however, there are fewer functional staff now available to identify the date fields in the test transactions or test data needed to ensure that CCSS business processes are not adversely affected by the Year 2000 software changes. Also, LSSC officials stated that they expect the availability of these staff to continue to decrease over the next few years as staff retire and agency staff reductions continue. Further, LSSC is not allowing enough time for Year 2000 testing. While LSSC officials assert that the complexity and scope of the Year 2000 project is about the same as the BRAC project, LSSC’s June 1997 systems change release schedule calls for far less time to test Year 2000 changes than it does for BRAC changes. 
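Year 2000-specific test files of the kind discussed above typically exercise the date boundaries most likely to fail. A minimal sketch of such boundary conditions (illustrative only; not drawn from LSSC’s actual test plans):

```python
from datetime import date, timedelta

# Boundary dates commonly included in Year 2000 test data: the century
# rollover itself and the year-2000 leap day, which trips logic that
# applies the "divisible by 100" rule without the "divisible by 400"
# exception (2000 is a leap year).
boundaries = [
    date(1999, 12, 31),  # last day before the rollover
    date(2000, 1, 1),    # first day after the rollover
    date(2000, 2, 28),   # day before the year-2000 leap day
    date(2000, 2, 29),   # the leap day itself
]

# Each test transaction would carry one of these dates paired with the
# expected next-day result, so renovated code can be checked against both.
rollovers = [(d, d + timedelta(days=1)) for d in boundaries]
```

Functional staff would attach such dates to representative CCSS business transactions (requisitions, receipts, disposals) so that both date arithmetic and downstream business logic are exercised across the boundary.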
For example, BRAC testing began in February 1997 and is scheduled for completion in September 1997. Year 2000 testing is scheduled to begin in September 1998 and end almost 8 weeks later in November 1998. An LSSC official acknowledged that the amount of time scheduled for Year 2000 testing is insufficient, but stated that the schedule will be revised once ongoing negotiations to acquire an automated code correction service are resolved. The official also stated that he fully expects the Year 2000 test schedule to grow well beyond the currently scheduled 8 weeks but that the added test time should be offset by the reduced renovation time expected to be gained by using the automated code correction service. Although LSSC believes that the automated code correction service should provide increased Year 2000 testing time, it could not provide documented analysis to support this conclusion. While LSSC believes it can increase its testing time without increasing the overall Year 2000 project time, we are not as confident given LSSC’s CMM level 1 ranking. Trying to compensate for unrealistic time schedules by either shortening earlier phases of a software change project or by lengthening overall project time is characteristic of level 1 organizations. Until LSSC realistically assesses its testing requirements, capabilities, and time schedules, effective Year 2000 project management will become increasingly difficult to achieve, and LSSC will increase the risk that it may be unable to meet the demand imposed by Year 2000 testing. CCSS’ ability to successfully operate at the year 2000 hinges on the proper and timely exchange of data with other systems, both within the Army and with external Defense components. 
It is critically important during the Year 2000 effort that agencies ensure that interfacing systems have the ability to exchange data throughout the transition period and protect against the potential for introducing and propagating errors from one organization to another. This potential problem may be mitigated through formal agreements between interface partners that describe the method of interface and assign responsibility for accommodating the exchange of data. Both the DOD Year 2000 Management Plan and AMC Year 2000 Action Plan place responsibility on component heads or their designated Year 2000 contact points to document and obtain system interface agreements in the form of memorandums of agreement (MOA) or the equivalent. Further, to help assure that interfaces continue to properly exchange data after systems are renovated for the year 2000, AMC has issued minimum MOA documentation requirements designed to produce consistency, assign accountability, and recognize a level of detail necessary for effective interface renovation among data exchange partners. While LSSC has developed MOAs to document interface specifics between CCSS and its interfacing systems and is in the process of finalizing those agreements with system owners, nearly all the MOAs lack basic information necessary for effective management and implementation of the interfaces. According to AMC Year 2000 guidance and the accompanying requirements of the standard MOA, the agreements are to specify the (1) points of contact for reporting progress and coordinating schedules and (2) date the agreement becomes effective. 
To successfully implement interface changes, these agreements should also communicate the type, form, and frequency of transactions exchanged, the windowing technique that is being used at each end of the interface, and the review process for monitoring interface renovation progress and reconciling differences. However, our review disclosed that 39 of 41 MOAs that LSSC had finalized as of July 1997 failed to fully follow AMC’s guidance or include other information necessary to ensure that LSSC can successfully communicate with interface partners. Our Year 2000 Assessment Guide stresses the importance of adequately addressing interface and data exchange issues. Without such information, the MOAs do not serve to communicate and coordinate the actions designed to help assure that Year 2000 changes are made properly and promptly by LSSC and its interfacing partners. Timely and complete information on all systems interfaces that may be affected by Year 2000 changes is essential to the success of the LSSC Year 2000 compliance program. The amount of work required to coordinate the data being exchanged between systems must be known as early as possible and documented in written MOAs so that LSSC may complete renovation schedules, allocate resources, plan testing, and schedule implementation. The year 2000 represents a great potential for operational failure for CCSS that could adversely impact core business processes as well as those of entities that depend on the CCSS system for information. To mitigate this risk of failure, our Year 2000 Assessment Guide, DOD’s Year 2000 Management Plan, and the Army’s Project Change of Century Action Plan suggest that agencies perform risk assessments and prepare realistic contingency plans that identify alternatives to ensure the continuity of core business processes in the event of operational failure. These alternatives could include performing automated functions manually or using the processing services of contractors. 
While LSSC has taken the first steps toward development of a contingency plan by assessing the level of risk to each business area that could be affected by processing errors and by determining how that risk can be mitigated or reduced, at the completion of our review, LSSC had not yet developed a contingency plan. Further, despite explicit guidance from DOD and the Army to develop contingency plans should Year 2000 corrections to CCSS not be completed in time, LSSC officials stated that no contingency plan would be developed for CCSS. They maintained that AMC does not require a contingency plan for CCSS because CCSS is not scheduled for replacement prior to the advent of the year 2000. While AMC’s Year 2000 Action Plan states that contingency plan development is only required for replacement systems and implies that all other systems are exempt, the AMC plan also states that the guidance, policy, and responsibilities identified in the Army’s Project Change of Century Action Plan are mandatory and are the basis of the AMC plan. Nevertheless, despite LSSC’s and AMC’s position that a contingency plan is not needed for CCSS because the system is not being replaced prior to the year 2000, the system still risks unanticipated operational failure. Without a contingency plan that identifies specific actions to be taken if CCSS fails at the year 2000, the procurement of weapon systems and their spare parts, accounting for the sale of Army equipment and services to allies, and the financial management of $9 billion of inventory could be disrupted. As a result, the Army could be unable to efficiently and effectively equip and sustain its forces around the world. Given the dangers associated with an operational failure of this magnitude, LSSC needs the protection provided by good contingency planning to ensure that options are available if CCSS is not able to operate at the year 2000. 
Recently, LSSC officials stated that they have begun preparing an initial contingency plan, which they estimate will be completed by September 30, 1997. If CCSS cannot correctly process dates on and after January 1, 2000, military equipment, such as tanks, artillery, aircraft, missiles, munitions, trucks, electronics, and other supporting materials for the soldier, in all likelihood, will not be ordered, stored, transported, issued, paid for, or maintained. Mobilization plans and contingencies would be significantly impaired if materiel is delayed. However, LSSC has yet to resolve several critical problems associated with the assessment phase to ensure that (1) systems are adequately tested, (2) contingency plans are developed, and (3) interface partners are fully aware of LSSC’s Year 2000 plans. Furthermore, during the same time that LSSC is addressing the Year 2000 issue, the agency is also working to implement considerably more software projects than it has in the past. This unprecedented workload is compounded by a reduced staff level and LSSC’s basic lack of a mature software development and maintenance process. Together, these factors raise the risk level of the Year 2000 project beyond what is normally expected of a software modification effort of this magnitude. Until these problems are resolved, LSSC is not well-positioned to move forward into the more time-consuming phases of renovation, validation, and implementation. As a result, we believe LSSC will find it increasingly difficult to prepare CCSS in time for the arrival of the year 2000. We recommend that you:

Act to improve LSSC’s software development process to provide the basis for achieving CMM level 2 maturity.

Immediately assess the impact of competing workload and staffing demands on the CCSS Year 2000 project. Based on this assessment, consider (1) canceling or deferring less critical software projects until after the Year 2000 project is substantially completed and (2) augmenting the Year 2000 project with staff having the necessary skills to ensure timely completion of the project.

Ensure that LSSC has the capability to complete the testing of all CCSS subsystems and programs. Specifically, LSSC should (1) determine test requirements, (2) identify the testing staff needed, (3) finalize Year 2000 test plans describing how the testing staff will be acquired and scheduled for developing Year 2000-compliant test scenarios and data, and (4) revise the Year 2000 test schedule to assure that enough time is available to meet Army-mandated deadlines for Year 2000 implementation.

Ensure that written interface agreements describe the method of data exchange between interfacing systems, name the entity responsible for performing the system interface modification, and state the completion date.

Develop a contingency plan that includes specific actions for ensuring that the Army’s logistics functions continue to operate at appropriate levels if all or part of CCSS fails to work at the year 2000.

In written comments on a draft of this report, the Office of the Under Secretary of Defense (Acquisition and Technology) concurred with all of our recommendations to improve the Army’s LSSC Year 2000 program. Specifically, DOD agreed that a contingency plan would be developed by September 30, 1997, to ensure continuity of operations if all or part of CCSS fails to operate by the year 2000. DOD also outlined a number of actions that have recently been initiated that are aimed at reducing and prioritizing LSSC’s current workload, and increasing staff with the necessary skills to help ensure the timely completion of the Year 2000 project. In addition, DOD pointed to several actions, both taken and planned, to improve its capability to complete Year 2000 testing of CCSS subsystems and programs. 
While we have not reviewed LSSC’s latest actions, if properly implemented, we believe they could help resolve the workload and testing issues we identified. In concurring with our recommendation regarding the need to initiate actions to improve LSSC’s software development process, DOD recognized the value of achieving a CMM level 2 maturity and agreed that LSSC does not have all configuration management procedures in place to reach CMM level 2 at this time. However, DOD stated that LSSC’s history indicates that it can accomplish large projects successfully and that LSSC will meet the mandated dates for the BRAC and Year 2000 projects without achieving CMM level 2. After completion of these projects, LSSC plans to resume its efforts to achieve a CMM level 2 maturity. We believe LSSC’s position comes at some risk. The discipline derived from reaching a CMM level 2 maturity can greatly enhance LSSC’s ability to address the Year 2000 challenge. This higher level of maturity is key to reducing the risk of schedule slippage, cost overruns, and poor software quality. As our report states, we have recommended that information technology organizations be at least a CMM level 2 to successfully manage large-scale projects such as the Year 2000 project. Our Year 2000 Assessment Guide provides interim actions that level 1 organizations can take prior to the year 2000 to minimize the risk of failure, such as training staff on proven industry system development and program management practices and soliciting assistance from organizational entities experienced in performing or managing major software conversions. LSSC could benefit from these interim actions. In concurring with our recommendation on strengthening written interface agreements, DOD stated that LSSC will formalize MOAs between interface partners. It also agreed to include specific detailed information in MOAs, but only when appropriate. 
As our report stated, we believe that, at a minimum, MOAs should also contain essential information for effective management of system interfaces, such as the type, form, and frequency of transactions exchanged, the windowing technique to be used, and the review process for monitoring interface renovation progress and reconciling differences. This additional information would help to ensure that interface partners are sufficiently prepared to handle unforeseen problems that may occur and to plan for contingencies. The full text of DOD’s comments is provided in appendix I. This report contains recommendations to you. Within 60 days of the date of this letter, we would appreciate receiving a written statement on actions taken to address these recommendations. We appreciate the courtesy and cooperation extended to our audit team by LSSC officials and staff. We are providing copies of this letter to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs; and the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight; the Honorable Thomas M. Davis, III, House of Representatives; the Secretary of Defense; the Deputy Secretary of Defense; the Acting Under Secretary of Defense (Acquisition and Technology); the Acting Under Secretary of Defense (Comptroller); the Acting Assistant Secretary of Defense (Command, Control, Communications and Intelligence); the Secretary of the Army; Commanders of the Army Materiel Command and Communications-Electronics Command; the Director of the Office of Management and Budget; and other interested parties. Copies will be made available to others upon request. If you have any questions on matters discussed in this letter, please call me at (202) 512-6240, or John B. 
Stephenson, Assistant Director, at (202) 512-6225. Major contributors to this report are listed in appendix II. The following is GAO’s comment on the Department of Defense’s letter dated September 12, 1997. 1. DOD provided a number of clarifications to the report that we have incorporated as appropriate. Denice M. Millett, Evaluator-in-Charge Michael W. Buell, Staff Member
This report examines two types of LTA platforms: aerostats and airships. Both use a lifting gas—most commonly helium—but an aerostat is tethered to the ground while an airship is free moving. Aerostats lack a propulsion mechanism and are connected to a mooring station on the ground by a long cable called a tether. The tether, in addition to securing the aerostat to one general area above the ground, usually provides power to the payload, such as ISR sensors and communications equipment, and carries data between the payload and ground control station. Airships, on the other hand, are manned or unmanned, self-propelled vehicles that have directional control. See figure 1 for an example of an aerostat system. There are three basic types of airships: (1) non-rigid—which has no frame and maintains its envelope (external structure) shape through the slightly pressurized gas it contains; (2) semi-rigid—which also maintains its shape through the slightly pressurized gas it contains, but also has a structural keel along the bottom of the envelope to help distribute loads; and (3) rigid—which has an internal rigid frame to maintain its shape and to distribute lift and load weight. Blimps flying above sporting events are commonly non-rigid airships, whereas the Hindenburg airship of the 1930s is an example of a rigid airship. Airships can be further categorized by their shape—conventional or hybrid. A conventional airship has an ellipsoidal shape reminiscent of those that fly over sporting events. A hybrid airship combines the buoyant lift of a lighter-than-air gas with the aerodynamic lift created by the shape of the airship as it flies through the air. Shaped roughly like the cross-section of an aircraft wing, a hybrid airship can generate up to 30 percent of its lift as it flies. Additional lift can be generated by directing the thrust of on-board propulsion systems (called vectored thrust) downward. 
Because of the additional sources of lift, hybrid airships theoretically can take off in a heavier-than-air configuration. See figures 2 and 3 for respective depictions of conventional and hybrid airships. In the early to mid-1900s, and especially during World War II, the U.S. Navy operated a variety of airships for maritime patrol and fleet reconnaissance, including assistance in antisubmarine warfare. Additionally, in the early 1930s, airships were used for commercial transportation across the Atlantic Ocean. However, advances in fixed-wing aircraft design, capabilities, and availability, as well as in enemy antiaircraft weaponry, led to a marked decline in the military and commercial use of airships. For instance, the Navy disbanded its last airship unit in 1962, and since then, the military use of airships for purposes other than research and development has essentially been discontinued. Since 1978, DOD has operated aerostats along the southern U.S. border for counterdrug detection and monitoring. Additionally, civil government agencies have used aerostats for a variety of purposes, such as monitoring environmental pollution and conducting atmospheric and climate research. For example, since 2009, the Environmental Protection Agency has used aerostats to sample air emissions from open sources, such as prescribed forest burns. Additionally, the Department of Commerce’s National Oceanic and Atmospheric Administration has used small aerostats to collect wind data. Furthermore, the Department of Homeland Security’s U.S. Customs and Border Protection is considering using aerostats for its border security mission. The overall investment of civil government agencies in LTA activities is small compared to that of DOD. See appendix II for examples of civil agency aerostat activities. 
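The hybrid-lift arithmetic described above can be sketched in a few lines. The only constraint taken from this report is that aerodynamic lift can supply up to 30 percent of a hybrid airship's total lift; the function name and the 70,000-pound buoyant-lift figure below are hypothetical, chosen purely for illustration.

```python
def total_lift_lb(buoyant_lb, aero_fraction, thrust_lb=0.0):
    """Total lift when aerodynamic lift supplies a fraction of the total.

    If aerodynamic lift provides `aero_fraction` of total lift, then
    buoyant lift plus vectored thrust supply (1 - aero_fraction) of it:
        total = (buoyant + thrust) / (1 - aero_fraction)
    """
    if not 0 <= aero_fraction < 1:
        raise ValueError("aero_fraction must be in [0, 1)")
    return (buoyant_lb + thrust_lb) / (1.0 - aero_fraction)

# Hypothetical example: 70,000 lb of buoyant lift plus the maximum
# 30 percent aerodynamic contribution yields 100,000 lb of total lift,
# so the vehicle can lift more than its buoyancy alone would allow --
# i.e., take off in a heavier-than-air configuration.
total = total_lift_lb(buoyant_lb=70_000, aero_fraction=0.30)
print(round(total))  # -> 100000
```

Vectored thrust would add to the numerator the same way, further raising the achievable takeoff weight.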
While commercial use of airships has primarily been limited to sightseeing and advertising, there has been interest in using airships for cargo transportation to logistically austere locations, such as remote areas in Alaska and Canada. Several factors have increased DOD’s attention toward LTA platforms. The lack of enemy air defense capabilities in recent military operations has made threats to LTA platforms appear low, and the military’s demand for persistent ISR has grown significantly. For example, DOD plans to almost double the number of aerostats—from 66 to 125—in Afghanistan for ISR in fiscal years 2011 and 2012. Also, growing budget pressures have encouraged the study of potential solutions to military problems, such as persistent ISR and heavy-lift cargo transportation, which may reduce procurement and operations and maintenance costs. For example, a 2008 Army Science Board study that compared fixed-wing unmanned aircraft, space satellites, and LTA platforms for providing persistent communications, surveillance, and reconnaissance missions concluded that airships showed great promise for effectively supporting these missions because of factors including ease of reconfigurability, extended time on station, large payload capacity, and lower cost. LTA platforms face several significant operational hazards. For example, weather phenomena such as high winds and lightning have posed the highest threats to aerostats deployed by the military in Afghanistan. Before the arrival of hazardous weather conditions, aerostat operations must cease and the platform must be lowered and secured to the mooring station to help prevent platform or payload damage. Additionally, high winds can make airships hard to control and increase fuel consumption, reducing on-station endurance. Furthermore, combat operations can result in punctures in the fabric caused by bullets and other projectiles. 
However, low helium pressure in the envelope (which is only slightly higher than the surrounding atmospheric pressure) means small helium leaks from bullet holes are typically slow and repairs can usually wait until a normally scheduled maintenance period. We identified 15 key aerostat and airship efforts that were underway or had been initiated since 2007, and DOD had or has primary responsibility for all of these efforts. Most of these efforts have been fielded, completed, or terminated. Over the past 6 years, DOD’s overall investment has increased, and the estimated total funding of these efforts was almost $7 billion from fiscal years 2007 through 2012. However, funding estimates for aerostat and airship efforts under development beyond fiscal year 2012 decline significantly, although there is an expectation that investment in the area will continue. Highlights of the 15 aerostat and airship efforts that were underway or initiated since 2007 by DOD are presented in the table below—details of each are provided in appendix III. Most of the aerostat and airship efforts have been fielded or completed, and are intended to provide ISR support or persistent surveillance, with on-station duration times typically greater than those of fixed-wing unmanned aircraft. DOD’s pursuit of aerostats and airships is mostly due to the ability of these platforms to loiter for a longer period of time than fixed-wing unmanned aircraft, which makes them well suited for supporting the ISR mission. 
The various ISR sensors used or planned for the aerostats and airships in our review include: electro-optical cameras to conduct optical monitoring of the electromagnetic spectrum from ultraviolet through far infrared; ground moving target indicator radars to detect, locate, and track vehicles throughout a large area when they are moving slowly on or just above the surface of land or water; unattended transient acoustic measurement and signature intelligence systems that use sets of microphones to capture sounds that are processed and analyzed to determine the direction of the points of origin and impact of mortar launches; and signals intelligence sensors to collect transmissions deriving from communications, electronic, and foreign instrumentation systems. The aerostat and airship efforts we identified vary in terms of the time they can operate on station in any single session. Their on-station endurance time is typically greater than that of fixed-wing unmanned aircraft. For example, the TARS aerostat is expected to stay on station for 6 days, whereas the LEMV airship is expected to stay on station for at least 16 days. In contrast, tactical and theater-level fixed-wing unmanned aircraft can stay on station from 6 hours for a Shadow aircraft to 40 hours for a Sky Warrior. The amount of time on station is greatly dependent on how often the aerostats and airships need to be topped off with additional helium, and in the case of most airships, how often they have to be refueled. Over the past 6 years, overall total DOD investment in aerostat and airship development, acquisition, and operations and maintenance has increased, ranging from about $339 million in fiscal year 2007 to a high of about $2.2 billion in fiscal year 2010, and about $1.3 billion in fiscal year 2012, as illustrated in figure 4. DOD has invested almost $7 billion from fiscal years 2007 through 2012 on key aerostat and airship efforts in our review. 
Moreover, aerostat-related investment—$5.8 billion—accounted for more than 80 percent of the total. See appendix IV for additional details on the reported funding for these efforts. Over 90 percent of all estimated aerostat investment from fiscal years 2007 to 2012—almost $5.4 billion—is attributed to the development and procurement of three aerostat programs—JLENS, PGSS, and PTDS. Aerostat funding increased through fiscal year 2010 primarily because of increased demand for PGSS and PTDS aerostats in Afghanistan and Iraq. Most of the total estimated airship investment from fiscal years 2007 to 2012—approximately $1.1 billion—consists of research, development, test, and evaluation (RDT&E) costs. Of this amount, over 90 percent of the airship RDT&E investment—approximately $1 billion—is for the Blue Devil Block 2, ISIS, and LEMV development efforts. The major increase depicted for fiscal year 2010 reflects an increase in RDT&E investment due to the beginning of funding for the Blue Devil Block 2 and LEMV development efforts, as well as a substantial increase for the ISIS development effort, which the Air Force began funding. Estimated funding for JLENS, ISIS, LEMV, and Project Pelican—efforts under development—is expected to decline significantly after fiscal year 2012, as illustrated in figure 5. However, according to DOD officials, investment in this area is expected to continue in the future. The aggregate funding for these four development efforts declines from $473 million in fiscal year 2012 to $23 million in fiscal year 2016. Funding for JLENS, the development effort with the highest estimated cost from fiscal years 2012 to 2016, drops from $369 million in fiscal year 2012 to $187 million in fiscal year 2013, $92 million in fiscal year 2014, $31 million in fiscal year 2015, and $23 million in fiscal year 2016. 
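The shares and declines cited above follow directly from the report's rounded estimates; a minimal arithmetic cross-check (all amounts in millions of dollars, taken from the figures quoted in the text):

```python
# Cross-checking the investment figures cited above.
total_fy07_fy12 = 7_000   # ~$7 billion total DOD investment, FY2007-FY2012
aerostat_total = 5_800    # ~$5.8 billion of that was aerostat-related

aerostat_share = aerostat_total / total_fy07_fy12
print(f"aerostat share: {aerostat_share:.0%}")  # ~83%, i.e. "more than 80 percent"

# JLENS funding profile for FY2012-FY2016, per the report (millions)
jlens = {2012: 369, 2013: 187, 2014: 92, 2015: 31, 2016: 23}
decline = 1 - jlens[2016] / jlens[2012]
print(f"JLENS decline, FY2012 to FY2016: {decline:.0%}")  # ~94%
```

The same calculation applied to the four-effort aggregate ($473 million down to $23 million) yields a comparable decline of roughly 95 percent.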
The original funding profile for JLENS showed substantively higher amounts over this time period, but due to a recent decision to reduce the number of JLENS aerostats that DOD intends to procure from 16 to 2, the current funding profile reflects this significant reduction in procurement. There are also investment uncertainties in the near term for LEMV and ISIS. According to LEMV program officials, if the first LEMV is successfully demonstrated in Afghanistan, then it may transition to an acquisition program which would likely require additional funding. Also, ISIS program officials do not yet know if ISIS will become a program of record. According to an official in the Office of the Under Secretary of Defense for Intelligence, while DOD expects to continue funding airship and aerostat efforts beyond fiscal year 2016, specific information regarding funding amounts is not available at this time. Furthermore, we did not find any current architectures, investment plans, or roadmaps that incorporated aerostat and airship efforts to indicate DOD’s commitment to increase or reduce its investment in this area. Three of the four aerostat and airship efforts under development, plus another airship development effort that was terminated in June 2012, have suffered from high acquisition risks because of significant technical challenges, leading to cost overruns and schedule delays. Additionally, DOD used the rapid acquisition process to acquire airships that had high technical risks. JLENS has experienced schedule delays and a Nunn-McCurdy unit cost breach, ISIS will not and LEMV did not meet their originally scheduled launch dates and have experienced cost overruns, and Blue Devil Block 2 was terminated to avoid substantially increasing costs caused by technical problems. The Army initiated JLENS system development in August 2005. JLENS consists of two large aerostats—over 240 feet in length—each with a 7,000 pound payload capacity for cruise missile detection and tracking. 
As we have previously reported, the program has experienced design issues associated with the mobile mooring transport vehicle, as well as schedule delays caused by synchronization of JLENS with the Army’s Integrated Air and Missile Defense program. JLENS was originally scheduled to enter production in September 2010. However, that same month, an aerostat accident resulted in the loss of one of the JLENS platforms. The accident, as well as recent system integration challenges, led to a decision not to procure production units. JLENS also incurred a critical Nunn-McCurdy program acquisition unit cost breach with the submission of the fiscal year 2013 President’s Budget due to a 100 percent reduction in planned procurement quantities—the program previously planned to procure 16 aerostats. Now, the program is scheduled to acquire only 2 aerostats using research and development funding, and is not expected to enter the production phase. ISIS is a joint Defense Advanced Research Projects Agency (DARPA) and Air Force science and technology effort initiated by DARPA in 2004. ISIS is to develop and demonstrate a radar sensor system that is fully integrated into a stratospheric airship measuring 510 feet in length and with a payload capacity of 6,600 pounds. ISIS has experienced technical challenges stemming from subsystem development and radar antenna panel manufacturing. Consequently, earlier this year DARPA temporarily delayed airframe development activities, and instead will mainly focus on radar risk reduction activities. During this time period, the ISIS team will develop an airship risk reduction plan and conduct limited airship activities. Based on the radar and airship risk reduction studies, DARPA will reassess the future plan for ISIS with the Air Force. The Army initiated development efforts on LEMV in 2010. 
At over 300 feet in length and with a goal of carrying a 2,500 pound payload, LEMV offers substantive potential ISR capabilities—if the program can meet its performance objectives. LEMV’s deployment is behind schedule by at least 10 months (about a 56 percent schedule increase) due to issues with fabric production, getting foreign parts cleared through customs, adverse weather conditions causing the evacuation of work crews, and first-time integration and testing issues. Also, LEMV is about 12,000 pounds overweight because components, such as tail fins, exceed weight thresholds. According to program officials, the increased weight reduces the airship’s estimated on-station endurance at an altitude of 20,000 feet from the required 21 days, to 4 to 5 days. However, current plans call for operating the airship at a lower altitude of 16,000 feet, which is expected to enable an on-station duration time of 16 days with minimal impacts to operational effectiveness (other than about a 24 percent reduction to on-station endurance). According to program officials, the biggest risk to program development was the ambitious 18-month initial development schedule (from June 2010 to December 2011). The Army successfully launched and recovered LEMV during its first flight in August 2012. The Army identified a fiscal year 2012 funding shortfall of $21.3 million resulting from the need for additional engineering and production support to mitigate and resolve technical issues at the LEMV assembly facility. The Air Force initiated development efforts on Blue Devil Block 2 in 2010. Much like LEMV, this effort was to deliver a large airship that would carry a 2,500 pound payload in support of the ISR mission. The length of the airship was 370 feet. Prior to its termination in June 2012, the Blue Devil Block 2 airship effort experienced significant technical problems resulting in cost overruns and schedule delays. 
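The two percentage figures cited for LEMV above can be reproduced with simple arithmetic on the report's rounded numbers; a minimal sketch:

```python
def pct_change(baseline, new):
    """Percentage change from baseline to new (positive = increase)."""
    return (new - baseline) / baseline * 100

# A delay of at least 10 months against the 18-month initial development
# schedule is about a 56 percent schedule increase.
schedule_increase = pct_change(18, 18 + 10)
print(round(schedule_increase))    # -> 56

# Endurance reduced from the required 21 days to 16 days at the lower
# 16,000-foot altitude is about a 24 percent reduction.
endurance_reduction = -pct_change(21, 16)
print(round(endurance_reduction))  # -> 24
```

Both results match the "about 56 percent" and "about 24 percent" figures stated in the text.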
According to an Air Force official, the Blue Devil Block 2 development effort had a very aggressive development schedule because it was intended to meet an urgent need for use in Afghanistan. Some of the technical problems included the tail fins, which were overweight and failed structural load design testing, rendering the airship not flyable. Other technical problems included the flight control software which experienced problems due to issues related to scaling—although the software worked well with a much smaller scale version of the airship, it did not work well with the much larger Blue Devil Block 2 airship. The Air Force terminated the Blue Devil Block 2 airship effort in June 2012 due to the technical problems experienced with the airframe and the need to avoid substantially increasing costs of the effort. For example, the contractor estimated that the 1-year post-deployment operations and maintenance costs would total $29 million, but the Air Force’s cost estimate ranged between $100 and $120 million—an estimate that was at least 245 percent higher than the contractor’s estimate. According to an Air Force official, the contractor’s estimate did not include costs such as for spare parts and repairs. We found that DOD used its rapid acquisition process to initiate two airship efforts to quickly deliver warfighter capabilities, but significantly underestimated the risks of meeting cost, schedule, and performance goals. DOD has taken a number of steps to provide urgently needed capabilities to the warfighter more quickly and to alleviate the challenges associated with the traditional acquisition process for acquiring capabilities. 
Some of these steps include quicker requirements validation and reduced levels of oversight, including exemption from disciplined analyses that help to ensure requirements are achievable within available technologies, design, and other resources, and that programs have adequate knowledge in hand before moving forward in the acquisition process. The success of this accelerated acquisition process is predicated on efforts that do not involve high development and acquisition risks, such as limiting technology development by using mature technologies. However, in the case of LEMV and Blue Devil Block 2, the risks of these acquisitions were higher than usual for rapid acquisitions. Specifically: The LEMV acquisition strategy was initially approved when technologies were estimated to be at technology readiness levels 4 through 7. At the time, DOD’s acquisition guidance recommended a technology readiness level of 6 for product development. DOD officials stated that they were willing to assume higher risk with the potential of developing an asset that had much greater on-station endurance and could provide capabilities on a single platform rather than on multiple aircraft. They stated that the higher risk of the effort was justified because there were multiple other efforts that were already providing surveillance capabilities in theater. DOD officials stated that, at the time the LEMV initiative was started, they expected the airship could be scaled up from a commercially existing demonstration variant and that the Army could meet the 18-month schedule to design, fabricate, assemble, test, and deploy the system. However, as noted earlier, LEMV experienced schedule delays of at least 10 months, largely rooted in technical, design, and engineering problems in scaling up the airship to the Army’s needs. DOD also significantly underestimated the risk of the Blue Devil Block 2 development effort. 
The Secretary of Defense designated Blue Devil Block 2 as an urgent need solution to eliminate combat capability deficiencies that had resulted in combat fatalities. According to program officials, the Blue Devil Block 2 airship was expected to be a variant of commercially available conventional airships, and the technologies associated with the platform were therefore deemed mature. However, the part of the program considered to be the lowest risk—the airship platform—turned out to be a high-risk development effort. At the time of project cancelation, the Blue Devil Block 2 airship was more than 10,000 pounds overweight, which limited the airship’s estimated endurance. The weight issue contributed to other design concerns: the tail fins were too heavy and were damaged during testing, and the flight control software experienced problems related to scaling to a larger airship. The Air Force terminated the acquisition in June 2012. The experience of these two programs under the urgent needs acquisition process is not unique. We recently reported that urgent needs initiatives that required technology development took longer to field after contract award than initiatives that involved mature technologies, because of technical challenges and testing delays. Additionally, as reported in a 2009 Defense Science Board Task Force Study, squeezing new technology development into an urgent timeframe creates risks for delays and ultimately may not adequately address an existing capability gap. DOD has not provided effective oversight to ensure coordination of its aerostat and airship development and acquisition efforts. Consequently, these efforts have not been effectively integrated into strategic frameworks, such as investment plans and roadmaps. At the time of our review, DOD did not have comprehensive information on all of its efforts or its entire investment in aerostats and airships. 
Additionally, DOD’s coordination efforts have been limited to specific technical activities, as opposed to having a higher level authority to ensure coordination is effective. These shortcomings may have led to an instance of duplication, which ended when one airship effort was terminated. DOD has recently taken steps to bolster oversight. Whether these steps are sufficient largely depends on the direction DOD intends to take with aerostat and airship programs. If it decides to make significant future investments in such efforts, more steps may be needed to shape these investments. We have reported on the value of strategic planning for laying out goals and objectives, suggesting actions for addressing those objectives, allocating resources, identifying roles and responsibilities, and integrating relevant parties. However, DOD has not effectively integrated aerostat and airship capabilities into its strategic frameworks for future acquisitions of unmanned or ISR systems. At the time of our review, DOD did not have a reliable inventory of its aerostat and airship efforts, including insight into its entire investment in aerostats and airships, or an office that could discuss the status of all of these efforts. We found several instances where aerostat and airship efforts were not well integrated into recent strategic planning documents, such as investment plans and roadmaps, which can help guide and prioritize DOD’s investments. For example: The U.S. Army Unmanned Aircraft Systems Roadmap 2010-2035—which is to inform warfighting functional concepts, contribute to capabilities-based assessments, and assist in the development of resource-informed decisions on new technologies—mentions the concept of LTA vehicles, but does not specify the potential contributions of specific aerostats or airships. 
DOD’s Unmanned Systems Integrated Roadmap FY2011-2036—which is to address the recent surge in the use of unmanned systems and describe a common vision for the continued integration of unmanned systems into the DOD joint force structure—includes a description of several aerostat and airship efforts underway, but it does not specifically cover how or whether aerostats and airships could contribute to DOD’s force structure. Strategic frameworks and planning efforts can be essential to the effective oversight of portfolios, especially when they consist of multiple types of acquisitions in various stages of development, production, fielding, and sustainment. Such planning can help ensure DOD has the proper mix of platforms and a balanced investment portfolio among technology development, acquisitions, production, and sustainment activities, and thereby avoid unnecessary overlap in and duplication of effort. Adding aerostats and airships to the mix of other investments would add to the complexity of planning and oversight of relevant portfolios, but doing so could help to make (1) determinations of how aerostats and airships compare to other efforts and (2) effective trade-off decisions based on their capabilities and costs. Since 2007, DOD significantly increased its investment in airship and aerostat efforts, in large part to respond to the urgent warfighter ISR needs in Iraq and Afghanistan, but also to demonstrate LTA technologies and deliver new capabilities. As a result, numerous organizations throughout DOD have pursued aerostat and airship development and acquisition efforts. For example, the Army oversees and manages the GARP testbed, JLENS, LEMV, and some high altitude airship efforts; the Air Force manages TARS; DARPA and the Air Force are responsible for ISIS; the Navy undertook Star Light and is currently responsible for PGSS and AAFL; and the Office of the Assistant Secretary of Defense for Research and Engineering is responsible for Project Pelican. 
Given the wide variety of efforts, DOD has taken some positive steps to coordinate the various aerostat and airship development and acquisition efforts it has underway. However, these efforts have mostly occurred at technical levels where working groups, consisting of technologists from industry and government, collaboratively address technical issues, as opposed to having a higher level authority to ensure coordination is effective. DOD officials identified various examples of these coordination efforts that have taken place among the military services and departments: The Army formed a working group in which the U.S. Navy Naval Air Systems Command (which manages the PGSS program) participated to develop plans to merge the PGSS and PTDS aerostat rapid fielding initiatives into a Persistent Surveillance Systems-Tethered program of record. This program of record transition is expected to occur in 2014 and should help to ensure effective coordination between the efforts. The National Aeronautics and Space Administration (NASA) Ames Research Center signed an interagency agreement in July 2011 with the DOD Office of the Director of Defense Research and Engineering’s Rapid Reaction Technology Office to develop a prototype airship referred to as Project Pelican. Project Pelican is the U.S. government’s only airship effort to demonstrate ballast-free variable-buoyancy control technology through which the vehicle can control its buoyancy (and therefore go up and down) without the use of ballast and/or ground personnel and ropes. Both agencies agreed to mitigate long-term technical risk by demonstrating this technology. NASA is providing acquisition support services to DOD by overseeing the contractor’s technical efforts and DOD is funding the effort. The Air Force and DARPA are currently collaborating on the ISIS project. A February 2009 memorandum of agreement between the Air Force and DARPA outlines their respective roles, responsibilities, and development objectives. 
The project involves developing a large radar aperture that is integrated into the structure of a station-keeping stratospheric airship supporting wide-area persistent surveillance, tracking, and engagement of ground, maritime, air, and space targets. DARPA is providing program management, technical direction, security management, and contracting support. The Air Force is providing resources for program management, demonstration efforts, equipment, and base operations and support. Additionally, the project has used lessons learned from the Army’s HALE-D project, as they are both designed to operate at a high altitude. However, ISIS is unique in that the radar system is integrated into the airship’s platform—the radar is part of the airship structure. LEMV coordination is occurring among various Army organizations and military services and agencies. For example, the Army obtained lessons learned and best practices for its development of LEMV by leveraging the Navy’s AAFL program and the Army’s HALE-D effort, and the Navy developed flight-to-ground operational procedures for LEMV. Additionally, the Army has had informal coordination with the Blue Devil Block 2 effort in the past. For example, originally both airships had several diesel engine commonalities (they used the same type of engine), and program officials shared challenges and solutions they discovered as part of the process to modify the engines to meet their respective requirements. The Navy’s AAFL serves as a flying laboratory and risk reduction test-bed for sensors and other components and has assisted the Air Force with its Blue Devil Block 2 airship development. In 2011, the Air Force provided funding to the Navy to provide training to airship pilots to qualify them to fly the Blue Devil Block 2 airship. 
While these efforts indicate some military services and organizations are sharing lessons learned and technical solutions, DOD may be able to realize additional opportunities for coordination within the agency and throughout the government. For example, DOD officials told us that coordination between the LEMV and Blue Devil Block 2 projects and opportunities to share lessons learned had been limited because of their concurrent and accelerated development pace. Also, according to a U.S. Central Command official, information sharing between the PTDS and PGSS efforts—in areas such as test reports and operational impacts resulting from adverse weather—has been limited because the efforts are managed by different services. According to this official, better sharing of information could help to inform solutions for making aerostats more survivable. PGSS and PTDS program officials stated that the respective programs have steadily increased information sharing (including daily system status reports, aerostat incident reports, contracting information, budgets, and training programs of instruction) and collaboration on common aerostat issues (such as in-theater force protection for system operators, helium supply priorities, aerostat safety and weather information, and staff and crew tactical training). The shortcomings in planning, insight, and collaboration may have made some airship efforts susceptible to duplication. We identified two airship development efforts—LEMV and Blue Devil Block 2—that were potentially duplicative at the time of our review. However, the potential duplication ended when the Air Force terminated the Blue Devil Block 2 program in June 2012. Most of the desired capabilities for LEMV and Blue Devil Block 2 were similar, as shown in table 3. According to DOD officials, these two programs were expected to demonstrate ISR capabilities; however, they are two different types of vehicles with different design objectives. 
LEMV is a hybrid airship demonstration that is developing a new platform, while Blue Devil Block 2 was a conventional airship that was to place sensors on a mature commercial-based platform. However, both were expected to have the capability to conduct ISR missions at low altitude and shared other operational characteristics. For example, both airships were to operate at the same operational altitude of 20,000 feet, were expected to have a payload capacity of 2,500 pounds, and shared some of the same types of sensors. The two airship efforts also were being developed concurrently and were expected to be deployed to Afghanistan for testing and operations around the same time. The National Defense Authorization Act for Fiscal Year 2012 directed the Secretary of Defense to designate a senior official with principal responsibility for DOD’s airship programs. In June 2012, the Deputy Secretary of Defense designated the Assistant Secretary of Defense for Research and Engineering as the senior official responsible for the oversight and coordination of various airship-related programs across DOD. The statutory direction and appointment of the senior official are positive steps, but it is too early to assess the effectiveness of this official’s authorities and responsibilities in integrating and overseeing these activities. As of August 2012, the Office of the Assistant Secretary of Defense for Research and Engineering was defining the details relating to the authority, scope, and responsibilities of this new position. The overarching direction by the Deputy Secretary of Defense, in accordance with the statutory mandate, provides the senior official with authority over airship-related efforts.
Because aerostat efforts respond to some of the same warfighter requirements as airships, such as persistent ISR, and share some of the same technologies used in airship development efforts, such as materials, design, and fabrication, common oversight of both airships and aerostats could enable DOD to have better visibility over all of its aerostat and airship efforts and help to ensure these efforts are effectively overseen, planned, and coordinated. While DOD’s overall investment in this area has totaled nearly $7 billion in the past 6 years, near-term funding estimates decline sharply beyond fiscal year 2012, and the level of future investment beyond fiscal year 2016 is not known. Until DOD makes decisions regarding its investments in this area, the proper role of the senior oversight official will not be known. If DOD decides to make significant future investments in aerostat and airship capabilities, the senior official could play a key role in shaping those investments. If no future investments are anticipated, the role of the senior official may necessarily be focused more narrowly on the systems that are fielded or already in development. Aerostat and airship platforms are not a new concept, but they have recently been embraced within DOD because of their potential to provide continuous coverage capabilities quickly, especially in current military operations. Consequently, numerous organizations throughout DOD have pursued aerostat and airship development and acquisition efforts. DOD quickly initiated some of the larger programs with an eye toward leveraging commercial technologies and delivering capabilities to warfighters quickly to support current operations. But this rush came with high acquisition risk—particularly since there was a lack of knowledge about the extent of modifications and technology development required.
Moreover, DOD’s limited oversight to ensure coordination of all of these efforts has resulted in ineffective integration of capabilities into broader strategic frameworks and limited investment knowledge and collaboration, making the efforts susceptible to duplication. The appointment of the Assistant Secretary of Defense for Research and Engineering as the senior official responsible for the oversight and coordination of various airship-related programs is a positive step, but the role of the position remains to be clearly defined. Yet the future is uncertain; at this point, no substantive investment is planned for aerostat and airship capabilities. If significant future investment is planned, the senior official could play a valuable role in shaping investments, ensuring they maximize return by integrating them into broader plans so that their capabilities can be leveraged rather than unnecessarily duplicated. To address shortcomings in oversight and improve coordination, we recommend that the Secretary of Defense take the following three actions, based on the extent of the department’s future investments in aerostats and airships. If DOD decides to curtail future investment, focus on ensuring that it has an inventory and knowledge of all current and planned efforts in the short term. If DOD decides to significantly increase future investment, include aerostat and airship capabilities in strategic frameworks to ensure visibility into and coordination with relevant efforts, guide innovation, and prioritize investments. Ensure the roles and responsibilities of the Assistant Secretary of Defense for Research and Engineering, as the senior official responsible for the oversight and coordination of various airship-related programs, are defined and commensurate with the level of future investment. We provided a draft copy of this report to DOD, DHS, and NASA for comment.
In written comments on a draft of this report, DOD concurred with all three of our recommendations to address shortcomings in oversight to improve coordination of aerostat and airship development and acquisition efforts. DHS and NASA did not have formal comments on the draft report. Additionally, DOD, DHS, and NASA provided technical comments, which were incorporated as appropriate. DOD’s written comments are reprinted in appendix V. We are sending copies of this report to appropriate congressional committees, the Secretaries of Defense and Homeland Security, and the Administrator of the National Aeronautics and Space Administration. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are provided in appendix VI. To determine what key aerostat and airship systems across the federal government are being developed and acquired, including funding, purpose, and status of these systems, we reviewed documentation and interviewed officials on the status and progress of aerostat and airship development efforts in areas such as requirements, funding, costs, budgets, schedule, contracting, technology maturation, and actual or planned operational characteristics. In doing so, we developed an inventory of key airship and aerostat development and acquisition efforts, which enabled a comparison of platform types, performance attributes, and costs. As part of identifying the universe of aerostat and airship efforts in the federal government, we interviewed agency officials and asked them about any knowledge they have regarding other systems that may currently exist.
We also conducted Federal Procurement Data System—Next Generation (FPDS-NG) database and internet searches to inform ourselves about existing efforts; the internet searches included unclassified searches and background research. We corroborated and confirmed the accuracy of the FPDS-NG and internet search information with applicable agencies. Based on a review of funding data collected from agencies that we contacted, as well as from presidential budget estimates and Selected Acquisition Reports as available, we determined that our definition of “key aerostat and airship systems being planned, developed, and acquired” includes two key criteria: (1) total funding of $1 million or more from fiscal years 2007 to 2012, and (2) efforts to plan, develop, or acquire systems that include both a platform and payload (such as sensors or cargo) capability. We analyzed documentation and interviewed officials from various offices of the Secretary of Defense; various offices within the Army, Navy, and Air Force; U.S. Central Command; the Defense Advanced Research Projects Agency; the Defense Logistics Agency; and offices of the Joint Chiefs of Staff. We also analyzed documentation and interviewed officials from civil agencies, including the Department of Homeland Security, Department of Energy, Environmental Protection Agency, National Oceanic and Atmospheric Administration, National Aeronautics and Space Administration, and Office of the Director of National Intelligence. We did not examine the development and utilization of lighter-than-air (LTA) technologies outside of the federal government. To identify any technical challenges these key aerostat and airship efforts may be facing, we analyzed documentation and interviewed officials from the organizations mentioned above. We used the collected information to assess any identified technical problems impacting the funding, cost, schedule, and performance of airships and aerostats.
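The two selection criteria above amount to a simple filter over an inventory of efforts. The sketch below is illustrative only; the function name, the example effort names, and the funding figures are hypothetical and not drawn from the report.

```python
# Illustrative sketch of the report's two criteria for a "key" aerostat
# or airship effort: (1) at least $1 million in total funding for fiscal
# years 2007-2012, and (2) the effort covers both a platform and a
# payload. All effort data below is hypothetical.

def is_key_effort(total_funding_fy07_12, has_platform, has_payload):
    """Return True only if an effort meets both selection criteria."""
    return total_funding_fy07_12 >= 1_000_000 and has_platform and has_payload

# Hypothetical efforts: (name, total funding in dollars, platform, payload)
efforts = [
    ("Effort A", 250_000_000, True, True),   # funded platform plus sensors
    ("Effort B", 500_000, True, True),       # below the $1 million threshold
    ("Effort C", 30_000_000, False, True),   # payload research only
]

key = [name for name, f, plat, pay in efforts if is_key_effort(f, plat, pay)]
print(key)  # only Effort A satisfies both criteria
```

A filter like this makes the inclusion rule explicit: an effort fails on either low funding or a missing platform/payload component.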
To determine how effectively the various key aerostat and airship efforts are being overseen to ensure coordination, and to identify any potential for duplication, we assessed aerostat and airship investments, acquisitions, capabilities, and operations by analyzing documents and interviewing officials from the organizations listed above, analyzing the inventory of key efforts developed under our first objective, and reviewing prior GAO work for relevant criteria. Specifically, we assessed oversight at the programmatic and enterprise levels by reviewing organizational roles, responsibilities, and authorities as they relate to aerostat and airship development, acquisition, and operations efforts. We also determined the extent to which plans and planning activities integrated aerostat and airship development and acquisition efforts and capabilities within the Department of Defense (DOD). The plans and planning activities we reviewed included architectures, roadmaps, investment plans, and requirements development. We also used the information relating to various aspects of the development and acquisition efforts, such as requirements and actual or planned performance attributes, to assess whether any of the efforts are potentially duplicative. We conducted this performance audit from June 2011 to October 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Examples of Civil Government Use of Aerostats. The purpose of DOC’s NOAA aerostats is to collect wind data up to approximately 1,641 feet above the ground. NOAA procured the aerostats, ground station, and supporting equipment in September 2011.
Over the past 20 years, DOE’s Office of Science has supported a national scientific user facility called the Atmospheric Radiation Measurement Climate Research Facility, which is a unique system for continuous observations, capturing fundamental data on atmospheric radiation and cloud and aerosol properties. Since fiscal year 2010, DOE’s Office of Science has funded the use of a contracted aerostat to support research carried out through the Atmospheric Radiation Measurement Climate Research Facility. In 2009, EPA developed a tethered, aerostat-lofted sampling package for the purpose of sampling air emissions from open sources. Multiple aerostats have been purchased to support increasing payloads. EPA’s program has been largely funded through a project with DOD’s Strategic Environmental Research and Development Program to characterize emissions from open burning and open detonation of military ordnance. EPA will continue its emissions monitoring program using aerostats only if it can secure additional funding; otherwise, the program will cease to exist. According to NASA, it occasionally funds research that deploys tethered balloons for atmospheric and weather observations. Also, NASA’s Jet Propulsion Laboratory continues to use tethered balloons for climate research. NASA plans to continue funding atmospheric and weather research using tethered balloons. For example, in fiscal year 2013, NASA plans to fund atmospheric and weather research that will use two tethered balloons in Yen Bai, Vietnam. Furthermore, NASA Goddard Space Flight Center’s Wallops Flight Facility owns and operates small commercially produced tethered blimps (advertising type) within the restricted airspace over Wallops Island. These aerostat systems are used for visibility markers during range operations and for lifting miniature experimental instrument packages. Cristina T. Chaplain, (202) 512-4841 or chaplainc@gao.gov.
In addition to the contact named above, key contributors to this report were Art Gallegos, Assistant Director; Ami Ballenger; Jenny Chanley; Maria Durant; Arturo Holguín; Rich Horiuchi; Julia Kennon; Tim Persons; Sylvia Schatz; Roxanna Sun; and Bob Swierczek.
Use of lighter-than-air platforms, such as aerostats, which are tethered to the ground, and airships, which are free-flying, could significantly improve U.S. intelligence, surveillance, and reconnaissance (ISR) and communications capabilities, and move cargo more cheaply over long distances and to austere locations. DOD is spending about $1.3 billion in fiscal year 2012 to develop and acquire numerous aerostats and airships. GAO was asked to determine (1) what key systems governmentwide are being developed and acquired, including funding, purpose, and status; (2) any technical challenges these key efforts may be facing; and (3) how effectively these key efforts are being overseen to ensure coordination, and identify any potential for duplication. To address these questions, GAO reviewed and analyzed documentation and interviewed a wide variety of DOD and civil agency officials. GAO identified 15 key aerostat and airship efforts that were underway or had been initiated since 2007; the Department of Defense (DOD) had or has primary responsibility for all of these efforts. None of the civil agency efforts met GAO's criteria for a key effort. Most of the aerostat and airship efforts have been fielded or completed, and are intended to provide ISR support. The estimated total funding of these efforts was almost $7 billion from fiscal years 2007 through 2012. However, funding estimates beyond fiscal year 2012 decline precipitously for aerostat and airship efforts under development, although there is an expectation that investment in the area will continue. Three of the four aerostat and airship efforts under development, plus another airship development effort that was terminated in June 2012, have suffered from high acquisition risks because of significant technical challenges, such as overweight components and difficulties with integration and software development, which, in turn, have driven up costs and delayed schedules.
DOD has provided limited oversight to ensure coordination of its aerostat and airship development and acquisition efforts. Consequently, these efforts have not been effectively integrated into strategic frameworks, such as investment plans and roadmaps. At the time of GAO's review, DOD did not have comprehensive information on all its efforts or on its entire investment in aerostats and airships. Additionally, DOD's coordination efforts have been limited to specific technical activities, as opposed to having a higher-level authority to ensure coordination is effective. DOD has recently taken steps to bolster oversight, including the appointment of a senior official responsible for the oversight and coordination of airship-related programs. However, as of August 2012, DOD had not defined the details relating to the authority, scope, and responsibilities of this new position. Whether these steps are sufficient largely depends on the direction DOD intends to take with aerostat and airship programs. If it decides to continue investing in these efforts, more steps may be needed to shape those investments. GAO recommends that DOD take actions based on the extent of its future investments in this area: (1) if investments are curtailed, ensure it has insight into all current and planned efforts in the short term; (2) if investments increase significantly, include the efforts in strategic frameworks to ensure visibility and coordination, guide innovation, and prioritize investments; and (3) ensure the roles and responsibilities of the senior official responsible for the oversight and coordination of airship-related programs are defined. DOD concurred with the recommendations.
Spare parts are defined as repair parts and components, including kits, assemblies, and subassemblies required for the maintenance of all equipment. Repair parts and components can include (1) reparable items, which are returned to the supply system to be repaired when they are no longer in working condition, and (2) nonreparable items, also called consumables, which are often used in repairing the reparable items because they cannot be economically repaired themselves. For example, a screw (a consumable) may be used in repairing a landing gear component (a reparable). The Defense Logistics Agency, headquartered at Fort Belvoir, Virginia, provides consumable supplies and spare parts to the military services, the Department of Defense, federal civilian agencies, and selected foreign governments. As part of its mission, the agency manages over 4.1 million consumable items. The vast majority of these items are considered consumable spare parts, and the remaining items include medicine, food, clothing, and fuel. Spare parts managed by the agency range from low-cost, commonly used items, such as fasteners and gaskets, to high-priced, sophisticated items, such as microswitches, miniature components, and precision valves vital to operating major weapon systems. The agency’s supply management operations are funded through the Defense-Wide Working Capital Fund, which operates as a revolving fund. The agency buys and sells spare parts to customers, who use appropriated funds to pay the agency. Sales receipts are then used to purchase additional items to meet new customer demand. In principle, the agency should recover the acquisition cost of the spare parts it sells, as well as its own operating costs, so that over the long term the fund breaks even financially. The Defense Supply Center in Richmond, Virginia, is designated as the Defense Logistics Agency’s lead center for air/aviation systems. 
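The revolving-fund mechanics described above can be illustrated with a minimal sketch. The 25 percent cost recovery rate and the dollar figures below are hypothetical assumptions for illustration, not DLA's actual surcharge or pricing model.

```python
# Simplified sketch of a revolving (working capital) fund: the agency
# sells a part at its acquisition cost plus a surcharge intended to
# cover the agency's own operating costs, so over the long term the
# fund breaks even. The 25% rate and all amounts are hypothetical.

COST_RECOVERY_RATE = 0.25  # hypothetical surcharge, not DLA's actual rate

def standard_unit_price(acquisition_cost):
    """Sale price: acquisition cost plus the cost recovery surcharge."""
    return acquisition_cost * (1 + COST_RECOVERY_RATE)

fund_balance = 0.0
acquisition_cost = 100.0   # the fund buys a part from a supplier
operating_cost = 25.0      # the agency's own cost of handling the sale

fund_balance -= acquisition_cost                       # purchase from supplier
fund_balance += standard_unit_price(acquisition_cost)  # sale to a service
fund_balance -= operating_cost                         # cover operating expenses

print(fund_balance)  # 0.0: the fund breaks even when the surcharge
                     # exactly covers operating costs
```

When the surcharge under- or over-recovers operating costs, the balance drifts negative or positive, which is why such funds periodically adjust their cost recovery rates.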
The Defense Supply Center in Columbus, Ohio, is the designated lead center for land and sea/subsurface (maritime) systems. The Defense Supply Center in Philadelphia, Pennsylvania, is the lead center for troop support systems and general supply. Over the past 5 years, the number of spare parts the services purchased from the Defense Logistics Agency declined by about 24 percent, from 353 million in fiscal year 1996 to about 270 million in fiscal year 2000. However, the dollar value of spare parts obtained during the same period increased from $3.9 billion in fiscal year 1996 to $4.6 billion in fiscal year 2000 (about 18 percent). Electrical and electronic equipment components accounted for the highest proportion of total dollar value. The services acquired spare parts from 70 of the 78 federal stock groups. In commenting on a draft of this report, the department attributed the increased dollar value of sales to the increased value of items managed by the Defense Logistics Agency. This is clearly a contributing factor. However, as we recently reported, the Defense Logistics Agency also has a number of actions underway to control spare part prices. Defense Logistics Agency data indicates that the agency supplied the services with smaller quantities of spare parts every year from 1996 to 2000, declining from a total of about 353 million in fiscal year 1996 to about 270 million in fiscal year 2000, a decrease of about 24 percent (see fig. 1). The number of spare parts the services ordered also decreased about 19 percent during the period, from about 348 million to about 283 million. Defense Logistics Agency officials cited three main reasons for the decline: increased credit card usage, increased contractor maintenance support, and—primarily—military downsizing. The downsizing, which began in the early 1990s, continued through 2000.
According to defense figures, the total number of active duty Navy fighter and attack aircraft declined from 504 in fiscal year 1996 to 432 in fiscal year 2000. Similarly, the number of ships (or ship battle forces) declined from 355 to 318. However, the number of Air Force fighter and attack aircraft remained steady at 936 during the period. As shown in figure 2, the number of spare parts sold by the Defense Logistics Agency to the Air Force dropped by about 23 million from about 137 million in fiscal year 1996 to about 114 million in fiscal year 2000, a 17 percent decrease. During this time, the Air Force increased the percentage of depot maintenance repair workload performed by the private sector. The other services also increased private sector depot maintenance performance, but to a lesser extent than the Air Force. Figure 3 shows that in fiscal year 1996, the Defense Logistics Agency sold the Army about 96 million spare parts, but by fiscal year 2000, that number had dropped to about 78 million, a decline of 18 million, or 19 percent. Figure 4 indicates that in fiscal year 1996, the Navy obtained about 110 million spare parts from the Defense Logistics Agency, but in fiscal year 2000 it bought only about 72 million spare parts, a decrease of about 38 million, or 35 percent. As shown in figure 5, the Marine Corps purchased about 6 million spare parts from the agency in fiscal year 2000, about 45 percent less than the approximately 11 million it had purchased in fiscal year 1996. The reported dollar value of the Defense Logistics Agency’s annual sale of spare parts to the services rose about 18 percent over the past 5 years, increasing from about $3.9 billion in fiscal year 1996 to about $4.6 billion in fiscal year 2000. The dollar value of ordered spare parts increased from about $3.9 billion to about $5.2 billion (about 25 percent). 
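All of the declines cited above follow the same percentage-change arithmetic. The short sketch below reproduces the report's rounded figures from the stated quantities (parts in millions, sales in billions of dollars, fiscal year 1996 to fiscal year 2000).

```python
# Percentage changes computed from the quantities the report cites.

def pct_change(start, end):
    """Percentage change from start to end, rounded to a whole percent."""
    return round((end - start) / start * 100)

print(pct_change(353, 270))  # -24: parts supplied, all services combined
print(pct_change(137, 114))  # -17: Air Force
print(pct_change(96, 78))    # -19: Army
print(pct_change(110, 72))   # -35: Navy
print(pct_change(11, 6))     # -45: Marine Corps
print(pct_change(3.9, 4.6))  # 18: dollar value of annual sales rose
```

Each figure matches the rounded percentage the report states alongside the underlying quantities.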
The reasons cited for the dollar value increase were (1) the Defense Logistics Agency’s shift to a mix of more expensive spare parts and (2) price increases due to inaccurate initial price estimates, long periods between procurements, or substantial changes in the quantity of spare parts purchased. When disaggregated by service, the agency’s data indicates some variation in this trend. The dollar value of sales to the Air Force increased every year, while the annual value of spare parts sold to the Army, the Navy, and the Marine Corps fluctuated. Table 1 shows the overall increase as well as the annual fluctuations. During the 1996-2000 period, the Department of Defense transferred the management of more costly, complex, and sophisticated spare parts to the Defense Logistics Agency. Department officials indicated that the dollar value of the items transferred was significantly higher than that of the items being managed by the Defense Logistics Agency until that time. The items transferred from the services represented an increasing percentage of the total inventory held by the agency, thereby contributing to the higher dollar value of spare parts sold to the services. Table 2 shows the average prices of the spare parts transferred as compared to those for spare parts not transferred and the prices for all spare parts sold to the services. The table also indicates the percentages of the total inventory that consisted of transferred spare parts during the 1996-2000 period. Although the price of some spare parts has increased significantly, for most spare parts it has not. In November 2000, we reported that prices of about 70 percent of spare parts requisitioned by the agency’s customers increased less than 5 percent a year during the 1989-98 period. This trend applied to all requisitioned spare parts, including those in frequent demand and aircraft-related spare parts.
However, the prices of a relatively small number of spare parts did increase significantly—by 50 percent or more—for a number of reasons. Although the majority of the agency’s weapon system spare parts experienced a relatively low annual price change—less than 5 percent—from fiscal years 1989 through 1998, most of the extreme price increases were due to inaccurate price estimates, outdated prices, or changes in quantities purchased. In other cases, prices increased significantly when long time periods—sometimes decades—passed between procurements. Agency purchasing officials cited other factors that can lead to price increases, including retooling of production lines between purchases, emergency procurements, and increases in the costs of raw materials. An Air Force official said that new technology also increased costs as older aircraft were retrofitted with a new mix of more expensive spare parts to add capability. The services obtained spare parts from 70 of 78 federal stock groups over the past 5 years. In fiscal year 2000, electrical and electronic equipment components were the spare parts with the highest sales (in dollar value). However, different groups of spare parts led sales within each service during fiscal year 2000: electrical and electronic equipment components led Defense Logistics Agency sales to the Navy; engine and turbine components led sales to the Air Force; and vehicular equipment components led sales to the Army and the Marine Corps. Although the same groups of spare parts remained among the top 10 (in terms of reported dollar value) over the 5-year period, the rankings of some groups and the amounts of money spent on each one varied. For example, engines, turbines, and components ranked fifth for the Air Force in fiscal year 1996 but moved to first place in fiscal year 2000. Table 3 shows the top 10 groups for fiscal years 1996 and 2000.
The trends for aviation spare parts were consistent with those for the total spare parts supplied to the services. The Defense Logistics Agency’s lead supply center for aviation reported that the dollar value of annual sales to the services increased about 54 percent from fiscal year 1996 to fiscal year 2000, even though the center sold 28 percent fewer spare parts. Officials stated that the increases were caused in part by the Defense Logistics Agency’s shift to a mix of more expensive spare parts and increases in the price of aviation spare parts. Because our review covered only classes of spare parts, not individual items, we did not determine the extent to which the agency had changed the prices of individual spare parts. However, we recently reported on the Defense Logistics Agency’s efforts to identify and address price increases of spare parts and their causes. Tables 4 and 5, respectively, show the number of spare parts supplied and the reported dollar value of sales by the agency’s lead aviation supply center to each of the services from fiscal years 1996 through 2000. We judgmentally selected 10 aviation-related federal stock classes and found mixed purchasing trends during the study period. The Navy purchased substantially more spare parts in 3 of the 10 classes—components for jet engines, airframes, and wheel and brake systems. Engine parts accounted for over 60 percent of the total spare parts the Navy purchased in the 10 classes. The Navy purchased fewer parts in the other seven classes, of which three decreased by one-third or more. Over one-third of the spare parts the Army purchased in the 10 classes were for aircraft structural components. Overall, the quantities the Army purchased increased for seven classes and declined for the other three. Annual purchases fluctuated in all classes. Like the Navy, the Air Force purchased over 64 percent of its spare parts in the 10 selected classes for engine parts.
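Taken together, a roughly 54 percent rise in sales dollars alongside a 28 percent drop in quantity implies that the average price per part sold roughly doubled. A back-of-the-envelope check, assuming the two rounded percentages share the fiscal year 1996 baseline:

```python
# Back-of-the-envelope check on the aviation supply center figures:
# sales dollars rose about 54% while parts sold fell about 28%
# (FY1996 -> FY2000). The implied change in average price per part:

sales_ratio = 1.54     # FY2000 sales dollars / FY1996 sales dollars
quantity_ratio = 0.72  # FY2000 parts sold / FY1996 parts sold

avg_price_ratio = sales_ratio / quantity_ratio
print(round(avg_price_ratio, 2))  # ~2.14: average price per part
                                  # roughly doubled over the period
```

Because both inputs are rounded percentages, the result is approximate; it illustrates the direction and rough magnitude of the shift toward more expensive parts, not a precise price index.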
Overall, the Air Force increased its purchases in eight classes and decreased purchases in the other two. Spare parts purchased for wheel and brake systems increased by over 703 percent from fiscal year 1996 to fiscal year 2000, and spare parts for engines and fuel systems increased by about 95 percent. Yearly purchases fluctuated in all classes. The Marine Corps purchased fewer than 1,600 spare parts a year from the 10 classes. The total number of spare parts purchased fluctuated from a low of 612 in fiscal year 1999 to a high of 1,574 in fiscal year 2000. Over half of the spare parts purchased in all 10 classes were for airframe structural components. The total amounts charged by the agency for spare parts generally increased, even in those classes where the number of spare parts purchased decreased. In written comments on a draft of this report, the Department of Defense generally concurred with its contents. The department also provided technical comments, which we have incorporated where appropriate. The department’s written comments appear in appendix II. To obtain information on trends in the quantity, reported dollar value, demand, and kinds of spare parts, including aviation-related spare parts, that the services bought from the Defense Logistics Agency, we asked agency officials to supply the relevant data. The officials determined which items to include in the spare part category and developed the information for us by year, service, service center, and federal stock class. The officials provided data on the aggregate numbers of spare parts. We did not attempt to independently verify the agency’s information, nor did we verify the reasons for changes in trends. Defense Logistics Agency officials told us that in compiling their data, they used the definitions of spare parts in the Integrated Consumable Item Support Model and those the Army used in purchasing parts.
For sales data, they used the standard unit price, which includes the cost recovery rate (the Defense Logistics Agency surcharge) at the time of sale. We performed our review from July 2001 through April 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees, the secretary of defense, the secretary of the army, the secretary of the air force, the secretary of the navy, the commandant of the Marine Corps, the director of the Defense Logistics Agency, and the director of the Office of Management and Budget. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Jeanett H. Reid, George Morse, and Lawson Gist, Jr. Army Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness. GAO-01-772. Washington, D.C.: July 31, 2001. Navy Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness. GAO-01-771. Washington, D.C.: July 31, 2001. Air Force Inventory: Parts Shortages Are Impacting Operations and Maintenance Effectiveness. GAO-01-587. Washington, D.C.: June 27, 2001. Defense Inventory: Information on the Use of Spare Parts Funding Is Lacking. GAO-01-472. Washington, D.C.: June 11, 2001. Defense Inventory: Army War Reserve Spare Parts Requirements Are Uncertain. GAO-01-425. Washington, D.C.: May 10, 2001. Major Management Challenges and Program Risks: Departments of Defense, State, and Veterans Affairs. GAO-01-492T. Washington, D.C.: March 7, 2001. Major Management Challenges and Program Risks: A Government-wide Perspective. GAO-01-241. Washington, D.C.: January 2001. High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001. Defense Acquisitions: Prices of Navy Aviation Spare Parts Have Increased. GAO-01-23. Washington, D.C.: November 6, 2000. Defense Acquisitions: Price Trends for Defense Logistics Agency’s Weapon System Parts.
GAO-01-22. Washington, D.C.: November 3, 2000. Contingency Operations: Providing Critical Capabilities Poses Challenges. GAO/NSIAD-00-164. Washington, D.C.: July 6, 2000. Defense Inventory: Process for Canceling Inventory Orders Needs Improvement. GAO/NSIAD-00-160. Washington, D.C.: June 30, 2000. Defense Inventory: Opportunities Exist to Expand the Use of Defense Logistics Agency Best Practices. GAO/NSIAD-00-30. Washington, D.C.: January 26, 2000. Defense Inventory: Improvements Needed to Prevent Excess Purchases by the Air Force. GAO/NSIAD-00-5. Washington, D.C.: November 10, 1999. Defense Inventory: Management of Repair Parts Common to More Than One Military Service Can Be Improved. GAO/NSIAD-00-21. Washington, D.C.: October 20, 1999. Military Operations: Some Funds for Fiscal Year 1999 Contingency Operations Will Be Available for Future Needs. GAO/NSIAD-99-244BR. Washington, D.C.: September 21, 1999. Department of Defense: Status of Financial Management Weaknesses and Actions Needed to Correct Continuing Challenges. GAO/T-AIMD/NSIAD-99-171. Washington, D.C.: May 4, 1999. Defense Reform Initiative: Organization, Status, and Challenges. GAO/NSIAD-99-87. Washington, D.C.: April 21, 1999. Defense Inventory: Status of Inventory and Purchases and Their Relationship to Current Needs. GAO/NSIAD-99-60. Washington, D.C.: April 16, 1999. Defense Inventory: DOD Could Improve Total Asset Visibility Initiative With Results Act Framework. GAO/NSIAD-99-40. Washington, D.C.: April 12, 1999. Defense Inventory: Continuing Challenges in Managing Inventories and Avoiding Adverse Operational Effects. GAO/T-NSIAD-99-83. Washington, D.C.: February 25, 1999. High-Risk Series: An Update. GAO/HR-99-1. Washington, D.C.: January 1999. Major Management Challenges and Program Risks: Department of Defense. GAO/OCG-99-4. Washington, D.C.: January 1999.
The Defense Logistics Agency (DLA) reported that a shortage of spare parts has caused a decline in the military services' readiness, particularly in aviation readiness. In response, Congress provided $1.1 billion in additional funding to purchase spare parts. According to DLA, the shortages are a result of aging systems and high operational tempo, which increase the total number of spare parts required. The number of spare parts the military services ordered declined between 1996 and 2000, but the dollar value increased by 18 percent. Further, the spare parts purchased were drawn from 70 of 78 stock groups. Defense officials told GAO that military downsizing was the primary reason for the decline and that credit card usage and contractor maintenance support also contributed. The reasons cited for the increase were (1) shifts by DLA to a mix of more expensive spare parts and (2) price increases due to inaccurate initial price estimates, long periods between procurements, and substantial changes in the quantity of spare parts purchased.
In addition to processing approximately 150 million individual tax returns and issuing more than 100 million refunds during the filing season, IRS provides a range of taxpayer services, including by telephone, through written correspondence, and on its website. Based on recent data from IRS, its telephone service has improved in the 2016 filing season compared to last year. From January 1 through March 26, 2016, IRS received about 38.2 million calls to its automated and live assistor telephone lines—a slight decrease compared to the same period last year. Of the 14.7 million calls seeking live assistance, IRS had answered 9.9 million calls—a 72 percent increase over the 5.7 million calls answered during the same period last year. Further, the average wait time to speak to an assistor also decreased from 24 to 10 minutes. IRS anticipated that 65 percent of callers seeking live assistance would receive it this filing season, which ended April 18. As of March 26, 2016, IRS’s telephone service performance for the filing season had exceeded that anticipated level—74 percent of callers had received live assistance. IRS attributed this year’s improvements to a number of factors. As noted above, of the additional $290 million IRS received in December 2015, it allocated $178.4 million (61.5 percent) for taxpayer services to make measurable improvements in its telephone level of service. With the funds, IRS hired 1,000 assistors who began answering taxpayer calls in March, in addition to the approximately 2,000 seasonal assistors it had hired in fall 2015. To help answer taxpayer calls before March, IRS officials told us that they detailed 275 staff from one of its compliance functions to answer telephone calls. IRS officials said they believe this step was necessary because the additional funding came too late in the year to hire and train assistors to fully cover the filing season. 
IRS also plans to use about 600 full-time equivalents of overtime for assistors to answer telephone calls and respond to correspondence in fiscal year 2016. This compares to fewer than 60 full-time equivalents of overtime used in fiscal year 2015. However, IRS expects that the telephone level of service will decline after the filing season. As a result, the telephone level of service for the entire 2016 fiscal year is expected to be at 47 percent. As we reported in March 2016, IRS’s telephone level of service for the fiscal year has yet to reach the levels it had achieved in earlier years (see figure 1). In addition to answering telephone calls, IRS responds to millions of letters and other correspondence from taxpayers. In 2015, we reported that the percentage of correspondence cases in IRS’s inventory classified as “overage”—cases generally not processed within 45 days of receipt by IRS—has stayed close to 50 percent since fiscal year 2013. Minimizing overage correspondence is important because delayed responses may prompt taxpayers to write again, call, or visit a walk-in site. Moreover, an increasing overage rate could lead to more interest paid to taxpayers who are owed refunds. In March 2016, IRS officials attributed improvements made this filing season, in part, to assistors working overtime. These officials reported that IRS’s office that responds to taxpayer inquiries and handles adjustments had slightly more than 700,000 correspondence cases in inventory at the end of January and expected about 1 million cases in inventory by the end of April. They described IRS’s correspondence inventory as manageable, but steadily increasing. Officials said that, after the filing deadline, assistors will turn their attention to correspondence. IRS also offers online services to millions of taxpayers through its website, including tax forms and interactive tax assistance features. 
According to IRS, the agency wants to expand online service to provide greater convenience to taxpayers, which has the potential to reduce costs in other areas, such as its telephone operations. We have made recommendations to IRS and the Department of the Treasury (Treasury), as well as a matter for congressional consideration, to assist IRS in improving its customer service. Examples include: Telephone and Correspondence. In December 2012, we recommended that IRS define appropriate levels of service for telephones as well as correspondence. IRS neither agreed nor disagreed with this recommendation and, as of October 2015, the agency had not developed these customer service goals. While IRS has taken some steps to modify services provided to taxpayers, a strategy would help determine the resources needed to achieve customer service goals. Recognizing the importance of such a strategy, in December 2014, we recommended that IRS systematically and periodically compare its telephone service to the best in business to identify gaps between actual and desired performance. IRS disagreed with this recommendation, noting that it is difficult to identify comparable organizations. We do not agree with IRS’s position; many organizations run call centers that would provide ample opportunities to benchmark IRS’s performance. Recognizing the need to improve performance responding to taxpayer correspondence, in December 2015, we recommended to Treasury that it include overage rates for handling taxpayer correspondence as a part of Treasury’s performance goals. Treasury neither agreed nor disagreed with this recommendation. Online Services. In April 2013, we recommended that IRS develop a long-term online strategy that should, for example, develop business cases for all new online services. In March 2016, IRS officials reported that IRS’s Future State initiative is intended to provide better service to taxpayers through multiple channels of communication, including online. 
We have not yet assessed IRS’s Future State initiative. However, a long-term comprehensive strategy for online services should help ensure that IRS is maximizing the benefit to taxpayers from this investment and reduce costs in other areas, such as for IRS’s telephone operations. Comprehensive Customer Service Strategy. In fall 2015, Treasury and IRS officials said they had no plans to develop a comprehensive customer service strategy or specific goals for telephone service tied to the best in the business and customer expectations. These officials told us that the agencies’ existing efforts were sufficient. However, we continue to believe that, without such a strategy, Treasury and IRS can neither measure nor effectively communicate to Congress the types and levels of customer service taxpayers should expect and the resources needed to reach those levels. Therefore, in December 2015, we suggested that Congress consider requiring that Treasury work with IRS to develop a comprehensive customer service strategy. In April 2016, IRS officials told us that the agency has established a team to consider our prior recommendations in developing a comprehensive customer service strategy or goals for telephone service. During the filing season many taxpayers learn that their private information has been stolen and that they have been victims of IDT refund fraud. This generally occurs when the taxpayer attempts to file a tax return only to learn that one has already been filed under the taxpayer’s name. For these taxpayers, IRS has taken action to improve customer service related to IDT refund fraud. As we reported in March 2016, between the 2011 and 2015 filing seasons, IRS experienced a 430 percent increase in the number of telephone calls to its Identity Theft Toll-Free Line. As of March 19, 2016, IRS had received more than 1.1 million calls to this line. 
During this time, 77 percent of callers seeking assistance on this telephone line received it compared to 54 percent during the same period last year. Average wait times during the same period have also decreased—taxpayers were waiting an average of 14 minutes to talk to an assistor, a decrease from 27 minutes last year. As we reported in April 2016, billions of dollars have been lost to IDT refund fraud and this crime continues to be an evolving threat. IRS develops estimates of the extent of IDT refund fraud to help direct its efforts to identify and prevent the crime. While its estimates have inherent uncertainty, IRS estimated that it prevented or recovered $22.5 billion in fraudulent IDT refunds in filing season 2014 (see figure 2). However, IRS also estimated, where data were available, that it paid $3.1 billion in fraudulent IDT refunds. Because of the difficulties in knowing the amount of undetectable fraud, the actual amount could differ from these estimates. IRS has taken steps to address IDT refund fraud; however, it remains a persistent and continually changing threat. IRS recognized the challenge of IDT refund fraud in its fiscal year 2014-2017 strategic plan and increased resources dedicated to combating IDT and other types of refund fraud. In fiscal year 2015, IRS reported that it staffed more than 4,000 full-time equivalents and spent about $470 million on all refund fraud and IDT activities. As described above, IRS received an additional $290 million in fiscal year 2016 to improve customer service, IDT identification and prevention, and cybersecurity efforts. The agency plans to use $16.1 million of this funding to help prevent IDT refund fraud, among other things. As we reported in April 2016, the administration requested an additional $90 million and an additional 491 full-time equivalents for fiscal year 2017 to help prevent IDT refund fraud and reduce other improper payments. 
IRS estimates that this $90 million investment in IDT refund fraud and other improper payment prevention would help it protect $612 million in revenue in fiscal year 2017, as well as protect revenue in future years. As we previously reported, IRS also works with third parties, such as tax preparation industry participants, states, and financial institutions, to try to detect and prevent IDT refund fraud. In March 2015, the Commissioner of the IRS convened a Security Summit with industry and states to improve information sharing and authentication. IRS officials said that 40 state departments of revenue and 20 tax industry participants have officially signed a partnership agreement to enact recommendations developed and agreed to by summit participants. IRS plans to invest a portion of the $16.1 million it received in fiscal year 2016 in identity theft prevention and refund fraud mitigation actions developed through the Security Summit. These efforts include developing an Information Sharing and Analysis Center where IRS, states, and industry can share information to combat IDT refund fraud. Even though IRS has prioritized combating IDT refund fraud, fraudsters adapt their schemes to identify weaknesses in IDT defenses, such as gaining access to taxpayers’ tax return transcripts through IRS’s online Get Transcript service. According to IRS officials, with access to tax transcripts, fraudsters can create historically consistent returns that are hard to distinguish from a return filed by a legitimate taxpayer. This can make it more difficult for IRS to identify and detect IDT refund fraud. Because identity thieves are “adaptive adversaries” who are constantly learning and changing their tactics as IRS develops new IDT strategies, IRS will need stronger pre-refund and post-refund strategies to combat this persistent and evolving threat. While there are no simple solutions, our past work has highlighted ways IRS can combat this threat. Improved authentication. 
Improving authentication could help IRS prevent fraud before issuing refunds. In January 2015, we reported that IRS’s authentication tools have limitations and recommended that IRS assess the costs, benefits, and risks of its authentication tools. For example, individuals can obtain an e-file PIN by providing their name, Social Security number, date of birth, address, and filing status for IRS’s e-file PIN application. Identity thieves can easily find this information, allowing them to bypass some, if not all, of IRS’s automatic checks, according to our analysis and interviews with tax software and return preparer associations and companies. After filing an IDT return using an e-file PIN, the fraudulent return would proceed through IRS’s normal return processing. In response to our recommendation, in November 2015, IRS developed guidance for its Identity Assurance Office to assess costs, benefits, and risk. According to IRS officials, this analysis will inform decision-making on authentication-related issues. IRS also noted that the methods of analysis for the authentication tools will vary depending on the different costs and other factors for authenticating taxpayers in different channels, such as online, phone, or in-person. In February 2016, IRS officials told us that the Identity Assurance Office plans to complete a strategic plan for taxpayer authentication across the agency in September 2016. While IRS is taking steps, it will still be vulnerable until it completes and uses the results of its analysis of costs, benefits, and risks to inform decision-making. W-2 Pre-refund Matching. Another pre-refund strategy is earlier matching of employer-reported wage information to taxpayers’ returns before issuing refunds. As we reported in August 2014, thieves committing IDT refund fraud take advantage of IRS’s “look-back” compliance model. 
Under this model, rather than holding refunds until completing all compliance checks, IRS issues refunds after conducting selected reviews, such as verifying identity by matching names and Social Security numbers and filtering for indications of fraud. However, we found that the wage information that employers report on the Form W-2, Wage and Tax Statement (W-2), has generally been unavailable to IRS until after it issues most refunds. According to IRS, pre-refund matching would potentially save a substantial part of the billions of taxpayer dollars currently lost to fraudsters. Increasing electronically-filed (e-file) W-2s. In December 2015, the Consolidated Appropriations Act, 2016 amended the tax code to accelerate W-2 filing deadlines to January 31. This represents important progress. Building on that, other policy changes may also be needed in concert with moving W-2 deadlines. Agency officials and third-party stakeholders told us that these changes include lowering the employee threshold requirement for employers to e-file W-2s. Because of the additional time and resources associated with processing paper W-2s submitted by employers, Social Security Administration officials told us that a change in the e-file threshold would be needed to sufficiently increase the number of e-filed W-2s. Backlogs in paper W-2s could result in IRS receiving W-2 data after the end of the filing season. Therefore, we have suggested that Congress consider providing the Secretary of the Treasury with the regulatory authority to lower the threshold for electronic filing of W-2s from 250 returns annually to between 5 and 10 returns, as appropriate. Assessing the costs and benefits of pre-refund W-2 matching. In August 2014 we reported that the wage information that employers report on Form W-2 is unavailable to IRS until after it issues most refunds. 
Also, if IRS had access to W-2 data earlier, it could match such information to taxpayers’ returns and identify discrepancies before issuing billions of dollars of fraudulent IDT refunds. We recommended that IRS assess the costs and benefits of accelerating W-2 deadlines. In response to our recommendation, IRS provided us with a report in September 2015 discussing (1) adjustments to IRS systems and work processes needed to use accelerated W-2 information, (2) the potential impacts on internal and external stakeholders, and (3) other changes needed to match W-2 data to tax returns prior to issuing refunds, such as delaying refunds until W-2 data are available. IRS’s analysis for this report will help it determine how to best implement pre-refund W-2 matching, given the new January 31 deadline for filing W-2s. Improving feedback on external leads. A post-refund strategy to combat IDT refund fraud involves IRS’s External Leads Program. This program involves financial institutions and other external parties providing information about emerging IDT refund trends and fraudulent returns that have passed through IRS detection systems. In August 2014, we reported that IRS provided limited feedback to external parties on IDT leads they submitted and offered external parties limited general information on IDT refund fraud trends. We recommended that IRS provide actionable feedback to all lead-generating third parties, and IRS neither agreed nor disagreed. However, in response to our recommendation, IRS took a number of steps. First, in November 2015, IRS reported that it had developed a database to track leads submitted by financial institutions and the results of those leads. IRS also stated that it had held two sessions with financial institutions to provide feedback on external leads provided to IRS. Second, in December 2015, IRS officials told us that the agency sent a customer satisfaction survey asking financial institutions for feedback on the external leads process. 
The agency was also considering other ways to provide feedback to financial institutions. Third, in April 2016, IRS officials told us that they plan to analyze preliminary survey results by mid-April 2016. Finally, IRS officials reported that the agency shared information with financial institutions in March 2016 and plans to do so on a quarterly basis. The next information sharing session is scheduled for June 2016. We are following up with IRS on these activities to determine the extent to which IRS has addressed our recommendation. In addition to securing taxpayer information to help prevent IDT refund fraud, there are additional concerns for maintaining security of taxpayer data. As we reported in March 2016, IRS has implemented numerous controls over key financial and tax processing systems; however, it had not always effectively implemented access and other controls, including elements of its information security program. Access controls are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. These controls include identification and authentication, authorization, cryptography, audit and monitoring, and physical security controls, among others. In our most recent review in March 2016, we found that IRS had improved access controls, but some weaknesses remain. Examples include: Identifying and authenticating users—such as through user account-password combinations—provides the basis for establishing accountability and controlling access to a system. IRS established policies for identification and authentication, including requiring multifactor authentication for local and network access accounts, and establishing password complexity and expiration requirements. It also improved identification and authentication controls by, for example, expanding the use of an automated mechanism to centrally manage, apply, and verify password requirements. 
However, weaknesses in identification and authentication controls remained. For example, the agency used easily guessable passwords on servers supporting key systems. Authorization controls limit what actions users are able to perform after being allowed into a system. They should be based on the concept of “least privilege,” granting users the least amount of rights and privileges necessary to perform their duties. While IRS established policies for authorizing access to its systems, we found that it continued to permit excessive access in some cases. For example, users were granted rights and permissions in excess of what they needed to perform their duties, including for an application used to process electronic tax payment information and a database on a human resources system. Cryptography controls protect sensitive data and computer programs by rendering data unintelligible to unauthorized users and protecting the integrity of transmitted or stored data. IRS policies require the use of encryption and it continued to expand its use of encryption to protect sensitive data. However, key systems we reviewed had not been configured to encrypt sensitive user authentication data. IRS also had weaknesses in configuration management controls, which are intended to prevent unauthorized changes to information system resources (e.g., software and hardware), and provide assurance that systems are configured and operating securely. Specifically, while IRS developed policies for managing the configuration of its information technology (IT) systems and improved some configuration management controls, it did not, for example, ensure security patch updates were applied in a timely manner to databases supporting two key systems we reviewed, including a patch that had been available since August 2012. 
To its credit, IRS had established contingency plans for the systems we reviewed, which help ensure that when unexpected events occur, critical operations can continue without interruption or can be promptly resumed, and that information resources are protected. Specifically, IRS had established policies for developing contingency plans for its information systems and for testing those plans, as well as for implementing and enforcing backup procedures. Moreover, the agency had documented and tested contingency plans for its systems and improved continuity of operations controls for several systems. Nevertheless, the control weaknesses we found can be attributed in part to IRS’s inconsistent implementation of elements of its agency-wide information security program. The agency established a comprehensive framework for its program, including assessing risk for its systems, developing system security plans, and providing employees with security awareness and specialized training. However, IRS had not updated key mainframe policies and procedures to address issues such as comprehensively auditing and monitoring access. In addition, the agency had not fully addressed previously identified deficiencies or ensured that its corrective actions were effective. During our most recent review, IRS told us it had addressed 28 of our prior recommendations; however, we determined that 9 of these had not been effectively implemented. We concluded in our November 2015 report that the collective effect of the deficiencies in information security from prior years that continued to exist in fiscal year 2015, along with the new deficiencies we identified, was serious enough to merit the attention of those charged with governance of IRS and therefore represented a significant deficiency in IRS’s internal control over financial reporting systems as of September 30, 2015. 
To assist IRS in fully implementing its agency-wide information security program, we made two new recommendations to more effectively implement security-related policies and plans. In addition, to assist IRS in strengthening security controls over the financial and tax processing systems we reviewed, we made 43 technical recommendations in a separate report with limited distribution to address 26 new weaknesses in access controls and configuration management. Implementing these recommendations—in addition to the 49 outstanding recommendations from previous audits—will help IRS improve its controls for identifying and authenticating users. This, in turn, will allow IRS to limit users’ access to the minimum necessary to perform their job-related functions, protect sensitive data when they are stored or in transit, audit and monitor system activities, and physically secure its IT facilities and resources. In commenting on drafts of our reports presenting the results of our fiscal year 2015 audit, the IRS Commissioner stated that while the agency agreed with our new recommendations, it will review them to ensure that its actions include sustainable fixes that implement appropriate security controls balanced against IT and human capital resource limitations. In conclusion, this year’s tax filing season has generally gone smoothly and IRS has improved customer service. While IRS has some initiatives to review customer service and consider improvements, it still needs to develop a comprehensive strategy for customer service that will meet the needs of taxpayers. This strategy could include setting customer service goals as well as benchmarking and monitoring performance. IRS also needs to strengthen its defenses against IDT refund fraud, informed by an assessment of the costs, benefits, and risks of its various authentication options. Finally, weaknesses in information security can also increase the risk posed by IDT refund fraud. 
While IRS has made progress in implementing information security controls, it needs to continue to address weaknesses in access controls and configuration management and consistently implement all elements of its information security program. The risks to which the IRS and the public are exposed have been illustrated by recent incidents involving public-facing applications, highlighting the importance of securing systems that contain sensitive taxpayer and financial data. Chairman Roskam, Ranking Member Lewis, and Members of the Subcommittee, this concludes my statement. I look forward to answering any questions that you may have at this time. If you have any questions regarding this statement, please contact Jessica K. Lucas-Judy at (202) 512-9110 or LucasJudyJ@gao.gov, James R. McTigue, Jr. at (202) 512-9110 or mctiguej@gao.gov, Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, or Nancy Kingsbury at (202) 512-2928 or kingsburyn@gao.gov. Other key contributors to this statement include Neil A. Pinney, Joanna M. Stamatiades, and Jeffrey Knott (assistant directors); Dawn E. Bidne; Mark Canter; James Cook; Shannon J. Finnegan; Lee McCracken; Justin Palk; J. Daniel Paulk; Erin Saunders Rath; and Daniel Swartz. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
IRS provides service to tens of millions of taxpayers and processes most tax returns during the filing season. It is also a time when legitimate taxpayers may learn that they are a victim of IDT refund fraud, which occurs when a thief files a fraudulent return using a legitimate taxpayer's identity and claims a refund. In 2015, GAO added IDT refund fraud to its high-risk area on the enforcement of tax laws and expanded its government-wide high-risk area on federal information security to include the protection of personally identifiable information. With IRS's reliance on computerized systems, recent data breaches at IRS highlight the vulnerability of sensitive taxpayer information. This statement discusses IRS's efforts to address (1) customer service declines, (2) IDT refund fraud challenges, and (3) information security weaknesses. This statement is based on GAO reports issued between 2012 and 2016 and includes updates of selected data. The Internal Revenue Service (IRS) improved phone service to taxpayers during the 2016 filing season compared to last year. According to IRS, this is due in part to the additional $290 million in funding Congress provided to improve customer service, identity theft (IDT) refund fraud prevention, and cybersecurity efforts. However, IRS expects its performance for the entire fiscal year will not reach the levels of earlier years. In 2012 and 2014, GAO made recommendations for IRS to improve customer service, which it has yet to implement. Consequently, in December 2015, GAO suggested that Congress require the Department of the Treasury (Treasury) to work with IRS to develop a comprehensive customer service strategy that incorporates elements of these prior recommendations. IDT refund fraud poses a significant challenge. Although the full extent of this fraud is unknown, IRS estimates it paid $3.1 billion in IDT fraudulent refunds in filing season 2014, while preventing the processing of $22.5 billion in fraudulent refunds (see figure). 
IRS has taken steps to combat IDT refund fraud, such as increasing resources dedicated to combating the problem. However, as GAO reported in August 2014 and January 2015, additional actions can further assist the agency, including assessing the costs, benefits, and risks of improving methods for authenticating taxpayers. In addition, the Consolidated Appropriations Act, 2016 included a provision accelerating employers' filings of W-2 information to IRS, which would help IRS with pre-refund matching. GAO suggested that Congress provide Treasury with authority to lower the threshold for e-filing W-2s, which would further enhance pre-refund matching. In March 2016, GAO reported that IRS had instituted numerous controls over key financial and tax processing systems; however, it had not always effectively implemented other controls intended to properly restrict access to systems and information, among other security measures. While IRS had improved some of its access controls, weaknesses remained in controls over key systems for identifying and authenticating users, authorizing users' level of rights and privileges, and encrypting sensitive data. These weaknesses were due in part to IRS's inconsistent implementation of its agency-wide security program, including not fully implementing 49 prior GAO recommendations. GAO concluded that these weaknesses collectively constituted a significant deficiency for the purposes of financial reporting for fiscal year 2015. As a result, taxpayer and financial data continue to be exposed to increased risk. GAO previously suggested that Congress consider requiring that Treasury work with IRS to develop a customer service strategy, and providing Treasury with the authority to lower the annual threshold for e-filing W-2s. 
GAO has made prior recommendations to IRS to combat IDT refund fraud, such as assessing the costs, benefits, and risks of taxpayer authentication options, and has made 45 new recommendations to further improve IRS's information security controls and the implementation of its agency-wide information security program.
The credit card industry is composed of issuers, processors, and card networks. Typically banks, thrifts, and credit unions are the organizations that issue credit cards and underwrite the credit that is provided to consumers. The issuance of credit cards is highly concentrated, with the eight largest issuers representing 88 percent of all outstanding consumer credit card balances reported by CardWeb.com, Inc., as of year-end 2005. Processors provide a wide range of services for thousands of issuers, including card production, transaction processing, and production and mailing of billing statements. The level of services provided by processors can differ depending on a specific issuer’s needs. For example, some issuers handle all billing calculations and maintain all related data within the organization and rely on processors solely for printing and mailing billing statements. Other issuers, including many of the smaller issuers, use processors to perform all necessary services related to their credit cards. Finally, credit card networks facilitate payment transactions between cardholders and merchants by transferring information and funds between a merchant and a cardholder. Credit card users fall into two groups—those who use their cards for purchases but consistently pay their outstanding balance in full every month (convenience users) and those who carry a balance on their cards (revolvers). Different data sources report that in 2004 revolvers represented between approximately 46 and 55 percent of cardholders. Various data sources indicate that the proportion of cardholders that pay only the minimum payment or slightly more than the minimum payment at any given time ranged from about 7 to 40 percent between 1999 and 2005, while issuers indicated that a small percentage of their cardholders (from less than 1 percent to as much as 10 percent) make multiple consecutive minimum payments.
According to a survey conducted by the Federal Reserve in 2004, the median balance for U.S. families that carried balances on bank-type credit cards was $2,200, and the average balance was $5,100. Each issuer determines the minimum payments that cardholders must pay each billing cycle to keep an account in good standing. Issuers calculate minimum payment amounts in a variety of ways, including as a set percentage of a cardholder’s outstanding balance or as the sum of all interest and fees to be paid plus some portion of the principal balance, among other methods. For example, some issuers calculate minimum payments as 1 percent of the outstanding balance plus any finance charges and fees (such as late fees or over-the-limit fees) incurred for that billing period. Historically, required minimum payments generally averaged about 5 percent of the outstanding balance, but these amounts declined to about 2 percent in the last decade. The decrease in minimum payment rates lowered a cardholder’s monthly payment obligation, but also further delayed a cardholder’s repayment of principal. In some cases, the amount required for the minimum payment was not sufficient to cover all incurred interest or other transaction charges, which increased the outstanding balance. Concerns about such increases—known as negative amortization—as well as other practices prompted four federal banking regulators to issue guidance in January 2003 that stated that issuers should require minimum repayment amounts so that cardholders’ current balances would be paid off—amortize—over a reasonable period of time. The guidance was designed to discourage minimum payment formulas that result in prolonged negative amortization of accounts, a practice viewed by regulators as raising safety and soundness concerns.
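Negative amortization reduces to an arithmetic check: the balance grows whenever the required minimum fails to cover the interest and fees accrued that billing period. The sketch below is illustrative only, using hypothetical figures (a $1,000 balance, a 17 percent rate, a $39 late fee) and the two formula types described above rather than any issuer's actual terms.

```python
def minimum_payment(balance, monthly_rate, fees, principal_pct=0.01):
    """The '1 percent of balance plus finance charges and fees' formula
    type described in the report (parameters are illustrative)."""
    return balance * principal_pct + balance * monthly_rate + fees

def negatively_amortizes(balance, monthly_rate, fees, payment):
    """True if the payment fails to cover the interest and fees accrued,
    so the balance grows even though the minimum was paid."""
    return payment < balance * monthly_rate + fees

r = 0.17 / 12                    # monthly rate at a 17% APR
flat_pct_payment = 0.02 * 1000   # a pure 2%-of-balance minimum: $20
print(negatively_amortizes(1000, r, 39.0, flat_pct_payment))   # True: $20 < $14.17 + $39
print(negatively_amortizes(1000, r, 39.0, minimum_payment(1000, r, 39.0)))  # False
```

The second formula type can never negatively amortize because it includes the period's interest and fees by construction, which is the behavior the January 2003 guidance sought to encourage.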
However, it is possible that a bank could satisfy a regulator’s expectations by requiring minimum payment amounts that represent less than the 5 percent of outstanding principal that previously was customary in the industry. According to a representative of the Office of the Comptroller of the Currency, by year-end 2005, nearly all the issuers that it oversees (which includes the largest issuers in the United States) had controls in place to address concerns regarding negative amortization of credit card accounts. As part of the Bankruptcy Act, issuers will be required to provide cardholders with information about the consequences of making minimum payments on outstanding credit card balances. More specifically, the act requires creditors to print on the billing statements of revolving credit products (of which credit cards are a form) a generic disclosure that “making only the minimum payment will increase the interest you pay and the time it takes to repay your balance.” In addition to the generic disclosure, the law requires creditors to choose from two options for providing additional information to cardholders: (1) providing a toll-free telephone number that cardholders could use to obtain the actual number of months that it would take to repay their outstanding balance if they made only minimum payments or (2) providing an example of the length of time required to pay off a sample balance at an interest rate of 17 percent and a toll-free telephone number cardholders could call to get an estimate of the time required to repay their balances. These requirements are intended to increase consumer awareness of the consequences of these types of payments. The Federal Reserve is currently establishing regulations to implement the new law, which it expects to complete in 2007. The minimum payment disclosure requirements will take effect 12 months after the final regulations are published. 
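The act's sample figure can be reproduced under one common reading of the example: a constant payment of $20 (2 percent of the initial $1,000 balance, held fixed rather than recalculated each month). The following Python sketch is illustrative and ignores fees, payment floors, and rate changes.

```python
import math

def months_at_fixed_payment(balance, apr, payment):
    """Months to repay `balance` at annual rate `apr`, compounded
    monthly, with a constant monthly `payment` (closed-form annuity)."""
    r = apr / 12
    if payment <= balance * r:
        return None  # the payment never covers the interest accruing
    return math.ceil(math.log(payment / (payment - r * balance)) / math.log(1 + r))

# The statutory example: $1,000 balance, 17% interest, constant $20 payment
print(months_at_fixed_payment(1000, 0.17, 20))  # 88
```

A payment at or below the first month's interest accrual (about $14.17 here) never retires the balance, so the function returns None rather than a repayment period.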
While the Bankruptcy Act mandated that generic disclosures be made to consumers on their billing statements, some lawmakers had sought to require additional and more customized disclosures that would have provided each cardholder with customized information about the costs and time involved in paying off credit card balances resulting from habitually making only minimum payments. Amendments that would have mandated these customized disclosures failed to pass prior to the passage of the Bankruptcy Act. While the details vary, five bills were pending in Congress as of March 2006 that would mandate that issuers provide customized disclosures to consumers. Table 1 illustrates the differences between the disclosure options that issuers will be required to implement as a result of the Bankruptcy Act and an example of the type of customized disclosures that have been envisioned as part of various legislative proposals.

Bankruptcy Act disclosure, option 1 (toll-free number providing the actual repayment period): “Making only the minimum payment will increase the interest you pay and the time it takes to repay your balance.” “For more information, call this toll-free number: ____________.” The information required to be provided is the actual number of months that it will take the cardholder to repay his or her outstanding balance.

Bankruptcy Act disclosure, option 2 (generic example plus a toll-free estimate): “Minimum Payment Warning: Making only the minimum payment will increase the interest you pay and the time it takes to repay your balance.” “For example, making only the typical 2% minimum monthly payment on a balance of $1,000 at an interest rate of 17% would take 88 months to repay the balance in full.” “For an estimate of the time it would take to repay your balance, making only the minimum payments, call this toll-free number: _______________.”

Example of a customized disclosure envisioned in legislative proposals: “Minimum Payment Warning: Making only the minimum payment will increase the amount of interest paid and the length of time to repay the outstanding balance.” “For example, your balance of will take months to pay off… at a total cost of in principal and in interest if only the minimum monthly payments were made.” “To pay off your balance in 3 years, you would need to pay monthly.”

An attempt to mandate customized disclosures on the consequences of making minimum payments also was made at the state level. In 2001, California enacted a law that required issuers to provide the state’s cardholders with more detailed information about making minimum payments. Issuers were required to provide one of two disclosure options. Both options required the issuer to provide a minimum payment warning. In addition to the minimum payment warning, one option required issuers to print an example of the length of time required to pay off a sample balance amount using a sample interest rate. Further, issuers were required to provide cardholders, via a toll-free telephone number, with information about both the length of time required and total cost of paying an outstanding balance if only minimum payments were made. The second option, which was mandated if a cardholder did not pay more than the minimum payment for 6 consecutive months, required issuers to print on the billing statement individualized information indicating an estimate of the number of years and months and the approximate total cost to pay off the total balance due, based on the terms of the credit agreement, if the holder were to make only the minimum payment. The disclosure also included a toll-free telephone number to a credit counseling referral service. In December 2002, the U.S. District Court for the Eastern District of California held that the state statute was preempted by federal law and determined that the law was inapplicable to all federally chartered banks, savings associations, and credit unions.
According to a staff attorney for the California Attorney General’s office involved in the case, the judge effectively invalidated the law for all issuers because federally chartered issuers held more than 95 percent of credit card debt in the state at the time, thereby compelling the state for fairness reasons to relieve all issuers from compliance with the law. According to credit card issuers and others we interviewed, providing customized estimates to cardholders would be feasible. However, the precision of these estimates would depend upon the assumptions incorporated in the calculations needed to produce this information, which can vary based on decisions about how various factors are included. Issuers also said providing such information could expose them to legal liability and suggested a variety of regulatory actions to address these concerns. Although uncertainty about format and content prevented issuers and processors from providing precise cost estimates, they told us the largest individual cost components for large and small issuers appeared to be ongoing postage and call center operations, as well as one-time programming costs. Total projected costs to implement customized disclosures varied widely. However, issuers already are going to bear some of these costs to implement Bankruptcy Act disclosures; and, according to an industry analyst, the costs appear very small when compared with large issuers’ net income. Issuers and others familiar with the proposed minimum payment disclosure indicated to us that providing cardholders with estimates of various consequences of making minimum payments would be possible. Representatives for all six large credit card issuers whom we interviewed acknowledged that their computer systems could be programmed to use individual cardholder account information to calculate estimates of the information envisioned to be disclosed. 
These calculations would include the amount of time required to pay off a cardholder’s specific balance if only the minimum payment were made, the total amount of interest incurred over that time, and the amount a cardholder would be required to pay each billing cycle to pay off an outstanding balance over a given period. Some credit card issuers and processors already had successfully developed the capability to produce tailored estimates for their cardholders as a result of customized minimum payment disclosures that had been required in California in 2002. One of these issuers developed this capability internally, while another used a third-party processor that developed this functionality for all its issuer clients to use. Besides noting that they could produce customized disclosures, some issuers said they would prefer to provide customized rather than generic information to cardholders. For example, representatives for one large issuer told us they would prefer the Bankruptcy Act option that would require them to produce actual repayment times for cardholders, obtainable by calling a toll-free telephone number provided in billing statements. In a comment letter responding to the Federal Reserve’s advance notice of proposed rulemaking, a representative for another large issuer said that existing disclosure provisions should be implemented in such a way as to encourage issuers to provide customized information to cardholders. These two large issuers said they supported providing customized information to their cardholders because they believe cardholders would find it more relevant than generic information. A representative for one of these issuers also said the issuer would benefit because providing customized information over the telephone would require the shortest statement to be printed on a billing statement of the two options under the Bankruptcy Act and could be printed anywhere on a billing statement, which could be easier to implement. 
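The three calculations listed above (the repayment period under minimum payments, the total interest incurred, and the level payment needed to retire a balance over a given horizon) can be sketched in a few lines. The minimum-payment formula here, 2 percent of the post-interest balance with a $15 floor and a constant rate, is an assumption for illustration, not any issuer's actual terms.

```python
def simulate_minimum_payments(balance, apr, pct=0.02, floor=15.0):
    """Months and total interest when only the minimum is paid.

    Illustrative terms: the minimum is pct of the post-interest balance
    with a fixed dollar floor, the APR never changes, and no new
    purchases or fees are added -- the same kinds of simplifying
    assumptions the report notes real customized estimates require.
    """
    r = apr / 12
    months, total_interest = 0, 0.0
    while balance > 0:
        interest = balance * r
        total_interest += interest
        balance += interest
        payment = min(balance, max(balance * pct, floor))
        balance -= payment
        months += 1
    return months, total_interest

def level_payment(balance, apr, months):
    """Fixed monthly payment that retires `balance` in `months` months."""
    r = apr / 12
    return balance * r / (1 - (1 + r) ** -months)

months, interest = simulate_minimum_payments(1000, 0.17)
print(f"Minimum payments: {months} months, ${interest:,.2f} total interest")
print(f"To be debt-free in 3 years: ${level_payment(1000, 0.17, 36):.2f}/month")
```

Under these assumptions, a $1,000 balance at 17 percent takes more than a decade of minimum payments to retire, which is the kind of result a customized disclosure would surface for a cardholder.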
Although generally having fewer resources than larger issuers, small banks that issue credit cards also could likely implement customized disclosures, but such a requirement could represent a larger burden for those that do not use third-party processors. A representative of a trade association representing community banks told us customized estimates would be feasible for small institutions because the work to implement such a requirement would be done largely by the third-party processors already used to manage cardholder data and process billing statements. According to staff of the National Credit Union Administration and the Federal Deposit Insurance Corporation who were familiar with the operations of smaller financial institutions offering credit cards, most small issuers use third-party processors to assist with card operations because the small issuers lack the resources to provide such a product themselves. For example, small issuers typically assign only one or two people to manage their credit card programs, a staffing level that, according to representatives of a third-party processor, would not be adequate to handle the technical, legal, and compliance issues involved in providing the proposed customized disclosure. However, small institutions benefit from economies of scale by working through third-party processors. For example, a representative for a third-party processor with thousands of small-bank clients told us that the processor requires all small institutions to use the same billing statement format or template. Therefore, changes made by the third-party processor to the billing statement template would apply to all clients using that template. In this case, the processor’s costs to modify the template would be spread across its client base.
A representative from a federal banking regulator told us that if issuers discontinue a credit card program upon the implementation of new disclosure requirements, it would likely be because the program had been marginally profitable or unprofitable even before the requirements took effect. Issuers and others told us the calculations needed to produce customized information require the incorporation of certain assumptions, and their precision can vary depending on various choices that can be made as part of these calculations. The calculations needed to produce customized information require assumptions about future cardholder behavior or changes in account terms. For example, an estimate of the time required to pay off a cardholder’s current balance would assume that the cardholder does not make more purchases with the card. Any subsequent increase to a cardholder’s outstanding balance would lengthen the repayment period and also likely increase the total amount of interest to be paid for a cardholder making minimum payments. Additionally, the estimates produced would assume that a consumer continuously paid exactly the minimum payment and that payments would be made by the due date. Other assumptions would address potential changes in account terms. For example, calculations would assume that the interest rate applied to the cardholder’s balance remained constant. However, changes in future interest rates are likely, and such changes could affect the time required to fully repay a given balance. Similarly, the estimates produced would assume that the formulas issuers use to allocate payments to the various balances subject to different interest rates, among other things, also would stay the same. In addition to these assumptions, the choices that lawmakers, regulators or issuers make about calculation methods also affect the precision of the customized estimates. 
These choices include how issuers compute minimum payment amounts or finance charges, among other things. For example: Minimum payment formulas vary among issuers and each issuer could have as many as six different methods for determining the minimum payment on a single account. Some card issuers calculate minimum payment amounts as a set percentage of a cardholder’s outstanding balance, while others include all interest and fees to be paid as well as some amount of the principal balance. Further, issuers differ in their absolute minimum payment amounts (e.g., $10, $15, $20). Estimates based on each firm’s actual formula for calculating minimum payments therefore would differ from estimates calculated using a standard formula for all issuers. Many issuers have credit cards that charge different rates for different types of transactions, such as purchases, cash advances, or balance transfers from other credit cards. Estimates that require issuers to incorporate the various interest rates that apply to their cardholders’ outstanding balances would differ from those based on formulas that assume a single interest rate, including ones using a composite rate. As a result, if lawmakers or regulators mandated use of a standardized calculation to prepare customized minimum payment estimates, cardholders could receive less precise estimates. In contrast, requiring issuers to calculate estimates using actual interest rates—including cases in which multiple interest rates apply to different portions of a total balance—and include other information that specifically reflects each issuer’s own terms and practices likely would lead to more precise estimates. Because some issuers saw the assumptions that must be incorporated into the calculations for customized minimum payment disclosures as unrealistic, they and others questioned whether such disclosures provided useful information. 
For example, some issuer representatives noted that the customized disclosures presented estimates that would be accurate only as long as cardholders did not make further purchases and the interest rate on the card remained constant. However, issuers said that such situations were not representative of most cardholders’ behavior or today’s credit environment. Some issuers mentioned that, for these reasons, the Bankruptcy Act disclosure options were a good compromise between Congress and the industry. As a result, issuers and others stated that these disclosures deserve a chance to work before further, more detailed disclosures are required. According to some issuers and a third-party processor, providing customized estimates to cardholders could expose card issuers to increased legal risk. Because of the imprecise nature of customized minimum payment estimates, some issuers expressed concerns about facing lawsuits. For example, some issuer representatives told us that issuers were concerned about being held responsible for adverse consequences experienced by cardholders who misinterpreted the estimates, which incorporate certain assumptions and calculation choices that affect their precision. Issuers and others said litigation (e.g., class action lawsuits) could arise out of such misinterpretations and subject issuers to significant legal costs, even if they took reasonable actions under the guidance to provide cardholders with customized information. A representative of a trade association for community banks told us the threat of legal liability would be more onerous for small issuers. The extent to which requiring customized disclosures would increase issuers’ legal risk is not certain because cardholders’ ability to sue can vary. For example, under TILA provisions, class action lawsuits are not available to cardholders with grievances under the minimum payment disclosure requirement added by the Bankruptcy Act. 
However, TILA provides cardholders with a private right of action against issuers, which could make issuers that failed to comply with the minimum payment disclosure requirements liable for actual losses incurred by cardholders. In addition, an Office of the Comptroller of the Currency official told us that the possibility exists that a cardholder may have a private right of action against an issuer for erroneous disclosures under a state’s consumer protection law. Although various “safe harbor” provisions in TILA already protect issuers from unintentional errors resulting from good-faith efforts to comply with rules and regulations, organizations we interviewed suggested a variety of additional legal protections if disclosure requirements were to change. For example, a representative for an issuer suggested that issuers could use calculation methods previously deemed acceptable to the Federal Reserve. Issuers that performed calculations according to the approved methods would be considered in compliance with the disclosure requirements. Also, issuer representatives and a representative of a consumer interest group said that the estimates that issuers calculate could be subject to a tolerance test, which would give issuers a margin of error (e.g., a few months) within which the estimates could be deemed accurate. Another legal protection could involve determining whether issuers followed required steps—according to defined assumptions and calculation methodologies—to calculate the customized information. For example, regulation could establish parameters for the calculations, such as how to treat accounts with multiple interest rates.
However, a representative of a consumer interest group and a credit card processor cautioned that while a higher level of standardization of the calculations could help protect issuers from lawsuits because expectations would be clearer, standardized calculations might not be sufficient to reflect variation in the terms and conditions of various credit card products. Although not certain about the form and content of a customized minimum payment disclosure, issuer and processor representatives were able to identify the implementation components that likely would be the most costly, including postage, computer programming, and call center operations. However, the estimates of the total implementation costs varied widely. Further, issuers already would incur some portion of the costs to provide customized disclosures in providing the Bankruptcy Act disclosures; thus, not all of the cost estimates we obtained represent the cost of customized disclosures exclusively. Credit card issuers and processors—the entities with the best data about the cost to implement customized disclosures—were unable to provide precise cost estimates for a variety of reasons. First, factors affecting actual paper and postage costs cannot be determined until a law requiring a customized disclosure is enacted and implementing regulations issued. Such factors could include how customized disclosures would be formatted (e.g., font size, spacing) and where such disclosures would be required to be placed in the billing statement (e.g., front page, leaflet). Second, decisions about calculation methods and the treatment of variables could affect estimates of computer programming costs. For example, representatives for two large issuers told us that if issuers had to make complex calculations, actual programming costs could be as much as four to five times higher than if simpler calculations were required.
Third, some issuers were uncertain of the costs that would be incurred outside their own organizations, for example, by third-party processors. Accordingly, some issuers generated estimates based on previous experiences (such as implementing similar requirements) or by making assumptions about implementation requirements, such as the required location and length of a disclosure. Two large issuers and two third-party processors provided us with estimates of postage costs, which they said would be potentially the highest cost item to implement a customized disclosure. Postage cost increases could occur if adding the disclosure also added an additional page to the monthly statement. This added weight could move the statement into a higher postage category. Adding a page to billing statements could increase postage costs because, as one large issuer explained, issuers generally manage the amount of information they include in their mailings to meet a 1-ounce limit, which, according to a representative of a third-party processor, costs on average $0.30 per statement to mail. The incremental cost of moving from a 1-ounce bulk postage rate to a 2-ounce rate would be on average about $0.23, or almost an 80 percent increase, according to representatives for two third-party processors. However, requiring that additional information be included in a billing statement would not necessarily push all billing statements into a higher postage category because issuers add and remove information (such as advertising) from statements to meet weight limits, according to representatives for some issuers. According to representatives of a third-party processor, postage rates for small issuers that mail statements through third-party processors would be roughly the same as for large issuers. A representative of another third-party processor told us small issuers get the same bulk postage rates as large issuers because their mailings are combined.
Postage rates decline as more statements make up a mailing. However, postage costs for small issuers that mail statements at retail rates would be higher. We were unable to determine the proportion of small issuers that use retail postage rates. According to issuers and processors, additional postage arising from implementing customized minimum payment disclosures for a large issuer could be as high as about $14 million annually. We obtained postage cost estimates from representatives for two large issuers that mail up to 50 million statements each month. According to one of these issuers, annual postage costs could increase up to about $5 million if all cardholders were required to receive the customized information envisioned in a proposed disclosure on the first page of every billing statement. We estimated this to be an increase of about 5 percent to annual postage costs for mailing billing statements. Representatives for the other issuer told us their postage costs could increase by as much as about $14 million annually to implement customized disclosures on the first page of billing statements. We estimated this to represent about an 8 percent increase to the issuer’s annual postage costs to mail billing statements. The representatives estimated these disclosures to be twice the length of a generic disclosure, thereby forcing more than 20 percent of statements to require an additional page. Differences in these estimates are attributable to the number of billing statements that the issuers estimated would require additional postage, which differs across issuers depending on the format of their statements and the assumptions they made about formatting for the proposed disclosure. Although estimated postage cost increases appear to constitute the largest component of projected implementation costs, issuers usually incur much higher postage costs for other purposes. 
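The incremental-postage arithmetic works out as follows. The statement volume and extra-page share below are hypothetical round numbers chosen only to show how an annual increase on the order of the figures issuers cited could arise; they are not figures reported by any issuer.

```python
# Hypothetical inputs (not from the report): a large issuer's volume and
# the share of statements that an added disclosure pushes past 1 ounce.
statements_per_month = 25_000_000
share_pushed_to_two_ounces = 0.20

avg_one_ounce_rate = 0.30            # average bulk rate per statement
incremental_two_ounce_cost = 0.23    # added cost per 2-ounce statement

annual_increase = (statements_per_month * 12
                   * share_pushed_to_two_ounces * incremental_two_ounce_cost)
pct_increase = incremental_two_ounce_cost / avg_one_ounce_rate
print(f"${annual_increase:,.0f} added postage per year "
      f"(~{pct_increase:.0%} more per affected statement)")
```

With these inputs the added postage comes to about $13.8 million per year, and the roughly 77 percent per-statement increase matches the "almost an 80 percent increase" the processors described.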
For example, a credit card industry analyst told us postage costs for mailing statements are insignificant when compared with the expense per issuer of mailing about 4-5 billion solicitations each year, a typical amount for the largest card issuers. In contrast—based on our analysis of CardWeb.com, Inc., data—we estimate that even the largest issuers mail less than 1 billion statements per year. Also, postage costs could decline as the number of cardholders receiving billing statements in electronic formats increases. Representatives for some issuers told us that the proportion of cardholders receiving statements in an electronic format is small, but growing. According to representatives of one large issuer, between 2002 and 2004, electronic statement use among their cardholders increased about 85 percent, and 6 to 12 percent of cardholders received statements electronically. Representatives for a smaller issuer told us that about 10 percent of its cardholders used the issuer’s Web site to get information about their card accounts. According to issuers and others, expenses related to programming computer systems to develop tailored estimates would be another major cost of implementing customized disclosures. Programming costs are one-time costs for designing, testing, and implementing computer code. Once in place, the new or revised programs would use cardholder account data to provide estimates of the repayment period, total interest costs, and monthly payment amount to pay off a balance if only minimum payments were to be made. Issuers’ programming costs would arise from the time their own information technology staff spend making systems modifications or from the increased expenses from the use of third-party processors, which maintain information systems that store issuers’ cardholder account data as well as develop, print, and mail billing statements.
Estimates for programming generally were $1 million or less and depended on the complexity of the required calculations and issuers’ information systems. For example, representatives of a large issuer and a card processor representing over one thousand large and small issuers told us the up-front cost to develop and program computer code for a customized disclosure would be about $500,000 but could be as much as $1 million for more complex calculations. When providing estimates, issuers and third-party processors were asked to assume that calculations would reflect issuers’ actual account terms and practices at the time the information was produced, including interest rates, account balances, and methods for calculating finance charges and minimum payments. However, representatives for the same large issuer told us programming costs could be as much as $5 million for the most complex calculations—for example, a calculation that would require issuers to factor in such situations as temporary zero percent promotional interest rates. We obtained estimates from others for programming under the Bankruptcy Act provisions, which only require one calculation to estimate a cardholder’s repayment period. These estimates were generally less than $500,000. For example, one lender stated in a comment letter to the Federal Reserve that such programming would cost about $412,500. Estimated programming costs for smaller issuers that use third-party processors were lower than for large issuers. We obtained estimates for programming the customized provisions under the Bankruptcy Act from a processor and a medium-sized issuer. A representative of the processor estimated it would cost about $300,000 to modify information systems to accommodate the Bankruptcy Act disclosure option requiring issuers to provide an estimate of the repayment time.
According to the representative, this cost would be spread across the processor’s small- and medium-size issuer client base of about 5,000 issuers. In addition, representatives for a medium-sized issuer told us it would cost the issuer $5,000 to $10,000 to have its third-party processor modify its information systems to accommodate customized provisions contained in the Bankruptcy Act. They further noted that it would cost about $150 per hour to hire a processor to program the other two messages envisioned for customized disclosures. Costs for programming would vary depending on the level of precision required and the complexity of an issuer’s account practices. Some issuers have more complex pricing schemes that could increase the programming required to develop estimates that more closely reflect a cardholder’s situation. For example, as noted above, many large issuers engage in transaction-based pricing, in which different rates of interest apply to balances originating from different transactions (such as purchases, cash advances, or balance transfers). Programming a calculation that accounts for a variety of balances at different interest rates, while more precise, is more complex than a calculation that uses one balance and one interest rate. Adding further to the complexity, with multiple balances and interest rates, decisions would need to be made about the order in which to allocate cardholder payments to the outstanding balances. A smaller portion of the programming estimates we received was for reformatting billing statements to accommodate the text of the disclosure. Issuers use various formats or templates to present cardholders with information about their accounts, including transactions, payment due dates, and rewards program information. 
Issuers may also use different templates for different card programs, such as cards with rewards (e.g., cash-back or travel benefits) or private-label cards associated with major retailers. According to representatives of third-party processors serving large and small issuers, issuers use an average of three statement templates, with the smallest issuers using just one and the largest using as many as 100. One representative estimated one-time costs of about $13,500 per issuer, assuming three templates required revision. Programming costs for small issuers would generally be the same on a per-unit (statement template) basis. However, a representative of another third-party processor told us reformatting costs would be substantially lower for small issuers because the processor requires all small issuers to use the same statement template, thereby spreading reformatting costs across the thousands of institutions using that statement. Issuers estimated that call-center costs would increase following the implementation of customized disclosures because the centers would receive more and longer telephone calls from customers. One large issuer told us its costs could increase by about $3 million in the first few months following implementation of customized disclosures. However, this issuer said these calls likely would taper off after cardholders became familiar with the customized information. In addition, an issuer in a comment letter to the Federal Reserve noted that the Bankruptcy Act requirements would increase call volume and duration, which could increase its expense for servicing customer calls by about $900,000 monthly. As part of preparing to implement the California disclosure requirements, six large issuers estimated incurring expenses averaging about $680,000 monthly to operate a telephone bank. 
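The estimates a customized disclosure system would need to produce (the repayment period and total interest under minimum payments, and a level monthly payment for a fixed payoff term), along with the payment-allocation ordering discussed above, can be sketched in a few lines of code. This is a minimal illustration under stated assumptions: the minimum payment formula (2 percent of the balance with a $20 floor) and the highest-rate-first allocation order are hypothetical examples, not any particular issuer's actual terms, and real issuer systems would use account-specific finance-charge methods.

```python
def minimum_payment_schedule(balance, apr, min_pct=0.02, min_floor=20.0):
    """Months to pay off and total interest if only minimum payments
    are made. The minimum payment formula here (2 percent of the
    balance, with a $20 floor) is an assumption for illustration."""
    monthly_rate = apr / 12.0
    months, total_interest = 0, 0.0
    while True:
        interest = balance * monthly_rate
        payment = max(min_pct * (balance + interest), min_floor)
        months += 1
        total_interest += interest
        if payment >= balance + interest:  # final payment clears the balance
            return months, total_interest
        if payment <= interest:            # the balance would grow forever
            raise ValueError("minimum payments never amortize this balance")
        balance = balance + interest - payment


def payment_for_term(balance, apr, months):
    """Level monthly payment that retires the balance in a fixed number
    of months (standard annuity formula)."""
    r = apr / 12.0
    return balance * r / (1.0 - (1.0 + r) ** -months)


def allocate_payment(payment, balances_by_rate):
    """Apply one payment across balances carrying different APRs,
    highest rate first. This ordering is one possible choice; as noted
    above, issuers would have to decide the allocation order."""
    remaining = payment
    result = {}
    for rate, bal in sorted(balances_by_rate.items(), reverse=True):
        applied = min(remaining, bal)
        result[rate] = round(bal - applied, 2)
        remaining -= applied
    return result
```

For example, for a $5,000 balance at 18 percent APR, payment_for_term(5000, 0.18, 36) returns roughly $181 per month, the kind of 3-year payoff figure a customized disclosure might display, while minimum_payment_schedule for the same account shows a payoff period measured in decades.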
Perhaps reflecting the uncertainties and range of assumptions noted above, the estimates that we obtained of total first-year costs ranged from $9 million to $57 million for large issuers. For example, representatives of one issuer estimated that postage, programming, and customer service costs could total approximately $9 million, but also noted that the issuer could incur additional costs, such as training staff and retaining legal services to keep abreast of regulatory changes and court decisions that could affect compliance. Not all issuers from whom we obtained data were able to provide total estimates based on individual implementation cost components. Instead, these issuers provided us with only aggregated estimates based on their experiences in implementing California’s minimum payment disclosure requirements, and these estimates generally were higher than those provided by another issuer and two processors that estimated individual component costs. For example, representatives of one large issuer estimated the company would have spent a total of $57 million in the first year following implementation had it implemented the California requirements, which roughly resembled portions of the customized disclosure we studied. The issuer separated this estimate into one-time, start-up costs and ongoing costs. The one-time costs would be about $30 million, which would include programming computer systems and modifications to customer service systems, among other things. Ongoing costs would be about $27 million annually, including postage and handling a higher number of calls from cardholders, among other things. In documents filed with a federal district court, three large issuers estimated it would cost them about $41 million each in the first year to implement California’s customized disclosure requirements. Of this amount, about $18 million would pay for one-time, start-up costs, with the remaining $23 million for ongoing costs. 
As noted above, impending minimum payment disclosure requirements under the Bankruptcy Act could soon require issuers to make programming and billing statement changes that could consequently reduce estimated costs to implement any additional customized disclosures. For example, one Bankruptcy Act option would require issuers to produce actual information about a cardholder’s repayment period if only minimum payments were made and make this information available to cardholders over the telephone. Programming expenses made up front to meet that requirement could reduce the programming costs for implementing customized disclosures. Also, estimated increases to postage costs associated with a new customized disclosure requirement may be overstated in that they do not account for increased postage costs issuers will already have incurred for implementing the Bankruptcy Act requirements. Because the cost estimates we obtained were not comprehensive, it is not possible to ascertain how additional customized minimum payment disclosure requirements would affect issuers’ overall profitability. However, the costs of implementing customized disclosures do not appear to be significant in terms of large issuers’ net income. According to a credit card industry analyst, estimates for implementing the customized minimum payment disclosures are insignificant to issuers and easily would be absorbed. The analyst noted that estimates for start-up and ongoing costs in the first year would be so small that they would be the equivalent of a rounding error in terms of net income. Comparing these estimated implementation costs with issuers’ operating expenses also indicated that such costs might not significantly increase their operating expenses. 
To determine how estimates of the costs to implement customized disclosures—which ranged from $9 million to $57 million—would affect the operating expense of the issuers that provided us with these estimates, we identified operating expenses and amounts in outstanding credit card loans from financial reports and data the issuers provided to us. By adding the estimates of total implementation costs to the amount each issuer reported in operating expenses, we found that the ratio of their operating expenses to their outstanding credit card loans—a metric commonly used by industry analysts—would stay the same or increase slightly. For example, we found that the issuer that provided us with a $9 million estimate for total implementation costs for the first year would experience no change to its operating expense ratio. The issuer that provided us with a $57 million estimate would experience an increase in its ratio from approximately 3.3 percent to about 3.5 percent. According to CardWeb.com, Inc., monthly operating expense ratios for the 150 issuers that it monitors generally averaged between 4.2 and approximately 6.0 percent from January 2001 to December 2005.

Most of the revolver cardholders we interviewed—those who carry a balance on their credit cards—preferred to receive a customized disclosure on minimum payment consequences. Although some convenience users also preferred a customized disclosure, most saw generic disclosures or no disclosure at all as sufficient for their needs. Those preferring the customized disclosure did so because it would be cardholder-specific, change each month based on account transactions, and provide more information than the two Bankruptcy Act options. However, opinions as to how the customized disclosure would influence cardholder behavior varied, with some believing that such a disclosure would have a great impact and others believing that it would have little impact. 
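The operating expense ratio comparison described above can be reproduced with simple arithmetic. The dollar amounts below are hypothetical, chosen only so that the before-and-after ratios round to the approximately 3.3 and 3.5 percent figures cited; the report does not disclose the issuer's actual operating expenses or outstanding loans.

```python
# Hypothetical figures (not the issuer's actual data), sized so the
# ratios round to roughly 3.3 and 3.5 percent as in the example above.
outstanding_card_loans = 28.5e9   # outstanding credit card loans, dollars
operating_expenses = 940e6        # annual operating expenses, dollars
implementation_cost = 57e6        # estimated first-year disclosure cost

base_ratio = operating_expenses / outstanding_card_loans
new_ratio = (operating_expenses + implementation_cost) / outstanding_card_loans

print(f"before: {base_ratio:.1%}, after: {new_ratio:.1%}")
# prints "before: 3.3%, after: 3.5%"
```

Because even the high-end $57 million estimate is small relative to tens of billions of dollars in outstanding loans, it shifts the ratio by only about 0.2 percentage points.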
To assess the usefulness of providing a customized disclosure to cardholders, we interviewed 112 adult cardholders and asked for their preferences among three disclosure statements—the two generic disclosure options from the Bankruptcy Act and an example of a proposed customized disclosure—or no disclosure at all. We categorized the cardholders into two groups, 38 convenience users and 74 revolvers, based on their responses to questions about their credit card payment behaviors. The cardholders recruited for the interviews did not form a random, statistically representative sample of the U.S. population. As described in table 1 (in the background section), the two generic disclosure options shown to cardholders include one that contains a minimum payment warning statement only, and another that contains a minimum payment warning statement and an example of the amount of time needed to pay off a sample balance. Table 1 also includes an example of a customized disclosure, similar to the one that cardholders were shown. Revolvers generally preferred to receive a customized disclosure about the consequences of making minimum payments. Specifically, more than half of the revolvers (42 out of 74) chose to receive the customized disclosure over the two Bankruptcy Act disclosure options or no disclosure at all (see fig. 1). As figure 1 shows, the revolvers—including some for whom the customized disclosure was not the preferred option—also generally found the information contained in the customized disclosure to be useful. Sixty-eight percent (50 out of 74) of the revolvers found the customized disclosure either extremely or very useful, while 23 percent (17 out of 74) found the customized disclosure slightly useful or not useful at all. Although more convenience users preferred the customized disclosure to either of the generic ones, the majority (60 percent) were satisfied with receiving a generic disclosure or no disclosure at all. 
The number of convenience users preferring the customized disclosure (15 out of 38) was equal to the total number who preferred the generic disclosures. As shown in figure 2, while 37 percent (14 out of 38) of convenience users found the customized disclosures extremely or very useful, 55 percent (21 out of 38) found it slightly useful or not useful at all. The reasons given by both revolvers and convenience users for preferring the customized disclosure generally were similar. Many of the cardholders who preferred the customized disclosure or thought that it was more useful than a generic disclosure said they did so because the information provided would be specific to their account and change each month, based on their transactions. For example, if issuers were providing a customized disclosure, the information on the monthly billing statements would take into account any changes in customers’ accounts that occurred since the previous billing cycle, including new purchases, payments received, changes in interest rates, and any fees that might have been assessed. The customized disclosure, therefore, would provide cardholders with a new “snapshot” of their account each month, as of the date the bill was calculated. Many of the cardholders noted that, even if the information was outdated by the time they received it (e.g., if they had made additional purchases), just having an idea of the payments needed to pay off their balances would be helpful. One respondent noted that she found the customized disclosure more useful than the generic example in the Bankruptcy Act disclosure because, even though her issuer cannot anticipate future purchases or changes in her interest rate, the customized disclosure still would be closer to reality. 
Some respondents also found that the dynamic nature of the disclosure helped them understand the consequences of making minimum payments more fully than the generic examples did, because they could see how purchases or payments made on their account affected their repayment estimates. Additionally, some respondents noted that because the customized disclosure would be updated each month they could track their account and use the information for budgeting or financial planning purposes. Although issuers and others stated that the information would not be practical for cardholders because the estimates would assume no activity on the account, we did not find that the cardholders we interviewed believed this limited the usefulness of the customized information. In fact, after we explained to cardholders that the customized disclosure would represent only a point-in-time estimate and that the information would change if there were additional activity on their account, 79 percent (89 of 112 cardholders) found the customized disclosure more useful than the generic example in one of the Bankruptcy Act options. Cardholders also preferred a customized disclosure because such a disclosure provided them with new and additional information. We found that the majority of cardholders already demonstrated a basic understanding of the consequences of making only minimum payments. For example, 68 percent of the cardholders could explain that both the length of time and amount of interest they would pay would increase if they made only minimum payments. An additional 29 percent of respondents could name at least one of these two consequences. Because many cardholders already understood that making only minimum payments could be harmful to their financial condition, the information provided by either of the Bankruptcy Act disclosures would not be new to the cardholder. 
One cardholder told us that he preferred the customized disclosure because he already understood the concept addressed in both Bankruptcy Act disclosure options; however, the customized disclosure provided him with personalized details that he found helpful. Another cardholder mentioned that the customized disclosure gave him a “plan,” whereas the other two options were “merely warnings” and would not tell him anything he did not already know. In addition to providing cardholder-specific information on the length of repayment, a customized disclosure also could include information on the total amount of interest a cardholder would pay if only minimum payments were made, and the monthly payment amount needed to repay the balance over some time period (e.g., 3 years). During our interviews, several cardholders told us that seeing such information would be useful to them. For example, some cardholders told us they found the information on the monthly amounts needed to repay the balance over a time period to be the most useful part of the disclosure because it provided them with a plan for how to pay off their balances. The majority of cardholders we interviewed (57 percent) indicated that they were unlikely to take the initiative to call the toll-free telephone numbers required by the Bankruptcy Act, and many indicated that they had not calculated the information on their own to obtain individualized information. Therefore, if the customized disclosure were not provided directly on their billing statement, they would be unlikely to receive any individualized information at all. In fact, many cardholders mentioned that they liked the customized disclosure because it eliminated the need for them to calculate the information on their own or call a toll-free telephone number. Additionally, most of the cardholders were not aware of or using existing tools such as amortization calculators that are available on the Internet. 
Only 41 percent of cardholders were aware of these calculators, and only 33 percent of those who were aware of the tools had used them. Also, according to financial educators, it is important to provide customized disclosures because most cardholders are not able to calculate amortization periods and total interest payments correctly. Not all of the cardholders chose to receive the customized disclosure or found the information that it contained useful. As shown in figures 1 and 2, 30 percent (22 of 74) of the revolvers and 39 percent (15 of 38) of the convenience users preferred to receive one of the two Bankruptcy Act options. Some of these cardholders explained that they thought the generic disclosures mandated by this act were simpler and easier to understand. Others indicated that the example provided in one of the Bankruptcy Act options gave them a good understanding of the consequences of making minimum payments, without having to see specific estimated numbers based on personal account information. Other cardholders specifically stated that they found the customized disclosure confusing, and some noted that having the option to call the toll-free number if they wanted additional information was sufficient. Finally, some cardholders preferred not to receive any disclosure on the consequences of making minimum payments, primarily because they already understood the consequences of making minimum payments. Some cardholders were concerned that issuers would pass on to them the costs associated with providing customized disclosures. Other cardholders told us they probably would not pay attention to the disclosure or that they would not read it because they did not read their credit card statements. This report does not contain all the results from the interviews. The interview guide and a more complete tabulation of the results can be viewed at GAO-06-611sp. Opinions varied on how effective customized disclosures would be in influencing cardholder behavior. 
Consumer groups, financial educators, and many of the cardholders we interviewed indicated that considerable benefits might result from providing cardholders with customized disclosures. Such benefits could include cardholders making larger payments or otherwise changing how they use their credit cards. Customized disclosures might have greater impact because they would be more noticeable than other disclosures. For example, a consumer group representative and financial educator told us that cardholders generally are more likely to notice a customized disclosure over a generic one. They compared providing the generic Bankruptcy Act disclosures on cardholders’ billing statements to providing smokers with the Surgeon General’s Warning on a cigarette pack, and noted that once cardholders become familiar with a generic minimum payment disclosure, they are likely to ignore it and not be influenced by the information that it contains. The risk of a repeated and identical disclosure being ignored appears real, as some of the cardholders we interviewed said that after seeing the generic Bankruptcy Act disclosures a few times they probably would stop reading them. In contrast, cardholders told us that they would be more likely to notice customized information each month. Representatives from some consumer groups and other organizations told us that, because the example contained in one of the generic Bankruptcy Act disclosures contains a sample balance and interest rate that is not reflective of most cardholders’ accounts, cardholders likely would dismiss it entirely because they would assume it did not apply to them. Customized disclosures also were seen as having a potentially significant impact on cardholder behavior because they would provide information that changes as the cardholder’s situation changes. 
For example, one representative of a third-party credit card processor told us that she believes that if cardholders were shown information that changed each month according to the actions they took, they would then be more likely to change their behavior. Many of the cardholders also indicated that a customized disclosure would be more influential than a generic disclosure in causing them to consider increasing monthly payments. For example, one respondent said that during the months when she might not pay her full balance, seeing the customized disclosure would make her want to “scrape together more money from savings” to make a larger payment. Additionally, another respondent noted that the customized disclosure would influence him to take disposable income and put it toward his credit card balance. Another said that seeing the amount of interest he was paying would make him want to pay off the balance sooner. Additionally, two of the cardholders we interviewed told us that seeing new information every month would help them make decisions for the future and might change the way in which they used their credit cards. However, others, including issuer representatives and industry researchers, indicated that customized disclosures might not be effective in changing consumer behavior. They noted that not all cardholders need the information provided in the customized disclosure. For example, while customized disclosures could provide convenience users with illustrative information, these cardholders—by paying their balances in full each month—are already modeling the behavior that customized information is designed to promote. As a result, these cardholders would appear not to need this additional disclosure. 
Many of the convenience users we interviewed—who preferred not to receive a customized disclosure—explained that they paid their balance in full each month, already understood the consequences of making only minimum payments, and therefore did not need the additional reminder. Instead, most of the convenience users told us that they would rather receive information on the first page of their billing statement that would be more useful to them, such as information on a credit card reward program. Additionally, because a customized disclosure would assume that only the exact minimum amount would be paid, representatives of some issuers told us that such disclosures would be of limited use to the large number of cardholders who, although not fully paying the balance each month, do pay more than the minimum amount due. Some organizations also said that customized disclosures might have a limited impact on cardholder behavior overall because the number of cardholders that make consecutive minimum payments appears to be small. According to issuers, minimum payment disclosures, whether customized or generic, are useful only to the cardholder population that revolves balances—specifically, the smaller subset of that population that habitually makes minimum payments. According to six of the issuers we contacted, the percentage of their customers who make minimum payments is small. As a result, most issuers questioned the value of implementing customized disclosures that would benefit such a small percentage of their customers. Additionally, representatives of one large issuer told us that their firm had implemented the minimum payment disclosures required under the California law for 3 months and, while acknowledging that these disclosures were in place for a brief period, indicated that they did not notice a difference in the number of cardholders making minimum payments. 
As a result of this experience, the representatives said that they did not expect the proposed customized disclosure to have much of an impact either. Customized disclosures also might have little impact on cardholder behavior because some cardholders are not able to make larger than minimum payments. Many of the cardholders we interviewed who made minimum payments told us that they did so because they could not afford to pay more. Competing expenses and a lack of additional disposable income were the primary reasons these cardholders gave for making at least one minimum payment within the last year. A representative from a large issuer also told us that cardholders who make minimum payments lack the ability to regularly pay more. Issuers, consumer groups, and others that we interviewed suggested alternatives for providing cardholders with customized information on the consequences of making minimum payments. Among the alternatives mentioned were targeting customized disclosures to only certain cardholders and not requiring the disclosure to appear on the first page of cardholders’ billing statements. While these alternatives might make it easier and less costly for issuers to implement customized disclosures, they also may reduce the desired impact of the disclosure because fewer cardholders would receive the information or notice the disclosure. Rather than providing customized disclosures, some suggested that government agencies, issuers, financial educators, and consumer groups expand general financial education efforts on the consequences of making minimum payments. Consumer groups, issuers, and others suggested that the population of cardholders that would receive customized disclosures could be narrowed. For example, a consumer group representative suggested the information could be targeted only to cardholders most likely to need it, such as revolvers. 
A representative of another consumer group told us that such information ought to be provided to any cardholders that paid the minimum amount or close to the minimum amount in any given month. Some issuer representatives asserted that the population receiving customized disclosures ought to be even narrower, such as cardholders who have made minimum payments for several consecutive months. Limiting the number of cardholders who receive customized disclosures offers some advantages to issuers and some disadvantages to cardholders. For example, providing customized information to a more limited number of cardholders would lower issuer costs, such as paper and postage, by reducing the number of billing statements that might require an additional page. However, limiting customized disclosures to cardholders who pay only the minimum could preclude other cardholders from benefiting from such information. For example, many of the cardholders we interviewed identified themselves as paying “a lot more than the minimum payment,” “almost their entire balance,” or their “entire balance” each month, yet found the customized disclosure to be either extremely or very useful. Some of these cardholders noted that even though they do not typically make minimum payments or close to the minimum payment, the disclosure still provided them with useful information in case they ever experienced a time when they would need to make minimum payments. Some of the convenience users who found the customized disclosure useful explained that the information served as a good reminder on the consequences of making minimum payments. A second alternative that issuers and others identified would be to place the disclosure in a location other than the first page of the billing statement. For example, issuers could be allowed to print the customized disclosure on either the back side of a statement page or on a subsequent page. 
One regulatory official noted that issuers could provide text on the first page that informs cardholders that customized information is available elsewhere in the statement. Issuers and a card processor told us that space on the first page of the billing statement is at a premium because it typically contains a lot of important information, such as messages on the status of an account (e.g., over-the-limit notices). Providing the customized minimum payment disclosures to cardholders in a location other than the first page of the billing statement would offer issuers some cost advantages, but a disadvantage of such a change could include less impact on cardholder behavior. Not being required to place the disclosure on the first page of billing statements could make implementing the disclosure easier and less costly for issuers because they might not need to reformat their statement templates. However, according to consumer groups and others, not placing the information on the first page of the statement would reduce its prominence and likely its influence on cardholder behavior. For example, one representative told us that cardholders might be less likely to notice the disclosure if it was not prominently positioned on their billing statement. An industry expert confirmed that the primary tool issuers use to communicate with their cardholders is the monthly billing statement. Therefore, removing the customized minimum payment disclosure from the billing statement entirely could decrease the number of cardholders who read it at all. A third suggestion that could reduce the cost of customized disclosures would involve providing the information electronically or online. According to an issuer and a consumer group we contacted, customized information could be provided to cardholders in electronic statements sent by issuers. Cardholders also could access such information directly on issuers’ Web sites. 
For example, issuers could provide online calculators in which cardholders could enter their balances, applicable interest rates, and payment amounts to obtain repayment and other estimates specific to their accounts. At a credit card industry symposium held in June 2005, participants advocated increasing the reliance on technology for delivering more useful consumer disclosures. One issuer that we interviewed already has implemented an online calculator to provide its customers with customized information, while another issuer told us it was currently developing one. Making customized disclosures available online, rather than in monthly statements, could prevent cardholders from receiving outdated information and allow them to access the information whenever they needed it. Online availability also could give cardholders the flexibility to obtain only the information they deemed most useful. For example, some cardholders found customized disclosures only slightly useful, because they made more than the minimum payment every month. Additionally, one cardholder said that he would rather see the calculation that showed the monthly payment amount that would be required to pay his balance off in 1 year, rather than some longer period. However, an exclusively online presentation could also reduce the impact of the disclosure. Removing the disclosure from the billing statements could greatly decrease the number of cardholders that see such information, because not all cardholders have easy access to the Internet. Some cardholders we interviewed mentioned that although they were aware of online calculators to help them estimate credit card payoff times, they had not used them because they did not have easy access to the Internet. 
In addition, even cardholders with the ability to obtain such information online might not utilize it. For example, only two of the 43 cardholders we spoke with who identified themselves as typically paying “the minimum amount” or “more than the minimum amount, but not much more than the minimum” had used an online credit card calculator. Some of these cardholders were not comfortable with using the Internet for personal finance. In addition, the consumers we interviewed generally greatly preferred receiving minimum payment disclosures in their billing statements. Of the 112 cardholders we interviewed, 73 percent preferred to receive such information on their monthly billing statement, while about 11 percent preferred receiving the information via the Internet. A fourth alternative for providing customized information on minimum payment consequences to cardholders would be to do so less often than monthly. For example, issuers could provide the information to cardholders quarterly or annually. Several of the cardholders we interviewed (about 24 percent) were amenable to receiving the disclosure less frequently than monthly. This alternative could reduce both postage and paper costs for issuers because additional pages to print the disclosure would be needed less frequently. However, if cardholders received the information less frequently, they would not be reminded of the consequences of making minimum payments in the months they did not receive the disclosure. 
For example, one cardholder we interviewed who typically made only slightly more than minimum payments said that she “just doesn’t really think about it when she makes the payment,” but with a customized disclosure “in front of her, she would think about it more.” Another noted, “Having it [the customized disclosure] in front of you with your specific information makes it easier to keep in the back of your mind that you should be quick to pay your balance off sooner.” Consumer groups, federal regulators, and others identified options for improving the information cardholders receive on the consequences of making minimum payments that would not entail providing customized information. For example, one issuer representative advocated expanding the generic example in one of the Bankruptcy Act options by developing a wider range of balance amounts and interest rates. With several different examples available, issuers could provide cardholders with a disclosure that contained a sample balance and interest rate that would be closer to those in the cardholder’s actual account, without having to incur the expense of producing disclosures using the exact amounts. While this approach would not provide cardholders with estimates as specific to their situation as a customized disclosure, it likely would provide better information to cardholders whose balances and interest rates were not similar to those currently used in the example contained in the Bankruptcy Act disclosure. Finally, instead of providing customized disclosures, federal regulators, educators, and consumer groups mentioned that consumer awareness could be improved by requiring issuers and others to increase financial education efforts tailored to minimum payment messages. Issuers could do this by including information about the consequences of making only minimum payments in solicitation letters or the introductory packages consumers receive when they obtain a new credit card. 
Government agencies and financial education providers could make additional use of advertisements in various media to underline messages about the consequences of making minimum payments. Our work indicates that credit card issuers and processors have the necessary data and systems capabilities to provide customized minimum payment disclosures—that is, to include customized information in billing statements that would show the length of time required to pay off each cardholder’s actual balance and the additional interest that would be incurred if only the minimum payment is made each month, as well as the monthly payment required to pay off an outstanding balance in a given time period. However, such disclosures are only point-in-time estimates that would fluctuate as cardholders make additional purchases or increase their payment amount. Credit card issuers and processors would incur initial costs, estimated to be from less than $1 million to up to several million dollars, to revise their systems to make these calculations. They would also likely incur additional costs resulting from higher postal charges—if including such disclosures increases the size of cardholder statements— and from increased customer service expenses, as they respond to account holder questions about these disclosures. While these additional costs could increase the ongoing expenses of producing and mailing billing statements, card issuers are already obligated to bear some portion of these costs as they implement the minimum payment disclosures mandated by the Bankruptcy Act. While we cannot estimate the incremental costs of providing customized disclosures, the known estimated costs appear to be small relative to the income of the largest issuers, which account for the vast majority of cardholder accounts. Further, the costs to the thousands of small card issuers would be minimal because of their use of third-party processors. 
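One of the figures discussed above, the level monthly payment needed to retire an outstanding balance within a chosen period, can be derived from the standard loan-amortization formula. The sketch below is an illustration under the assumptions of a fixed monthly rate (APR/12) and no new charges on the account, not a prescribed disclosure calculation.

```python
def required_monthly_payment(balance, apr, months):
    """Level monthly payment that pays off `balance` in `months` months.

    Uses the standard loan-amortization formula with a fixed monthly
    rate of apr/12; assumes no new charges are made on the account.
    """
    r = apr / 12
    if r == 0:
        return balance / months          # no interest: simple division
    return balance * r / (1 - (1 + r) ** -months)

# Example: pay off a $1,000 balance at 17 percent APR in 12 months.
payment = required_monthly_payment(1000.0, 0.17, 12)   # roughly $91 per month
```

A disclosure built on this calculation, like the payoff-time estimate, would fluctuate from statement to statement as the balance and applicable rate change.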
While most of the revolver cardholders whom we spoke with found customized disclosures very useful, the impact that they might have on cardholder payment behavior could vary. Many of the consumers that we interviewed told us that customized disclosures provided more useful information than the generic disclosures mandated by the Bankruptcy Act, with the majority of revolvers preferring to receive customized disclosures. However, the majority of convenience users, while finding some value in the information contained in customized disclosures, were satisfied with receiving either the generic disclosures or no additional disclosure at all. While cardholders told us that such disclosures could strongly influence their decisions about making minimum payments, not all cardholders’ financial circumstances would allow them to increase their payment amounts. Therefore, the ultimate impact of providing additional disclosures could vary. While providing cardholders with additional disclosures about the consequences of making only minimum payments on their credit cards would appear to provide them with useful information, such disclosures would raise issuer costs, and the size of their impact on consumer behavior is not known. However, various options for providing such information exist, each with advantages and disadvantages. For example, providing customized information only to those cardholders who revolve credit card balances, providing it to all cardholders but on a less frequent basis, or placing it in a location other than the first page of the monthly billing statement could make customized disclosures easier and less costly for issuers to implement. These options, however, could lessen the potential impact of the customized disclosure because fewer cardholders would receive, or be likely to notice, the information. 
We provided a draft of this report to the Federal Deposit Insurance Corporation, the Federal Reserve, the Federal Trade Commission, the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision for their review and comment. In a letter from the National Credit Union Administration, the Chairman notes that the Administration agrees with the findings of our report, including that customized disclosures for consumers could feasibly be required of card issuers at a potentially significant but relatively reasonable cost and such disclosures could be useful and desirable for some consumers despite the uncertainty of their impact. The Chairman also notes that, collectively, the potential impact on credit unions of requiring card issuers to provide customized disclosures to consumers should be minimal, particularly since many use third-party processors. However, the Chairman’s letter also notes that the financial impact of customized disclosure requirements could still be significant for these small issuers and even more significant for moderate sized financial institutions servicing their own credit card portfolios, particularly in institutions where credit card interest margins are already low. The letter also notes that considering the incremental costs of customized disclosures is important because such costs will ultimately be passed on to consumers through increased fees or higher interest rates, which could result in a negative impact on the same consumers whom the disclosures are meant to help. As a result, the Chairman indicates that some of the alternatives to providing customized disclosures that are mentioned in our report could be more economically efficient than implementing customized disclosures to increase consumer awareness of the consequences of making minimum payments. 
We also received technical comments from the Federal Reserve, the Federal Trade Commission, and the Office of the Comptroller of the Currency, which we incorporated as appropriate. The Federal Deposit Insurance Corporation and the Office of Thrift Supervision did not provide any comments. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to the Chairman, Federal Deposit Insurance Corporation; the Chairman, Federal Reserve; the Chairman, Federal Trade Commission; the Chairman, National Credit Union Administration; the Comptroller of the Currency; and the Director, Office of Thrift Supervision and to interested congressional committees. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. The results of the interviews will also be available on the GAO Web site at GAO-06-611sp. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or woodd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) determine the feasibility and cost of requiring credit card issuers (issuers) to provide cardholders with customized minimum payment information, (2) assess the usefulness of providing customized information to cardholders, and (3) identify options for providing cardholders with customized or other information about the financial consequences of making minimum payments. 
To determine the feasibility and cost of providing cardholders with customized minimum payment information, we reviewed current and proposed disclosure requirements, including Title XIII of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (Bankruptcy Act), which amended the Truth in Lending Act (TILA) to require issuers to make disclosures regarding the consequences of making only the minimum payment. We also reviewed the advance notices of proposed rulemaking that the Board of Governors of the Federal Reserve System (Federal Reserve) issued. The proposed rulemaking is associated with the Federal Reserve’s self-initiated comprehensive review of the open-end (revolving) credit rules in Regulation Z, which implements TILA, as well as the implementation of the minimum payment disclosure requirements of the Bankruptcy Act. We also reviewed California’s Civil Code, section 1748.13, which had also mandated that consumers receive disclosures regarding minimum payment consequences. We discussed the feasibility and cost of providing customized information to cardholders with the staff of six major issuers and one mid-size issuer. We determined that these issuers account for about 67 percent of actively used credit card accounts as of year-end 2005. We provided issuers with a list of 16 cost items to facilitate discussions of the costs to implement customized minimum payment disclosures. We also met with the staff of two third-party credit card processors (processors) that manage card account data and produce billing statements for thousands of large and small issuers, who provided us with cost estimates and technical information about implementing customized disclosures. In addition, we obtained cost estimates for another three large issuers from court documents associated with a constitutional challenge of a California statute that required issuers to include minimum payment disclosures on billing statements sent to California cardholders. 
We supplemented our interview data with a review of 17 comment letters that issuers, processors, and trade associations submitted to the Federal Reserve that addressed the implementation of minimum payment disclosure provisions contained in the Bankruptcy Act. We reviewed two studies about costs of regulatory reforms and used them to shape our approach with issuers and processors to study the costs to implement customized minimum payment disclosures. To better understand how producing customized disclosures could affect issuer costs, we also discussed issuer operations and profitability with two broker-dealer research analysts that monitor credit card issuing banks and industry developments. We also met with representatives of federal banking regulators—the Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, Office of Thrift Supervision, and National Credit Union Administration—that oversee financial institutions offering credit cards, and met with representatives of the Federal Trade Commission, which oversees nonbank credit card issuing entities. We also attended a roundtable hosted by the McDonough School of Business at Georgetown University where representatives of credit card issuers, industry trade associations, law firms, federal regulatory agencies, and a consumer interest group addressed implementation issues relating to the provision of customized minimum payment disclosures. To assess the usefulness of providing customized disclosures to cardholders, we conducted in-depth interviews with a total of 112 adult cardholders in three locations: Boston, Chicago, and San Francisco, in December 2005. We contracted with OneWorld Communications, Inc., to recruit a sample of cardholders that generally resembled the demographic makeup of the U.S. population in terms of age, education levels, and income. 
However, the cardholders recruited for the interviews did not form a random, statistically representative sample of the U.S. population. Cardholders had to speak English and meet certain other conditions: they must have owned at least one general-purpose credit card for the 12 months prior to the interview and must not have participated in more than one focus group or similar in-person study during that period. We selected proportionally more people who typically carried balances on their credit card (revolvers) rather than those who regularly paid off their balances (convenience users)—compared with their actual proportions in the U.S. population—because we judged revolvers as likely more in need of the information provided in the customized disclosure. See table 2 for the demographic information on the cardholders we interviewed. During these consumer interviews, we obtained cardholders’ opinions to assess the usefulness of the customized disclosure by asking them a number of open- and closed-ended questions, and asking them more tailored follow-up questions as necessary to more fully understand their answers. All cardholders were asked questions to determine their typical credit card payment behavior and elicit what they already knew about the consequences of making only minimum payments. To determine their preferences for various disclosures, we showed each participant three sample disclosure statements. Two of these sample disclosure statements contained the language and generic example mandated by the Bankruptcy Act minimum payment disclosure provisions. The other disclosure presented an example of language incorporating the components of the proposed customized disclosure we studied. The sample disclosure statements we showed to cardholders can be found in GAO-06-611sp. 
Each of the cardholders we interviewed was asked a series of questions about each of the three disclosure statements, including how “understandable,” “influential,” “useful,” and “helpful” each disclosure was to their understanding of the consequences of making minimum payments. After seeing the three statements, cardholders also were asked to compare the statements and choose the statement they would prefer to receive. Additionally, cardholders were asked how they would prefer to receive such information, and how frequently they would like to receive it. Narrative answers to open-ended questions were categorized into various themes based on the cardholders’ responses. The reliability of the coding scheme was assessed by comparing the answers of a second, independent coder with a number of the answers. The interview instrument used to interview cardholders, as well as the results of the closed-ended questions, can be found in GAO-06-611sp. The data collected through our in-depth cardholder interviews are subject to certain limitations. For example, the data cannot be generalized to the entire U.S. population of credit cardholders. In addition, our sample distribution between convenience users and revolvers was not reflective of the estimates of the proportion of such cardholders in the overall U.S. cardholding population because we purposely oversampled revolvers. Additionally, the self-reported data we obtained from cardholders are based on their opinions and memories, which may be subject to error and may not predict their future behavior. We gathered additional information on the usefulness of providing customized disclosures to cardholders by reviewing existing academic research on consumer protection disclosures and applicable public comment letters on the Federal Reserve’s advance notices of proposed rulemaking. 
We also interviewed credit card issuers and processors, and a variety of industry, academic, government, consumer interest, and financial education organizations for their opinions on the usefulness of customized disclosures. To identify other ways of providing cardholders with customized or other information about the financial consequences of making minimum payments, during our interviews we asked issuers and processors, as well as a variety of academic, government, consumer interest, and financial education organizations for suggestions and alternative options to providing customized disclosures. We discussed some suggestions with issuers and processors to determine their feasibility. We also asked the 112 cardholders for their opinions on other ways to communicate the financial consequences of minimum payments. We conducted our study between June 2005 and April 2006 in Boston, Chicago, San Francisco, and Washington, D.C., in accordance with generally accepted government auditing standards. In addition to those named above, Cody Goebel, Assistant Director; Christine Houle; John C. Martin; Marc Molino; Carl Ramirez; Omyra Ramsingh; Barbara Roesmann; and Kathryn Supinski made key contributions to this report.
The Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 requires that credit card issuers (issuers) include in all cardholder billing statements a generic warning, or "disclosure," about the potential financial consequences of consistently making only the minimum payment due on a credit card. However, some have urged that consumers should instead receive "customized" disclosures in their billing statements that use cardholders' actual balances and the applicable interest rates on their accounts to show the consequences of making only minimum payments, such as estimates of the time required to repay balances and the total interest amount resulting from continual minimum payments. In response to a congressional request, this report assesses the (1) feasibility and cost of requiring issuers to provide cardholders with customized minimum payment information, (2) usefulness of providing customized information to cardholders, and (3) options for providing cardholders with customized or other information about the financial consequences of making minimum payments. Representatives of credit card issuers and processors that handle billing and other operations for issuers said they have the technological capability to provide cardholders with customized minimum payment information. The calculations that would be included in such disclosures require various assumptions, including that no more charges are made on the account, and decisions on how to address other issues, such as balances subject to multiple interest rates, that would affect the estimates' precision. Issuers and processors estimated that the most significant costs of providing customized disclosures would be for additional postage, computer programming, and customer service. Although uncertain about exactly what calculations would be required, the estimates that issuers provided for total implementation costs ranged from $9 million to $57 million. 
In GAO's interviews with 112 cardholders, most who typically carry credit card balances (revolvers) found customized disclosures very useful and would prefer to receive them in their billing statements. These consumers liked that customized disclosures would be specific to their accounts, would change based on their transactions, and would provide more information than generic disclosures. However, cardholders who pay their balances in full each month were generally satisfied with receiving generic disclosures or none at all. Consumer groups, financial educators, and others indicated that customized disclosures could reduce cardholders' tendency to make minimum payments; conversely, issuers foresaw limited impact because few cardholders make minimum payments and not all can afford to pay more. Alternatives for providing customized disclosures include providing them only to revolvers, providing them less frequently, or in a location other than the first page of billing statements. While such alternatives could lower issuer costs, they could also decrease the customized disclosures' potential impact.
The fee-for-service part of the Medicare program processes more than a billion claims each year from about 1.5 million providers of health care or related services and equipment to beneficiaries. These providers bill Medicare for their services and supplies which, among other things, consist of inpatient and outpatient hospital services, physician services, home health care, and durable medical equipment (such as walkers and wheelchairs). Preventing fraud and ensuring that payments for these services and supplies are accurate can be complicated, especially since fraud can be difficult to detect, as those involved are engaged in intentional deception. For example, fraud may involve providers submitting claims with false documentation for services not provided, which may appear to be valid. To address Medicare’s vulnerability to fraud, Congress enacted a provision in the Health Insurance Portability and Accountability Act of 1996 (HIPAA) that established the Medicare Integrity Program. HIPAA provides this program with dedicated funds to identify and combat improper payments, including those caused by fraud. In addition, when Congress passed the Patient Protection and Affordable Care Act (PPACA) in 2010, it provided CMS with additional authority to combat Medicare fraud, and set a number of new requirements specific to the program. For example, PPACA gave CMS the authority to suspend payment of Medicare claims pending an investigation of a credible allegation of fraud and required it to conduct certain new provider and supplier enrollment screening procedures intended to strengthen the process, such as checking providers’ licensure. In April 2010, CMS established the Center for Program Integrity (CPI) to enable a strategic and coordinated approach to program integrity initiatives throughout the agency and to build on and strengthen existing program integrity efforts. 
As the component responsible for overseeing the agency’s Medicare program integrity efforts, the center’s mission is to ensure that correct payments are made to legitimate providers for covered, appropriate, and reasonable services for eligible beneficiaries. To accomplish its mission, the center has undertaken a strategy to supplement the agency’s “pay and chase” approach, which focuses on the recovery of funds lost due to payments of fraudulent claims, with an approach that is directed toward the detection and prevention of fraud before claims are paid. The strategy has concurrent objectives to (1) enhance efforts to screen providers and suppliers enrolling in Medicare to prevent enrollment by entities that might attempt to defraud or abuse the Medicare program and (2) detect aberrant, improper, or potentially fraudulent billing patterns and take quick actions against providers suspected of fraud. In addressing the second objective, CPI intends to use predictive analytics technologies to detect potential fraud and prevent payments of claims that are based on fraudulent activities. Accordingly, CPI is the focal point for all activities related to FPS. CPI uses contractor services to support the agency’s program integrity initiatives. Among these are contractors tasked with specific responsibilities for ensuring that payments are not made for claims that are filed incorrectly or that are identified as being associated with potentially fraudulent, wasteful, or abusive billing practices. Specifically, Medicare Administrative Contractors (MAC) are responsible for processing and paying Medicare fee-for-service claims, and Zone Program Integrity Contractors (ZPIC) are responsible for identifying and investigating potential fraud in the program. When processing claims, MACs review them prior to payment to ensure that payments are made to legitimate providers for reasonable and medically necessary services covered by Medicare for eligible individuals. 
The systems that the MACs use for processing and paying claims, called “shared systems,” execute automated prepayment controls called “edits,” which are instructions programmed into the system software to identify errors in individual claims and prevent payment of incomplete or incorrect claims. For example, prepayment edits may identify claims for services unlikely to be provided in the normal course of medical care, such as more than one appendectomy on the same beneficiary and other services that are anatomically impossible. Most of the prepayment edits implemented by CMS and its contractors are automated, meaning that if a claim does not meet the criteria of the edit, payment of that claim is automatically denied. However, while these prepayment edits are designed to prevent payment errors that can be identified by screening individual claims, they cannot detect providers’ billing or beneficiaries’ utilization patterns that may indicate fraud. Specifically, the capability to collect and analyze data that are submitted over a period of time is necessary for a system to be able to identify patterns in behavior. ZPICs are responsible for identifying and investigating potential fraud in the Medicare fee-for-service program. CPI directs and monitors their activities. These contractors identify claims and provider billing patterns that may indicate fraud and investigate leads from a variety of sources, including complaints and tips lodged by beneficiaries. ZPICs operate in seven geographical zones across the country, and each ZPIC is responsible for conducting program integrity activities in its geographic jurisdiction. (Fig. 1 depicts the ZPIC zones and corresponding geographic areas.) Varying levels of fraud risk prevail across the zones. For example, Zone 7 includes an area known to be at high risk of fraud, while Zone 2 covers a geographically large and predominantly rural area that may be at a lower risk of fraud. 
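The automated prepayment edits described above can be illustrated with a simple rule check. The sketch below is a simplified illustration, not CMS’s or the MACs’ actual claims-processing logic; the procedure code, field names, and edit rule are assumptions, modeled on the appendectomy example in which a second occurrence of a once-in-a-lifetime procedure is anatomically impossible.

```python
# Illustrative sketch of an automated prepayment edit; the procedure
# code list, claim field names, and rule below are hypothetical, not
# CMS's actual edit logic.

ONCE_PER_LIFETIME = {"44950"}   # e.g., a code representing appendectomy (assumed list)

def apply_prepayment_edit(claim, paid_history):
    """Return 'deny' if the claim repeats a once-in-a-lifetime procedure
    already present in the beneficiary's paid-claims history; otherwise
    'pass' the claim on for normal processing."""
    if claim["procedure"] in ONCE_PER_LIFETIME:
        for prior in paid_history:
            if (prior["beneficiary"] == claim["beneficiary"]
                    and prior["procedure"] == claim["procedure"]):
                return "deny"   # anatomically impossible duplicate
    return "pass"

history = [{"beneficiary": "B1", "procedure": "44950"}]
new_claim = {"beneficiary": "B1", "procedure": "44950"}
apply_prepayment_edit(new_claim, history)   # → "deny"
```

Note that an edit of this kind screens one claim at a time against fixed criteria; as the surrounding text explains, detecting billing or utilization patterns requires analyzing data accumulated over time, which is what the predictive analytics approach is meant to add.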
The ZPICs include about 510 data analysts, investigators, and medical record reviewers. Data analysts use automated tools to analyze data on claims, providers, and beneficiaries in their efforts to identify fraud, support investigations, and search for new fraud schemes. Investigators examine fraud leads by performing a range of investigative actions, such as provider reviews and interviews with beneficiaries and providers. The medical record reviewers examine medical records and provide clinical knowledge needed to support analysts’ and investigators’ work. As a result of their analyses and investigations, ZPICs may refer cases to law enforcement and initiate administrative actions against providers suspected of fraud. Specifically, if the contractors uncover suspected cases of fraud, they refer the investigation to the HHS Office of Inspector General (OIG) for further examination and possible criminal or civil prosecution. ZPICs also initiate a range of administrative actions, including revocations of providers’ billing privileges and payment suspensions, which allow CMS to stop payment on suspect claims and prevent the payment of future claims until an investigation is resolved. They initiate administrative actions by recommending the actions to CMS and coordinating with the MACs to carry them out. For example, ZPICs recommend payment suspensions to CMS and, if CMS approves, the MACs implement the suspension. Table 1 describes the types of administrative actions ZPICs can recommend against providers. While CMS had the authority to impose payment suspensions prior to PPACA, the law specifically authorized CMS to suspend payments to providers pending the investigation of credible allegations of fraud. CMS is required to consult with the HHS OIG in determining whether a credible allegation of fraud exists.

Table 1: Types of Administrative Actions ZPICs Can Recommend Against Providers

Prepayment review edit: Provider-specific prepayment edits are used to identify claims for medical review.
Auto-deny edit: Beneficiary- or provider-specific prepayment edits are used to prevent payment for non-covered, incorrectly coded, or inappropriately billed services.
Revocation: A provider’s Medicare billing privileges are terminated.
Payment suspension: Medicare payments to a provider are suspended, in whole or in part.
Overpayment: Medicare payments received by a provider are in excess of amounts due and payable.

In cases of suspected fraud, ZPICs can recommend the implementation of prepayment edits that apply to specific providers and automatically deny claims or flag claims for prepayment review. In these cases, prepayment edits are considered by CMS to be administrative actions. CMS and its contractors have, for more than a decade, used information technology systems to support efforts to identify potential fraud in the Medicare program. These systems were developed and implemented to analyze claims data in support of program integrity analysts’ efforts to detect potentially fraudulent claims after they were paid so that actions could be taken by CMS to collect funds for the payments made in error (i.e., the pay-and-chase approach). For example, in 2002 CMS implemented its Next Generation Desktop to provide data regarding beneficiaries’ enrollment, claims, health care options, preventive services, and prescription drug benefits. This system is also used as an analytical tool during investigations and provides enhanced data to law enforcement personnel, such as data about complaints against providers reported by beneficiaries. Further, in 2006, CMS implemented the One Program Integrity (One PI) system for use in helping to identify claims that were potentially fraudulent and to recover the funds lost because of payments made for claims determined to be fraudulent. The system was intended to enable CMS’s program integrity analysts and ZPICs to access from a centralized source the provider and beneficiary data related to claims after they have been paid. 
As a result of our prior study of One PI, in June 2011 we made a series of recommendations regarding the status of the implementation and use of the system. In commenting on the results of our study, agency officials agreed with all of them, including recommendations that CMS define measurable financial benefits expected from the implementation of the system and establish outcome-based performance measures that gauge progress toward meeting program goals that could be attributed to One PI. In addition to systems and tools provided and maintained by CMS, the ZPICs have developed and implemented their own information technology solutions to analyze claims and provider data in their efforts to detect potentially fraudulent claims that were paid by Medicare. For example, the ZPICs have their own case management systems and custom-developed algorithms for analyzing data from their zone-specific databases that can supplement the data and tools available from CMS. The ZPICs can also incorporate data from other sources into their databases, including data from state databases on provider licensing and incorporated businesses, and Internet searches of research websites. While the program integrity contractors have been using these systems to support CMS’s efforts to identify improper and potentially fraudulent payments of Medicare claims, they have not previously had access to information technology systems and tools from CMS that were designed specifically to identify potentially fraudulent claims before they were paid. In this regard, CMS intends to use predictive analytics as an innovative component of its fraud prevention program. To advance the use of predictive analytics technologies to help prevent fraud in the Medicare program, the Small Business Jobs Act of 2010 appropriated $100 million to CMS, to remain available until expended, for the development and implementation of a predictive analytics system.
Enacted on September 27, 2010, the law required CMS to implement a system that could analyze provider billing and beneficiary utilization patterns in the Medicare fee-for-service program to identify potentially fraudulent claims before they were paid. To do this, the system was to capture data on Medicare provider and beneficiary activities needed to provide a comprehensive view across all providers, beneficiaries, and geographies. It was also intended to identify and analyze Medicare provider networks, provider billing patterns, and beneficiary utilization patterns to identify and detect suspicious patterns or anomalies that represent a high risk of fraudulent activity. The act further required the system to be integrated into Medicare’s existing systems and processes for analyzing and paying fee-for-service claims in order to prevent the payment of claims identified as high risk until such claims were verified to be valid. The act also specified when and how CMS should develop and implement the system. Specifically, it required that CMS select at least two contractors to complete the work and that the system be developed and implemented by July 1, 2011, in the 10 states identified by CMS as having the highest risk of fraud. The act further required the Secretary of HHS to issue, no later than September 30, 2012, the first of three annual implementation reports that identify savings attributable to the use of predictive analytics, along with recommendations regarding the expanded use of predictive analytics to other CMS programs. The act stated that based on the results and recommendations of the first report, the use of the system was to be expanded to an additional 10 states at the next highest risk of fraud on October 1, 2012; similarly, based on the second report, the use would then be expanded to the remaining states, territories, and commonwealth on January 1, 2014. 
To meet the act’s requirements, CMS assigned officials within CPI responsibility for the development, implementation, and maintenance of FPS. These officials included a business process owner, information technology program manager, information technology specialist, and contracting officer. In defining requirements for the system to address the mandate of the Small Business Jobs Act, these program officials planned to implement by July 1, 2011, system software for analyzing fee-for-service claims data, along with predictive analytic models that use historic Medicare claims and other data to identify high-risk claims and providers. Program officials further planned, by July 2012, to implement functionality into FPS to enable automatic notification to system users of potentially fraudulent claims and to prevent payments of those claims until program integrity analysts determined that they were valid. In April 2011, CMS awarded almost $77 million to a development contractor to implement, operate, and maintain the system software and to design a first set of models for the initial implementation of FPS. The agency awarded about $13 million to a second contractor in July 2011 to develop additional models that could be integrated into the system. CPI also engaged its internal program integrity analysts to help design the models and test the initial implementation of the system. FPS is a web-based system that is operated from a contractor’s data center and accessed by users via the agency’s secured private network. The system comprises software that analyzes fee-for-service claims data as the claims are being processed for payment, along with hardware, such as servers that support connections between users’ facilities and CMS’s network, and devices that store the data used and generated by the system.
The system software and predictive models are designed to analyze the claims data and generate alerts to users when the results of analyses identify billing patterns or provider and beneficiary behavior that may be fraudulent and warrant administrative actions. In September 2011, CPI established a group that works with and provides training to the ZPICs on how to use FPS to initiate administrative actions more quickly against providers suspected of fraud. According to CPI officials, they intend to continue to refine the system to provide analysts and investigators with data and statistical information useful in conducting investigations based on input provided during these training sessions. In response to the Small Business Jobs Act, CMS implemented its initial release of FPS by July 1, 2011. While the act called for CMS to first implement the system for use in the 10 states identified by CMS as having the highest risk of fraud, the agency chose to deploy the system to all the ZPIC geographic zones. In addition, the system was integrated with existing data sources and systems that process claims, but it was not yet integrated with CMS’s claims payment systems. As of May 2012, CMS had spent nearly $26 million on the implementation of FPS. Of this amount, about $1 million was spent for internal CMS staff and $25 million for the development and modeling contractors. CMS’s initial release of the system consisted of system software for analyzing fee-for-service claims data and predictive analytic models that use historic Medicare claims and other data to identify high-risk claims and providers. After the initial release, CMS implemented three more releases of software through July 1, 2012, that incorporated changes or enhancements to the system as well as additional models. The four system releases yielded a total of 25 predictive analytic models in three different categories and with varying levels of complexity. 
Specifically, these consisted of the following model types: Rules-based models, which are to filter potentially fraudulent claims and behaviors, such as providers submitting claims for an unreasonable number of services. These models also are intended to target fraud associated with specific services, including those that CMS has stated are at high risk for fraud, such as home health agency services and durable medical equipment suppliers. These are the simplest types of models since the analysis conducted using them only involves counting or identifying types of claims and comparing the results to established thresholds. Anomaly-detection models, which are to identify abnormal provider patterns relative to the patterns of peers, such as a pattern of filing claims for an unreasonable number of services. These models generate analyses that are more complex because they require identification of patterns of behavior based on data collected over a period of time, and comparisons of those patterns to established behaviors that have been determined to be reasonable. Predictive models, which are to use historical data to identify patterns associated with fraud, and then use these data to identify certain potentially fraudulent behaviors when applied to current claims data. These models are intended to help identify providers with billing patterns associated with known forms of fraud. This is the most complex type of model implemented into FPS because it not only requires analysis of large amounts of data but may also require detection of several patterns of behavior that individually may not be suspicious but, when conducted together, can indicate fraudulent activity. Of the 25 models that CMS had implemented by July 1, 2012, 14 were rules-based, 8 were anomaly-detection, and 3 were predictive. Table 2 describes the four releases of FPS, including the numbers and types of models. 
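The three model categories described above can be illustrated with a minimal sketch. All thresholds, peer statistics, and fraud-pattern labels below are invented for demonstration and do not reflect CMS's actual FPS models.

```python
# Illustrative sketches of the three FPS model categories; all thresholds,
# peer data, and fraud-pattern labels are invented for demonstration.
from statistics import mean, stdev

# 1. Rules-based: count services and compare against a fixed threshold.
def rules_based_flag(services_billed_per_day, threshold=50):
    return services_billed_per_day > threshold

# 2. Anomaly detection: compare a provider's billing total to its peers.
def anomaly_flag(provider_total, peer_totals, z_cutoff=3.0):
    z = (provider_total - mean(peer_totals)) / stdev(peer_totals)
    return z > z_cutoff

# 3. Predictive: learn per-pattern fraud rates from labeled history, then
#    score current claims by the riskiest pattern they exhibit.
def fit_pattern_rates(history):
    counts, frauds = {}, {}
    for patterns, was_fraud in history:       # (set of patterns, bool)
        for p in patterns:
            counts[p] = counts.get(p, 0) + 1
            frauds[p] = frauds.get(p, 0) + int(was_fraud)
    return {p: frauds[p] / counts[p] for p in counts}

def predictive_score(patterns, rates):
    return max((rates.get(p, 0.0) for p in patterns), default=0.0)

flag_rules = rules_based_flag(80)                          # well over threshold
flag_anomaly = anomaly_flag(400, [100, 110, 90, 105, 95])  # far outside peers
rates = fit_pattern_rates([
    ({"upcoding"}, True), ({"upcoding"}, True),
    ({"upcoding"}, False), ({"routine"}, False),
])
risk = predictive_score({"upcoding", "new_provider"}, rates)
```

The increasing complexity noted in the text is visible even in this toy form: the rules-based check is a single comparison, the anomaly check requires peer data accumulated over time, and the predictive score requires labeled historical outcomes.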
While the act called for first implementing the system in the 10 states at highest risk of fraud and incrementally assessing and expanding its use throughout the country until January 2014, program officials stated that analysts in all the zones—and covering all states—were provided the ability to access and use FPS when it was initially implemented. The officials stated that they took this approach because program integrity activities are implemented and managed within the seven zones rather than by states, and the 10 highest-risk states were dispersed across multiple ZPIC zones. According to the officials, making the system available to the 10 highest-risk states thus required making it accessible to all of the zones. Program officials further stated that use of the system by ZPICs in all the zones was intended to provide a national view of claims data and to allow the identification and tracking of fraud schemes that crossed zones. The FPS business owner added that while analysts assigned to the ZPICs were the primary intended users of FPS, the system was also made available to CMS’s internal program integrity analysts and to investigators with HHS OIG. System reports showed that during the first year of implementation, CMS authorized almost 470 analysts and investigators from the ZPICs, CMS, and the HHS OIG to use FPS, including about 80 from legacy Program Safeguard Contractors (PSC). Program officials reported that, of these, almost 400 analysts were actively using the system as of April 2012. Moreover, program officials told us that the system was being used by almost all the program integrity analysts expected to do so. To use the system, program integrity analysts access FPS via CMS’s secured network from workstations within their facilities.
As noted during our observation of a demonstration at CMS’s offices, FPS processed and analyzed claims data using the models, then prioritized the claims data for review based on whether they were consistent with scenarios depicted by the models. When the system identified high-risk claims data, it generated an alert based on that data. As more claims were screened throughout the day, the system automatically continued to generate alerts associated with individual providers. It then generated alert summary records (ASRs) for the providers and scored the risk level of the records based on collective results of the individual alerts. The system notified FPS users of the ASRs. The analysts using the system were to review the ASRs and conduct additional research to determine whether further investigation was needed to verify that the related claims were valid. As required by the act, CMS integrated FPS with existing data sources and systems that process claims. To integrate FPS with CMS’s existing information technology infrastructure, the contractors tasked to develop the system and models were required to capture data from several existing sources needed to provide a comprehensive view of activities across providers, beneficiaries, and geographies. Access to these sources was also needed to allow for analysis of Medicare provider networks, along with billing and beneficiary utilization patterns, in order to identify suspicious patterns or anomalies that could indicate fraud. For example, these data provide information about historical activities, including any suspicious activities related to a particular service or provider that had been noted in the past, or about the status of providers’ enrollment in the Medicare fee-for-service program. Thus, the data are needed by FPS to analyze incoming claims data to identify patterns of behavior like those known to indicate fraud. 
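Before turning to those data sources, the alert-to-ASR roll-up observed in the demonstration can be illustrated with a small sketch. The model weights and high-risk cutoff below are invented for illustration and are not CMS's actual scoring logic.

```python
# Hedged sketch of rolling individual model alerts up into provider-level
# alert summary records (ASRs) with a collective risk score. The weights
# and cutoff are invented, not CMS's.
MODEL_WEIGHTS = {"rules": 1.0, "anomaly": 2.0, "predictive": 3.0}

def build_asrs(alerts, high_risk_cutoff=5.0):
    """alerts: list of (provider_id, model_type) generated during screening."""
    scores = {}
    for provider_id, model_type in alerts:
        scores[provider_id] = scores.get(provider_id, 0.0) + MODEL_WEIGHTS[model_type]
    # Rank providers by collective score and flag those above the cutoff.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(p, score, score >= high_risk_cutoff) for p, score in ranked]

alerts = [
    ("PRV-A", "rules"), ("PRV-A", "anomaly"), ("PRV-A", "predictive"),
    ("PRV-B", "rules"),
]
asrs = build_asrs(alerts)
```

In this toy version, a provider that triggers several models accumulates a higher collective score than one that trips a single rule, mirroring how the text says ASRs were scored from the collective results of individual alerts.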
According to program officials and our review of system specifications, the contractors integrated supporting data from various sources, as identified in table 3. To facilitate analyses of claims data, fee-for-service and durable medical equipment claims are first transmitted to FPS from CMS’s Common Working File and the Common Electronic Data Interchange (both described in table 3). The system analyzes the claims data based on the types of models integrated into the system and the supporting data extracted from other CMS data sources, such as the Integrated Data Repository and the Provider Enrollment Chain and Ownership System. FPS’s analytical capabilities were integrated with CMS’s existing systems for processing fee-for-service claims, as required by the act. In describing this integration, program officials stated that claims data for medical services are transmitted to FPS after prepayment edits are applied by the “shared systems” (systems that the MACs use to process claims), usually 3 to 5 days from the time claims are submitted to CMS. All the fee-for-service claims data are transmitted to FPS at the same time they are submitted to the payment processing component of the shared systems. Figure 2 illustrates the integration of FPS claims analysis with CMS’s existing fee-for-service claims processing systems. While FPS was integrated with existing data sources and systems that process claims, it had not been further integrated with CMS’s claims payment systems. Specifically, FPS had not been integrated with the components of the shared systems that process the payment of claims. However, this level of integration is required to enable FPS to prevent the payment of potentially fraudulent claims until they have been verified by program integrity analysts and investigators. While the act called for the implementation of FPS by July 1, 2011, including this capability, the agency’s program plans initially indicated that it was to be implemented by July 1, 2012.
However, the business process owner of FPS stated that planning for the development of this system functionality required extensive discussions regarding design and requirements with entities that maintain and use other systems, particularly the shared systems. Consequently, FPS program officials did not complete requirements definition until May 2012. The official told us, and high-level program plans and schedules indicate, that CMS now intends to complete integration of the capability in January 2013. Although CMS has identified January 2013 as a target date for completing the development, testing, and integration of FPS with the claims payment systems, program officials had not yet defined detailed schedules for completing the associated tasks required to carry this out. Best practices, such as those described in our cost estimation guide, emphasize the importance of establishing reliable program schedules that include all activities to be performed; assign resources (labor, materials, etc.) to those activities; identify risks and their probability; and build appropriate reserve time into the schedule. However, FPS program officials had not yet developed such schedules and did not indicate when they intend to do so. Until it develops reliable schedules for completing associated tasks, the agency will be at risk of experiencing additional delays in further integrating FPS with the payment processing system, and CMS and its program integrity analysts may lack the capability needed to prevent payment of potentially fraudulent claims identified by FPS until they are determined by program integrity analysts to be valid. While CMS has not integrated FPS with its claims-payment system, it is using FPS to change how potential fraud is identified and investigated as part of its fraud prevention strategy. CMS has directed the ZPICs to incorporate the use of FPS into their processes and investigate high-risk leads generated by the system. 
The contractors with whom we spoke stated that investigations based on leads generated by FPS are similar to those from other sources. Further, CMS has taken steps to address certain initial challenges that ZPICs encountered in using FPS. CMS is using FPS to identify providers with aberrant billing patterns and prioritize those providers for investigation as part of its strategy to prevent Medicare fraud. With the implementation of the system, CMS directed the ZPICs to prioritize investigations of leads from the system that meet certain high-risk thresholds. CMS program integrity officials stated that, as of April 2012, about 10 percent of ZPIC investigations were initiated as a result of using FPS. By prioritizing these investigations, these officials told us that they intend for ZPICs to target suspect providers for investigation as soon as aberrant billing patterns that are consistent with fraud are identified, rather than targeting providers that have already received large amounts of potentially fraudulent payments. In addition, investigations of leads from FPS should be faster because the leads provide information about the type of fraud being identified, and the system is designed to provide investigators with data and statistical tools to conduct investigations. CMS program integrity officials also told us that the agency intends to use FPS to deny only a small number of claims without further investigation once it completes integration of FPS with its claims-payment system and that ZPICs will continue to coordinate with the MACs to take administrative actions against providers. In addition to directing ZPICs to investigate leads from FPS, CPI also established a working group, referred to as the command center, to work with and provide training to the ZPICs on how to use the system to initiate administrative actions more quickly against providers suspected of fraud. 
On a recurring basis, typically every 2 weeks, select staff from a ZPIC travel to CMS to receive training related to the system and to discuss current FPS trends and investigations. CMS officials stated that these training sessions and discussions help the analysts develop new and streamlined approaches for gathering evidence and taking action against providers suspected of potential fraud. For example, CMS conducted training with ZPIC staff on how to investigate system leads that target certain forms of fraud, such as fraud associated with home health services. In addition, ZPICs received training on how best to use the system to ensure that resulting administrative actions, such as revocations of providers’ billing privileges, are well supported by the evidence. For example, ZPICs received training on Medicare revocation policies and processes and were provided with examples of successful revocations that were initiated based on system leads. Finally, based on these training sessions, CMS continues to refine the system to provide investigators with data and statistical information useful in conducting investigations. Concurrent with the implementation of FPS and to further help move away from its pay-and-chase approach to detecting fraud, CMS has directed the ZPICs to focus on recommending and initiating administrative actions—especially the revocation of Medicare billing privileges—against providers suspected of fraud. As directed by CMS, ZPICs previously focused their investigative efforts on gathering evidence to verify overpayments and developing criminal and civil cases for law enforcement agencies—a lengthy process that often involved many investigative steps. 
In particular, CMS program integrity and ZPIC officials cited the large amount of time and resources involved in reviewing medical records—an investigative process in which staff with clinical backgrounds review claims to determine whether billed services are potentially fraudulent or inconsistent with clinical practice. According to CMS program integrity officials, the information provided by FPS is well- matched with the evidence necessary for ZPICs to recommend revocations against providers without having to conduct extensive investigations. These officials also told us that they have directed the ZPICs to focus on pursuing revocations because revocations prohibit providers suspected of fraud from billing Medicare. Moreover, revoking a provider’s enrollment limits the need to expend additional resources tracking their claims or gathering evidence to justify the denial of suspect claims as compared to other administrative actions, such as suspension of payments to suspect providers. All of the ZPICs have integrated FPS into their existing processes for identifying and investigating potentially fraudulent claims and providers. All but one of the ZPICs established FPS teams as a way to incorporate the system into their processes. These teams consist of ZPIC staff designated as the primary users of the system. The ZPICs generally take the following steps when using FPS: Monitor FPS and triage its investigative leads: Since CMS requires the ZPICs to conduct preliminary reviews of high-risk leads from the system, staff on the FPS teams monitor the system for new investigative leads—ASRs—that exceed the high-risk thresholds. CMS requires the ZPICs to determine whether the providers associated with those leads are “suspect” or “non-suspect.” These reviews are often conducted by the FPS teams. ZPIC officials told us that they often look for certain patterns associated with fraud when making this determination. 
For instance, identification of a provider that bills for a small number of beneficiaries but an excessive number of services may lead to a suspect determination. Refer suspect providers for further investigation: Suspect leads become formal investigations of the provider and are generally referred to other ZPIC investigators for further investigation. For example, a lead from FPS related to home health services may be referred to an investigator with expertise in that area. Conduct investigation: Once a lead from the system is assigned to an investigator, it is investigated similarly to other leads. The investigator can take multiple investigative actions to determine whether the provider is engaged in potential fraud, including interviewing the provider and the provider’s beneficiaries, conducting onsite audits to review a provider’s records and assess whether the provider’s facilities are appropriate for the services provided, determining whether there are other complaints against the provider, and conducting additional data analysis using FPS and other tools. The ZPICs can refer suspect providers to HHS OIG or recommend them for administrative actions. Officials from the ZPICs reported that FPS has not fundamentally changed the way in which they investigate fraud. The system has not, according to ZPIC officials, significantly sped up investigations or enabled quicker administrative actions in most instances. Instead, officials reported that leads from the system were broad indicators that particular providers were suspect, but did not in all cases provide sufficient evidence of potentially fraudulent billing to allow for faster investigations or resolutions. FPS investigations were similar to those from other sources in that they often required additional investigative steps, such as beneficiary and provider interviews. On the other hand, ZPICs reported certain advantages as a result of using FPS.
For example, analysts can query the system for specific data to support their analysis of leads and export data from FPS into other systems they use to conduct additional analysis of claim lines flagged by FPS. Data generated by the system may also notify investigators of information available in other CMS databases, such as the national Fraud Investigation Database. In addition, using FPS’s near-real-time claims data, some investigators reported identifying and conducting interviews with beneficiaries shortly after they received services from providers under investigation, when beneficiaries can better recall details about their care. Finally, information in FPS has also helped substantiate leads from other sources. For example, one ZPIC noted that its investigators use information from the system to help verify tips and complaints about suspected fraud. All ZPICs that we interviewed told us that they experienced initial challenges using FPS. CMS has been responsive to many of these challenges and has developed processes for soliciting and incorporating ZPIC input and feedback on the system and its use. Certain ZPICs attributed some early challenges with the system to CMS not soliciting their input during the development and initial implementation of FPS. CMS has since developed a process in which ZPICs submit “change requests” to propose changes to the system’s functionality and enhancements to the models so that they better target suspect providers. The command center also serves as a forum for ZPICs to discuss and provide feedback on FPS and its use. These processes for soliciting and implementing feedback are consistent with key practices we have previously identified for implementing management initiatives. In particular, feedback can provide important insights about operations from a front-line perspective. 
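One advantage noted above, using near-real-time claims data to interview beneficiaries soon after services are rendered, amounts to a simple filter over a recent-claims feed. The field names, dates, and 14-day window in this sketch are invented for illustration.

```python
# Sketch of using a near-real-time claims feed to find beneficiaries to
# interview while they still recall their care. Field names, sample data,
# and the 14-day window are invented, not FPS's actual data model.
from datetime import date, timedelta

def recent_interview_candidates(claims, provider_id, today, days=14):
    cutoff = today - timedelta(days=days)
    return sorted(
        {c["beneficiary_id"] for c in claims
         if c["provider_id"] == provider_id and c["service_date"] >= cutoff}
    )

claims = [
    {"provider_id": "PRV-A", "beneficiary_id": "B1", "service_date": date(2012, 6, 25)},
    {"provider_id": "PRV-A", "beneficiary_id": "B2", "service_date": date(2012, 4, 1)},
    {"provider_id": "PRV-B", "beneficiary_id": "B3", "service_date": date(2012, 6, 26)},
]
candidates = recent_interview_candidates(claims, "PRV-A", today=date(2012, 7, 1))
```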
The challenges ZPICs faced using FPS centered on several common themes, and CMS has taken steps to address these challenges: Impact on continuing proactive data analysis investigations: Officials from all of the ZPICs we interviewed reported that the implementation of the system represented a change in direction that limited some of their own proactive data analysis and investigations. This happened because the ZPICs were required to devote more time and resources to following up on leads from the system and less on investigations that were already under way from other sources, including earlier proactive data analysis. In addition to investigating leads from the system, the ZPICs investigate leads based on their own data analysis and cited specific advantages of their proactive investigations. Specifically, while FPS models address specific types of potential fraudulent activity, the ZPICs conduct proactive data analysis and investigations to target forms of fraud that are not addressed by those models. Additionally, ZPIC officials told us that fraudulent activity varies by region and proactive data analysis and investigations are needed to keep up with localized and emerging trends of fraud. CMS officials told us that they plan to have ZPICs continue their proactive data analysis and investigations in addition to those in response to FPS leads. Certain CMS directions for using FPS: ZPICs identified certain CMS directions for using the system that created workload challenges. For example, the agency initially directed the ZPICs to continue tracking and reevaluating providers that were determined to be nonsuspect, which led the ZPICs to expend resources investigating those providers. 
In response to ZPIC complaints about having to reevaluate providers determined to be nonsuspect, agency program integrity officials told us that they revised the policy so that the ZPICs only reevaluate nonsuspect providers under certain conditions and also modified FPS to alert ZPICs when such providers should be reexamined. False positives: ZPICs told us that certain FPS models identified and prioritized the investigation of a relatively high proportion of false positives—i.e., improper identification of suspect providers that were not engaged in fraud. Some of these false positives related to the nationwide application of models, which did not take into account localized conditions that may help explain certain provider billing patterns. For example, a physician in a rural area may provide care for beneficiaries dispersed across a large geographic range—something that would raise suspicion for a physician in an urban area. ZPICs also told us that the system sometimes prioritized leads that target forms of fraud that are not prevalent in their zone and that investigating such false positive leads has taken time away from other investigations. In response to ZPIC feedback that certain models produced a high number of false positive leads, CMS changed the way the system generates leads and how it assigns risk scores to providers identified by those models. According to program integrity officials, CMS is also considering approaches to control for geographic variations in fraud. FPS functionality: ZPICs cited challenges related to aspects of FPS’s functionality. For example, when first implemented, the system only provided data directly relevant to the aberrant billing patterns associated with its leads. ZPICs, however, told us that determining whether a provider is potentially suspect requires contextual and background information, such as provider profile and billing history information.
Because this information was not provided by FPS, the ZPICs had to use other sources to obtain this information. Based on this feedback, CMS updated the system so that its leads now provide users with contextual and background information on providers identified by the system. CMS’s use of FPS has generally been consistent with key practices for using predictive analytics technologies identified by private insurers and state Medicaid programs we interviewed. The use of sophisticated predictive analytics to address health care fraud—including predictive modeling and social network analysis—is relatively new, and not all insurers and programs that we interviewed use these techniques. The Automated Provider Screening system was implemented by CMS in December 2011. This system validates data received from providers when enrolling in Medicare and identifies providers that may be at high risk for fraud based on those enrollment applications. Integration of this system with the enrollment process should enable FPS to risk-score providers based on certain public records. Social network analysis is emerging as an important tool to combat organized health care fraud since it can be used to demonstrate linkages among individuals involved in fraud schemes. One official from a state Medicaid program noted that, since organized fraud operations often move from scheme to scheme, identifying the networks of individuals involved in fraud, rather than simply limiting their ability to perpetrate certain schemes, is increasingly important. While FPS does not yet include social network analysis, CMS program integrity officials were conducting a pilot to determine how to integrate social network analysis into future model development. These officials stated that they intend to analyze and implement results of the study, as appropriate, by the end of September 2012.
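As a sketch of the kind of linkage analysis described, the following groups providers into networks when they share identifiers such as an address or phone number. The sample data and linkage fields are invented, and this is not a description of CMS's pilot design.

```python
# Hedged sketch of social network analysis for fraud detection: link
# providers that share identifiers (address, phone) and group them into
# networks with union-find. All data and fields are invented examples.
providers = {
    "PRV-1": {"address": "10 Main St", "phone": "555-0101"},
    "PRV-2": {"address": "10 Main St", "phone": "555-0202"},  # shares address with PRV-1
    "PRV-3": {"address": "99 Oak Ave", "phone": "555-0202"},  # shares phone with PRV-2
    "PRV-4": {"address": "7 Elm Rd", "phone": "555-0404"},    # unconnected
}

def provider_networks(providers):
    parent = {p: p for p in providers}
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    seen = {}                         # identifier value -> first provider seen
    for pid, attrs in providers.items():
        for value in attrs.values():
            if value in seen:
                union(pid, seen[value])
            else:
                seen[value] = pid
    groups = {}
    for pid in providers:
        groups.setdefault(find(pid), set()).add(pid)
    return sorted(groups.values(), key=len, reverse=True)

networks = provider_networks(providers)
```

Here PRV-1, PRV-2, and PRV-3 form one network even though PRV-1 and PRV-3 share no identifier directly, which is the point the state Medicaid official makes: the network persists even as individual schemes change.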
Close and continuing collaboration between those developing predictive analytics systems and the investigative staff who use the systems improves analysis and helps limit false positives. Predictive analytics systems need effective and continuous feedback on the outcomes of investigations so that they can be refined and updated to better target fraudulent activity and reduce false positives. For example, investigative staff can guide the development of predictive models by providing information on emerging fraud schemes that they encounter during the course of their investigations. CMS has coordinated with the ZPICs to develop and refine FPS models. For example, CMS has obtained ZPICs’ input on emerging trends in potentially fraudulent activity to generate new ideas for FPS models. According to CMS program integrity and ZPIC officials, ZPIC staff with experience and expertise investigating particular types of fraud have been involved in developing FPS models. After models have been implemented, ZPICs have provided feedback on issues or challenges that they have encountered, which has subsequently been used by CMS to refine and update the models. Collaboration with external stakeholders, including other insurers and government health programs, can aid in the detection of fraudulent providers and leverages resources. Such collaborations enable information sharing about bad actors and emerging fraud schemes, which can be effective because providers engaged in fraud often do not target just one company or government program, but attempt to defraud many insurers and programs. CMS, along with other agencies involved in ensuring Medicare program integrity—specifically the HHS OIG, the Department of Justice, and the Federal Bureau of Investigation—have established a collaborative partnership with a number of private insurers and anti-health care fraud associations. 
A CMS program integrity official told us that CMS’s experiences with FPS will inform the information it shares with stakeholders and should enable the agency to share lessons learned regarding its use of predictive analytics with private insurers. Publicizing the use of predictive analytics technologies may deter providers from committing fraud. Providers may be more reluctant to commit fraud if they are aware of analytic systems in place to detect aberrant billing patterns. CMS has taken steps to publicize FPS among providers. For example, CMS distributed an article on its use of the system to the provider community and presented information on the system at a regional fraud summit and at other meetings attended by medical societies and other national health care organizations. Private insurers also noted that predictive analytics identified vulnerabilities related to waste and abuse. CMS had not resolved or had not taken significant action to resolve nearly 90 percent of the vulnerabilities identified by ZPICs in 2009. The Clinger-Cohen Act of 1996 and OMB guidance emphasize the need for agencies to forecast expected financial benefits of major investments in information technology and measure actual benefits accrued through implementation. Doing so is essential to ensure that these investments produce improvements in mission performance. In addition to the need to define and measure financial benefits, as part of capital planning and investment control processes, OMB requires agencies to define and report progress against outcome-based performance measures that reflect goals and objectives of information technology programs.
In doing so, agencies are required to set ambitious but achievable targets once performance measures are defined, establish milestones for meeting performance goals and targets that illustrate how progress toward accomplishing goals will be monitored by the agency, and conduct post-implementation reviews of systems to determine whether or not objectives were met and estimated benefits realized. OMB further requires agencies to submit business plans that address these elements throughout the life of a major investment to, among other things, provide a basis for measuring performance and identify who is accountable for deliverables of the program. The data reported in the plans are available to the public and are intended to provide Congress with critical information needed to conduct oversight of, and make decisions regarding, federal agencies’ investments in information technology programs. With regard to FPS, CMS had not yet defined an approach for quantifying the financial benefits expected from the use of the system. CPI officials stated that they had not yet determined how to quantify and measure financial benefits from the system, but that they intend to do so in the future. These officials stated their intention was to measure benefits based on savings resulting from the system’s contributions to the agency’s efforts to prevent payments of fraudulent claims. However, while CMS could potentially quantify financial benefits resulting from the amount of suspended payments or other administrative actions based on the results of FPS, the capability of the system that could provide benefits through the suspension of payments had not yet been implemented. The officials further acknowledged the difficulty with determining benefits or return on the agency’s investment in FPS in part because fraudulent providers’ knowledge of CMS’s use of the system could likely have a deterrent effect and, as intended, prevent fraudulent activity from occurring.
In these cases, the amount of costs avoided would be unknown. FPS program officials told us that they were conducting a study to determine ways to quantify these benefits and planned to include this information in the implementation report that CMS was required to issue to Congress by September 30, 2012. However, as of October 10, 2012, the agency had not yet issued the report. OMB requires agencies to report at least annually on updates to plans or business cases for certain information technology investments and monthly to update the status of agency efforts to complete planned activities and meet established performance metrics. In addition to the difficulties associated with the agency’s efforts to quantify financial benefits of implementing FPS, CMS has not established or reported to OMB outcome-based performance measures, targets, and milestones for gauging the system’s contribution to meeting its fraud prevention goals. As part of the fraud prevention program’s long-term vision to stop payment on high-risk claims, program officials defined two goals: implement predictive modeling and other analytic technology systems capable of reporting alerts based on risk scores applied to near-real-time claims data, beginning July 1, 2011, and identify potentially fraudulent payments before final payment is authorized by CMS. As required, CMS initially reported to OMB performance measures, targets, and milestones in a September 2011 investment plan. According to program officials, FPS stakeholders, such as CPI program managers, provided input into the development of these measures. However, in further discussions, the FPS business process owner stated that the information that had been reported to OMB in the 2011 plan did not reflect the current direction of the FPS program and that another plan was developed in January 2012.
The official stated that this latter plan was being used to manage the investment and that it identified different performance goals and measures than the one submitted to OMB. Specifically, whereas the plan submitted to OMB included as a performance target 60 new models to be developed and implemented in the system by July 2012, the revised plan, which had not been submitted to OMB, identified the implementation of 40 new models for the same time frame. Furthermore, the revised plan that CMS is using to manage the FPS investment does not define outcome-based performance measures that could be used to gauge progress toward the agency’s goal to identify potentially fraudulent payments of claims. Some of the performance measures defined in this plan—such as the number of trouble tickets generated or number of defects—can be used to monitor system performance, but cannot be used to measure progress toward meeting program goals. In this regard, CMS did not define measures or targets for meeting them that reflect the extent to which the system identifies potentially fraudulent claims. For example, such measures could track the number of ASRs in certain risk categories that result in investigations, revocations, payment suspensions, or other administrative actions that support the agency’s goal to prevent Medicare fraud. However, measures such as these, along with targets and milestones for meeting them, had not yet been defined. Program officials stated that they intended to refine the performance measures, targets, and milestones and submit a new FPS investment plan to OMB in June 2012; however, they have not yet done so, and it is unclear when they intend to submit a revised plan or refine the performance measures. The officials also said that they intended to present performance measures in the report that CMS was required to issue to Congress by the end of September 2012. However, as noted above, the agency has not yet issued the report. 
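Outcome-based measures of the sort described here, such as the share of automated system reports (ASRs) in a risk category that result in an administrative action, are straightforward to compute once dispositions are recorded. The records below are hypothetical and only illustrate the shape such a measure could take; they are not drawn from FPS data.

```python
# Hypothetical ASR records: each has a risk category and the
# administrative outcome, if any, of the resulting review.
asrs = [
    {"risk": "high", "outcome": "payment_suspension"},
    {"risk": "high", "outcome": None},
    {"risk": "high", "outcome": "revocation"},
    {"risk": "medium", "outcome": None},
]

high = [a for a in asrs if a["risk"] == "high"]
actioned = [a for a in high if a["outcome"] is not None]

# Outcome-based measure: share of high-risk ASRs that led to an
# administrative action (2 of 3 in this toy data set).
action_rate = len(actioned) / len(high)
```

A measure like `action_rate`, tracked against a target over time, is outcome-based in the sense GAO describes: it reflects whether leads translate into fraud-prevention actions, not merely whether the system is running.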
In refining the performance measures for the system, it will be important that the measures be based on desired outcomes of the overall fraud prevention program to help the agency gauge improvements attributable to the implementation of FPS. Further, while CMS’s technical review board requested FPS officials to conduct a post-implementation review 6 months after the system was implemented, program officials have not yet done so. These types of reviews are to be conducted to evaluate information systems after they become operational and determine whether their implementation resulted in financial savings, changes in practices, and effectiveness in serving stakeholders. In this regard, quantifiable financial benefits and measurable performance targets and goals provide information needed to conduct post-implementation reviews of systems. However, agency officials do not yet have the information needed to conduct such a review since they have not yet defined and measured any financial benefits realized as a result of using the system, or ways to measure its overall performance. Until the agency conducts its post-implementation review of FPS, CMS will be unable to determine whether the use of the system is beneficial and effective in supporting program integrity analysts’ ability to prevent payment of fraudulent claims, a key component of the agency’s broader strategy for preventing fraud in the Medicare program. As part of its efforts to move beyond a pay-and-chase approach to recovering fraudulent payments, CMS has taken important steps toward preventing fraud by implementing FPS in response to the Small Business Jobs Act of 2010. By integrating the system with its existing claims processing systems, the agency has provided most of the intended users an additional tool for conducting analysis of data soon after claims are submitted for payment and the ability to detect and investigate potentially fraudulent billing patterns more quickly.
As implemented, the system provides functionality that supports program integrity analysts across the country in their efforts to identify and prevent payment of potentially fraudulent claims until they are determined to be valid. CMS has also used FPS as a tool to better coordinate efforts with ZPICs, the contractors primarily responsible for investigating fraud. For example, CMS officials have directed the ZPICs to prioritize the investigation of high-risk leads generated by the system and to use the system as part of their processes for investigating potentially fraudulent claims and providers. Accordingly, the ZPICs we examined have integrated the use and outcomes of the system into their zone-specific processes. While they noted both advantages and initial challenges associated with the implementation of FPS, CMS has taken steps to address those challenges. Specifically, program integrity officials solicited users’ feedback and incorporated it into the system design to improve the functionality and use of the system. Further, while the use of sophisticated predictive analytics to address health care fraud is relatively new, CMS’s use of FPS has generally been consistent with key practices identified by private insurers and state Medicaid programs we interviewed. However, these entities leverage the results of predictive analytics to address broader program vulnerabilities, such as closing prepayment edit gaps and policy loopholes, and CMS could benefit from using the results of FPS to address vulnerabilities in the Medicare program that could lead to fraudulent payments. Despite these efforts, agency officials have not yet implemented functionality in the system needed to suspend payment of high-risk claims until they are determined through further investigation to be valid, and have not yet developed detailed schedules for doing so. 
Additionally, they have not yet determined ways to define and measure financial benefits of using the system, nor have they established outcome-based performance measures and milestones for meeting the performance targets that reflect the goals of the agency’s fraud prevention program. Until such performance indicators are established, FPS officials will continue to lack the information needed to conduct a post-implementation review of the system to determine its benefits and effectiveness in supporting program integrity analysts’ efforts to identify potentially fraudulent claims and providers. Furthermore, CMS officials, OMB, and Congress may lack important information needed to determine whether the use of the system contributes to the agency’s goal of predicting and preventing the payment of potentially fraudulent claims for Medicare services. In this regard, the contribution of FPS to the agency’s effectiveness in preventing fraud will remain unknown. To help ensure that the implementation of FPS is successful in helping the agency meet the goals and objectives of its fraud prevention strategy, we are recommending that the Secretary of HHS direct the Administrator of CMS to define quantifiable benefits expected as a result of using the system, along with mechanisms for measuring them, and describe outcome-based performance targets and milestones that can be measured to gauge improvements to the agency’s fraud prevention initiatives attributable to the implementation of FPS. CMS officials could consider addressing these two recommendations when reporting to Congress on the savings attributable to FPS’s first year of implementation. 
We are also recommending that the Secretary direct the Administrator of CMS to develop schedules for completing plans to further integrate FPS with the claims payment processing systems that identify all resources and activities needed to complete tasks and that consider risks and obstacles to the program, and conduct a post-implementation review of the system to determine whether it is effective in providing the expected financial benefits and supporting CMS’s efforts to accomplish the goals of its fraud prevention program. In written comments on a draft of this report, signed by HHS’s Assistant Secretary for Legislation (and reprinted in appendix II), the department stated that it appreciated the opportunity to review the report prior to its publication. Additionally, HHS stated that it concurred with all of our recommendations and identified steps that CMS officials were taking to implement them. Among these were actions to define quantifiable benefits realized as a result of using FPS, which agency officials intend to report in their first annual report to Congress. HHS also stated that CMS intends to establish outcome-based performance targets and milestones based on the first year of the system’s implementation and use, and that the agency has developed detailed plans and schedules such as those we described for further integrating FPS into the Medicare fee-for-service claims payment processing systems. Finally, the department stated that CMS plans to conduct a formal post-implementation review of the system in accordance with the agency’s standard operating procedures. If these and other actions that HHS identified are effectively implemented to address our recommendations, CMS should be better positioned to meet the goals and objectives of its fraud prevention program. HHS also provided technical comments on the draft report, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of Health and Human Services, the Administrator of the Centers for Medicare and Medicaid Services, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-6304 or melvinv@gao.gov, or (202) 512-5154 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of our review were to (1) determine the status of implementation and use of the Centers for Medicare and Medicaid Services’ (CMS) Fraud Prevention System (FPS) within the agency’s existing information technology infrastructure, (2) describe how the agency uses FPS to identify and investigate potentially fraudulent payments, (3) assess how the agency’s use of FPS compares to private insurers’ and Medicaid programs’ practices, and (4) determine the extent to which CMS defined and measured benefits and performance goals for the system and has identified and met milestones for achieving those goals. To determine the status of the implementation and use of the predictive analytics system, we reviewed FPS program management and planning documentation and held discussions with officials responsible for developing and implementing the system, including the business process owner, information technology specialist, and contracting officer, and with users of the system.
Specifically, to assess the extent to which FPS had been developed and implemented, we compared the functionality implemented to date to plans defined in project management artifacts such as statements of work, work breakdown structures, and system release notes. To determine the number of system users of FPS, we held discussions with CMS officials about the intended users of the system and obtained data describing the targeted user population and the actual number of users each month from July 2011, when the system was implemented, through April 2012. To assess the extent to which FPS had been integrated within CMS’s existing information technology infrastructure, we compared system documentation to agency modernization plans and other planning documents, such as project schedules and documents describing the system’s data flows and sources. To supplement this information, we discussed with agency officials their plans for and management of the FPS program. We also interviewed officials with the Office of Information Services and the Center for Program Integrity (CPI) to discuss the agency’s information technology modernization plan and the extent to which elements of the plan have been implemented, the use of agency systems as data sources for FPS, and how FPS is integrated into the existing IT infrastructure. Additionally, we viewed a demonstration of FPS given by CPI officials during our site visit to their offices. We focused our analysis on the extent to which CMS implemented and used the predictive analytics system within the existing IT infrastructure. 
To describe how the agency uses FPS to identify and investigate potentially fraudulent payments, we observed demonstrations of FPS during site visits to CMS and Zone Program Integrity Contractors (ZPIC)—the primary users who are contractors responsible for conducting fraud investigations in specific geographical zones and for following up on leads generated by the system—and interviewed CMS program integrity staff responsible for implementing FPS. We conducted site visits in two zones and interviewed officials from four other zones—including the legacy Program Safeguard Contractors that are being replaced by ZPICs—representing all fully operational program integrity contractors at the time of our audit work. The locations for the site visits were selected based on (1) whether the ZPIC had been fully implemented for more than a year and (2) if the ZPIC covered geographical areas that have been identified by CMS as having high levels of fraud risk. During these discussions we sought to, among other things, understand how the contractors use FPS, the benefits and challenges associated with their use of the system, and how it had been integrated with other tools and approaches used to detect potential fraud. We also reviewed relevant documents, such as the CMS Medicare Program Integrity Manual, statements of work for ZPICs, CMS guidance and directions to the contractors, and educational materials related to FPS. To assess how the agency’s use of FPS compares to private insurers’ and Medicaid programs’ practices, we examined the use of similar systems by private health insurers and Medicaid programs. To identify these users, we employed a methodology often referred to as “snowball sampling”: an iterative process whereby at each interview with knowledgeable stakeholders, we solicited names of insurers and Medicaid programs that were using predictive analytics until we had coverage of a broad range of users and perspectives. 
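The snowball sampling process described above amounts to an iterative frontier expansion: interview a seed organization, add every organization it newly names, and stop when no new names appear. The referral lists below are invented purely to illustrate the mechanics.

```python
# Hypothetical referral lists: each interviewee names other organizations
# known to use predictive analytics. All names are fictitious.
referrals = {
    "insurer_1": ["insurer_2", "medicaid_A"],
    "insurer_2": ["medicaid_B"],
    "medicaid_A": ["insurer_1"],
    "medicaid_B": [],
}

def snowball(seed, referrals):
    """Interview the seed, then everyone newly named, until no new names."""
    interviewed, frontier = set(), [seed]
    while frontier:
        org = frontier.pop()
        if org in interviewed:
            continue
        interviewed.add(org)
        frontier.extend(referrals.get(org, []))
    return interviewed

sample = snowball("insurer_1", referrals)
# All four organizations are eventually reached from the single seed.
```

As the report notes, a sample built this way is a nonprobability sample: it covers a broad range of perspectives but does not support generalization beyond the organizations interviewed.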
Our observations are based on interviews with five state Medicaid programs and nine private insurance companies. We selected a nonprobability sample of stakeholders to interview and, therefore, the information gathered from key stakeholders is not generalizable beyond the individuals we interviewed; however, the interviews provided insights into issues pertaining to all three objectives. While not all users employed sophisticated predictive analytics—including predictive modeling and social network analysis—at the time of our interviews, they all had experience with data analytics and were able to provide insights into process-oriented strategies for incorporating analytics into their antifraud efforts. Our understanding of predictive analytics and its use was also informed by trade journal articles and interviews with system vendors and health insurance and antifraud organizations. To determine the extent to which CMS defined and measured benefits and performance goals for the system and identified and met milestones for achieving those goals, we reviewed requirements established by the Office of Management and Budget (OMB) for agencies’ management of information technology investments and for reporting the status of those investments. We assessed efforts taken by CMS officials to meet OMB’s requirements. Specifically, we discussed with the FPS business owner and other program officials the steps they had taken and plan to take in efforts to define ways to measure financial and other quantifiable benefits of the system. We also discussed with them their approach to and processes for developing performance measures, targets, and milestones to determine the extent to which the system was producing outcomes that supported the agency’s fraud prevention strategies and goals. 
Additionally, we reviewed agency-wide strategic plans and program planning documents, and assessed the extent to which the system’s performance plans and objectives supported efforts to achieve the goals defined by these plans. We also examined reports submitted to OMB that included information about the system’s expected performance, and interviewed program officials about steps the agency had taken to achieve the goals and objectives. For each of the objectives, we assessed the reliability of the data we obtained from interviews with agency officials and users by comparing them to documents describing FPS’s program plans and status, information technology infrastructure, system design specifications, system usage reports, and performance goals and measures. We found the data sufficiently reliable for the purposes of this review. We conducted this performance audit from October 2011 to October 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Teresa F. Tucker, Assistant Director; Thomas A. Walke, Assistant Director; Neil J. Doherty; Michael A. Erhardt; Amanda C. Gill; Lee A. McCracken; Thomas E. Murphy; Monica Perez-Nelson; Kate F. Nielsen; and Eden Savino made key contributions to this report.
GAO has designated Medicare as a high-risk program, in part because its complexity makes it particularly vulnerable to fraud. CMS, as the agency within the Department of Health and Human Services (HHS) responsible for administering Medicare and reducing fraud, uses a variety of systems that are intended to identify fraudulent payments. To enhance these efforts, the Small Business Jobs Act of 2010 provided funds for and required CMS to implement predictive analytics technologies--automated systems and tools that can help identify fraudulent claims before they are paid. In turn, CMS developed FPS. GAO was asked to (1) determine the status of the implementation and use of FPS, (2) describe how the agency uses FPS to identify and investigate potentially fraudulent payments, (3) assess how the agency's use of FPS compares to private insurers' and Medicaid programs' practices, and (4) determine the extent to which CMS has defined and measured benefits and performance goals for the system. To do this, GAO reviewed program documentation, held discussions with state Medicaid officials and private insurers, and interviewed CMS officials and contractors. The Centers for Medicare and Medicaid Services (CMS) implemented its Fraud Prevention System (FPS) in July 2011, as required by the Small Business Jobs Act, and the system is being used by CMS and its program integrity contractors who conduct investigations of potentially fraudulent claims. Specifically, FPS analyzes Medicare claims data using models of fraudulent behavior, generating automatic alerts on specific claims and providers that are then prioritized for program integrity analysts to review and investigate as appropriate.
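As a rough illustration of how risk scoring can turn near-real-time claims into prioritized alerts, the sketch below applies a few invented rules and weights to a hypothetical claim. FPS's actual models, thresholds, and data elements are not public, so every rule, weight, and field name here is an assumption for illustration only.

```python
# Toy risk-scoring pass over an incoming claim. Rules and weights are
# invented; real predictive models are far more sophisticated.
RULES = [
    ("daily_services", lambda c: c["services_per_day"] > 24, 40),
    ("new_provider",   lambda c: c["provider_age_days"] < 90, 25),
    ("high_dollar",    lambda c: c["amount"] > 10_000, 20),
]
ALERT_THRESHOLD = 50

def risk_score(claim):
    """Sum the weights of all rules the claim triggers."""
    return sum(weight for _, test, weight in RULES if test(claim))

claim = {"services_per_day": 30, "provider_age_days": 45, "amount": 500}
score = risk_score(claim)          # triggers two rules: 40 + 25 = 65
alert = score >= ALERT_THRESHOLD   # True: routed to analysts for review
```

Scoring claims as they arrive, before payment is finalized, is what distinguishes this prevention-oriented approach from the pay-and-chase model the report describes.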
However, while the system draws on a host of existing Medicare data sources and has been integrated with existing systems that process claims, it has not yet been integrated with the agency's payment-processing system to allow for the prevention of payments until suspect claims can be determined to be valid. Program officials stated that this functionality has been delayed due to the time required to develop system requirements; they estimated that it will be implemented by January 2013 but had not yet developed reliable schedules for completing this activity. FPS is intended by program integrity officials to help facilitate the agency's shift from focusing on recovering large amounts of fraudulent payments after they have been made, to taking actions to prevent payments as soon as aberrant billing patterns are identified. Specifically, CMS has directed its program integrity contractors to prioritize alerts generated by the system and to focus on administrative actions--such as revocations of suspect providers' Medicare billing privileges--that can stop payment of fraudulent claims. To this end, the system has been incorporated into the contractors' existing investigative processes. CMS has also taken steps to address challenges contractors initially faced in using FPS, such as shifting priorities, workload challenges, and issues with system functionality. Program integrity analysts' use of FPS has generally been consistent with key practices for using predictive analytics identified by private insurers and state Medicaid programs. These include using a variety of data sources; collaborating among system developers, investigative staff, and external stakeholders; and publicizing the use of predictive analytics to deter fraud. CMS has not yet defined or measured quantifiable benefits, or established appropriate performance goals. 
To ensure that investments in information technology deliver value, agencies should forecast expected financial benefits and measure benefits accrued. In addition, the Office of Management and Budget requires agencies to define performance measures for systems that reflect program goals and to conduct post-implementation reviews to determine whether objectives are being met. However, CMS had not defined an approach for quantifying benefits or measuring the performance of FPS. Further, agency officials had not conducted a post-implementation review to determine whether FPS is effective in supporting efforts to prevent payment of fraudulent claims. Until program officials review the effectiveness of the system based on quantifiable benefits and measurable performance targets, they will not be able to determine the extent to which FPS is enhancing CMS's ability to accomplish the goals of its fraud prevention program. GAO recommends that CMS develop schedules for completing integration with existing systems, define and report to Congress quantifiable benefits and measurable performance targets and milestones, and conduct a post-implementation review of FPS. In its comments, HHS agreed with and described actions CMS was taking to address the recommendations.
According to DOD, the department relies on over 2.5 million unclassified computer systems, 10,000 local area networks, and hundreds of long-distance networks for mission-critical operations. These systems and networks run on multiple hardware and software platforms consisting of interconnected mainframes, systems, and network operating systems that often operate over public, commercial telecommunication lines. Security over these systems and networks involves multiple DOD and private sector organizations and is a difficult undertaking because of the ever-increasing number of cyber threats and attacks occurring over the Internet. Daily, DOD identifies and records thousands of “cyber events,” some of which are determined to be attacks against systems and networks. These attacks may be perpetrated by individuals inside or outside the organization, including hackers, foreign-sponsored entities, employees, former employees, and contractors or other service providers. Although historically DOD focused most of its security efforts on protecting the confidentiality of classified and sensitive information, this focus evolved as unclassified DOD systems and networks became increasingly exposed to cyber threats and attacks because of their connections with the public telecommunications infrastructure. After the “Morris Worm” attack crippled about 10 percent of the computers connected to the Internet in 1988, DOD acted—through the Defense Advanced Research Projects Agency—to establish the CERT Coordination Center at Carnegie Mellon University to address computer security threats. In 1992, the Air Force established the first military CERT to help address computer security threats and attacks internally. In 1994, a hacker from the United Kingdom raised concerns by launching a series of attacks against critical DOD research systems, demonstrating a need for better cyber defenses. Following these events, the Navy and Army established CERTs in 1995 and 1996, respectively.
During the 1990s, incident response organizations were also gradually being established throughout other agencies of the federal government. In 1996, the Federal Computer Incident Response Capability (FedCIRC) was established to assist federal civilian agencies in their incident handling efforts. Like DOD, civilian agencies continue to evolve and mature in their incident response capabilities. Even as greater attention has been paid to incident response, cyber threats and attacks continue to affect the operations of DOD and other federal systems and networks. Since 1998, a number of federal systems have been subjected to a series of recurring, “stealth-like” attacks, code-named Moonlight Maze, that federal incident response officials have attributed to foreign entities and are still investigating. More recently, the “ILOVEYOU” virus attack affected electronic mail and other systems worldwide. According to DOD officials, thousands of potential cyber attacks are launched against DOD systems and networks daily, though very few are successful in accessing computer and information resources. In 1999 and 2000, the Air Force, Army, and Navy recorded combined totals of 600 and 715 cyber attacks, respectively, during which intruders attacked DOD systems and networks in a variety of ways. Table 1 summarizes the numbers of recent documented cyber attacks reported by the military services. DOD and other organizations rely on a range of incident response activities to safeguard their systems, networks, and information from attack. These activities involve the use of various computer security tools and techniques as well as the support of systems and technical specialists. Incident response activities can be grouped into four broad categories: Preventive activities—such as conducting security reviews of major systems and networks and disseminating vulnerability notifications—are used to identify and correct security vulnerabilities before they can be exploited.
Detection activities rely on automated techniques, such as intrusion detection systems and the logging capabilities of firewalls, to systematically scan electronic messages and other data that traverse an organization’s networks for signs of potential misuse. Investigative and diagnostic activities involve (1) technical specialists who research cyber events and develop countermeasures and (2) law enforcement personnel who investigate apparent attacks. Event handling and response activities—responding to actual events that could threaten an organization’s systems and networks—involve technical and system specialists who review data generated by intrusion detection systems and determine what needs to be done. This includes providing appropriate internal and external officials with critical information on events under way and possible remedies for minimizing operational disruption. The objectives of our review were to (1) identify DOD’s incident response capabilities and how these capabilities are being implemented and (2) identify challenges to improving these capabilities. To do this, we worked at the DOD organizations primarily responsible for incident response activities at the departmentwide level and within the four services. Specifically, we worked at the U.S. Space Command in Colorado Springs, Colorado; the Joint Task Force for Computer Network Defense (JTF-CND) in Arlington, Virginia; the Defense Information System Agency’s DOD Computer Emergency Response Team (CERT) and Global Network Operations and Security Center in Arlington, Virginia; the Air Force’s Information Warfare Center and CERT in San Antonio, Texas, and Communication and Information Center, Rosslyn, Virginia; the Army’s Land Information Warfare Activity and CERT at Fort Belvoir, Virginia; the Marine Information Technology Operations Center in Quantico, Virginia; and the Navy’s Fleet Information Warfare Center and Computer Incident Response Team in Norfolk, Virginia. 
At these locations, we obtained and analyzed information on (1) policies, procedures, roles, and responsibilities for incident response, (2) intrusion detection and other incident response tools and databases, and (3) key oversight and incident reporting procedures. Technical reports and database description documents were obtained and reviewed. We also reviewed operations and strategic planning documents and reports on computer security events, incidents, and intrusions for January 1999 through December 2000. Finally, we met with senior DOD officials in the Office of the Secretary of Defense to discuss departmentwide information security programs, strategies, and plans. Our work was performed in accordance with generally accepted government auditing standards from April 2000 through January 2001. We did not verify the effectiveness of DOD’s incident response capabilities and did not evaluate incident response capabilities within DOD support agencies, such as DISA. We obtained written comments on a draft of this report from the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence. These comments are reprinted in appendix I. DOD has taken important steps to highlight the threat to its networks and systems and to enhance its ability to respond to computer incidents. For example, in 1997, DOD conducted a military exercise known as Eligible Receiver that demonstrated that hostile forces could penetrate DOD systems and networks and further highlighted the need for an organization to manage the defense of its systems and networks. A series of computer attacks against DOD systems in early 1998 further highlighted the need for a single departmentwide focal point for incident response. In December 1998, DOD established JTF-CND as the primary department- level agent to coordinate and direct internal activities aimed at preventing and detecting cyber attacks, containing damage, and restoring computer functionality. 
The services—Air Force, Army, Marine Corps, and Navy—were directed to provide JTF-CND with tactical support through their CERTs and other supporting components. The U.S. Space Command assumed operational control over JTF-CND in October 1999. JTF-CND serves as the departmentwide focal point for incident response activities. In 1998, DOD also established the Defense-wide Information Assurance Program (DIAP) to promote integrated, comprehensive, and consistent information assurance activities across the department. “Information assurance” refers to the range of information security activities and functions needed to protect and defend DOD’s information and systems. While JTF-CND coordinates and oversees incident response activities on a day-to-day operational basis, DIAP’s responsibilities include coordinating DOD plans and policies related to incident response. DOD’s network of CERTs, JTF-CND, and other related organizations engage in a variety of preventive, detective, investigative, and response activities, as described in further detail below. DOD’s preventive activities are aimed at stopping cyber attacks or minimizing the likelihood that they will succeed in penetrating systems or networks by exploiting known vulnerabilities. These activities have included (1) conducting vulnerability assessments of the security of DOD systems and networks, (2) using technical experts to try to surreptitiously gain access to systems and networks, thus exposing security weaknesses before adversaries can exploit them, and (3) alerting systems administrators to identified vulnerabilities. Conducting vulnerability assessments can help ensure that system and security software is properly installed and configured and that the proper configuration is maintained through any updates or other modifications. 
Upon request, the Air Force, Army, Navy, and National Security Agency conduct vulnerability assessments of DOD systems and networks using a variety of automated computer security assessment tools. These tools automatically check systems and networks for known security weaknesses and generate reports summarizing results. During 2000, the Air Force, Army, Navy, and National Security Agency completed over 150 assessments that identified hundreds of vulnerabilities for commands to address. Upon request, the services and the National Security Agency use groups of technical experts to play the role of hackers and attempt to penetrate DOD systems and networks by exploiting known security weaknesses in commonly used systems and software. These efforts help prepare military forces to defend against cyber attacks and are often conducted during military training exercises. In addition, DOD established a Joint Web Risk Assessment Cell (JWRAC), staffed by reservists, to continually review DOD web sites to identify sensitive information. According to DOD officials, during its first 6 months of operation, JWRAC reviewed about 10,000 Web pages and identified hundreds of discrepancies for corrective action. Even with these preventive efforts, new types of security vulnerabilities are being identified almost daily, and hackers are continually developing automated tools to take advantage of them. To keep its systems and networks current with the best available protection, such as up-to-date software patches, DOD depends on DISA’s Information Assurance Vulnerability Alert (IAVA) process, which distributes alerts, bulletins, and advisories on security vulnerabilities, as well as recommendations for repairing security weaknesses, to the military services and Defense agencies. 
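The assessment workflow described above (automatically checking systems for known weaknesses and generating a summary report) can be illustrated with a minimal sketch in Python. All host names, software versions, and vulnerability descriptions below are invented for the example; the report does not identify the actual assessment tools the services and the National Security Agency use.

```python
# Illustrative sketch only: checks an inventory of hosts against a
# hypothetical table of known-vulnerable software versions, the way
# automated assessment tools summarize findings. All names, versions,
# and issue descriptions are invented for this example.

KNOWN_VULNERABLE = {
    ("sendmail", "8.8.4"): "remote root via MIME header overflow",
    ("bind", "4.9.6"): "cache poisoning",
    ("wu-ftpd", "2.4.2"): "buffer overflow in SITE EXEC",
}

def assess(inventory):
    """Return a findings report: host -> list of (software, version, issue)."""
    report = {}
    for host, services in inventory.items():
        findings = [
            (sw, ver, KNOWN_VULNERABLE[(sw, ver)])
            for sw, ver in services
            if (sw, ver) in KNOWN_VULNERABLE
        ]
        if findings:
            report[host] = findings
    return report

inventory = {
    "mail1": [("sendmail", "8.8.4"), ("bind", "8.1.2")],
    "ftp1": [("wu-ftpd", "2.4.2")],
    "web1": [("apache", "1.3.9")],
}
report = assess(inventory)
print(len(report), "hosts with known weaknesses")  # mail1 and ftp1
```

A real tool would probe hosts over the network rather than consult a static inventory, but the summarize-by-host report structure is the essential output the paragraph describes.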
Since the program began in 1998, 27 alerts on potentially severe vulnerabilities and about 46 bulletins and advisories on lower risk cyber threats and attacks have been distributed to the services and Defense agencies for corrective action. Through their CERTs, the Air Force, Army, Marines, and Navy also disseminate to component commands hundreds of technical notifications on vulnerabilities that may require corrective action. In the area of incident detection, DOD relies largely on automated capabilities to identify significant cyber events—including attacks against systems and networks—as quickly as possible. Computer security technologies (such as intrusion detection systems and firewalls located at key network nodes) identify, track, and, if warranted, block inappropriate electronic traffic. Automated systems and tools are also used to collect, analyze, and display data on cyber events and to help establish a baseline of network activity to better identify anomalies and patterns that may indicate ongoing or imminent cyber attacks. Currently, DOD reports that about 445 host-based and 647 network-based intrusion detection systems are in operation to help safeguard its over 2.5 million unclassified host systems and the networks supporting them. Host-based intrusion detection systems monitor individual computers or other hardware devices and are used to automatically examine files, process accounting information, and monitor user activity. Network-based intrusion detection systems examine traffic or transmissions from host- based systems and other applications traversing key locations on the network. Nearly all of these safeguard systems are based on commercial products, except for the Air Force’s 148 Automated Security Incident Measurement Systems and the Joint Intrusion Detection Systems managed by DISA. 
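As an illustration of the core detection technique these systems share, the sketch below matches network traffic against a small set of attack signatures. The signature names, byte patterns, and sample packets are invented for the example; production systems, such as the commercial products and ASIM systems mentioned above, are far more sophisticated.

```python
# Minimal sketch of signature-based intrusion detection. The signatures
# and traffic below are invented for illustration only.

SIGNATURES = {
    "phf-cgi-probe": b"GET /cgi-bin/phf",
    "sendmail-wiz": b"WIZ",
    "portmap-dump": b"pmap_dump",
}

def inspect(packets):
    """Return alerts as (packet_index, signature_name) pairs."""
    alerts = []
    for i, payload in enumerate(packets):
        for name, pattern in SIGNATURES.items():
            if pattern in payload:
                alerts.append((i, name))
    return alerts

traffic = [
    b"GET /index.html HTTP/1.0",
    b"GET /cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd",
    b"HELO mail.example.mil",
]
alerts = inspect(traffic)
print(alerts)  # [(1, 'phf-cgi-probe')]
```

Host-based systems apply the same matching idea to local files, process accounting data, and user activity instead of network payloads.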
The Air Force is also developing the Common Intrusion Detection Director System to correlate data from its intrusion detection systems and other sources in near real time to better track network activity patterns and identify cyber attacks. The Army and Navy have similar initiatives under way to develop databases for correlating information from intrusion detection systems and other devices. In addition, the Defense Advanced Research Projects Agency is funding research to develop more sophisticated intrusion detection systems. Investigative and diagnostic activities involve the use of technical specialists to research cyber events and attacks, to develop appropriate technical countermeasures, and to coordinate information with law enforcement personnel responsible for investigating and prosecuting intruders. Several DOD organizations, including the National Security Agency and Air Force, have established teams to examine the software code used to execute viruses and other cyber attacks and to help identify technical countermeasures for stopping the attacks or preventing them from infiltrating systems and networks. The JTF-CND, Air Force, Army, Marines, and Navy also coordinate with law enforcement and counterintelligence agencies when investigating potential criminal activities associated with cyber incidents. In addition, JTF-CND is developing systems and procedures to better coordinate and exchange information with law enforcement and counterintelligence agencies. Finally, event handling and response activities involve disseminating information and providing technical assistance to system administrators so they can appropriately respond to cyber attacks. JTF-CND has been designated DOD’s focal point for sharing critical information on cyber attacks and other computer security issues with internal and external partners. 
The military services also rely on CERTs to provide information on cyber attacks and immediate technical assistance to system administrators in the event of computer attacks. CERTs have the capability to deploy personnel to affected locations if system administrators need help implementing corrective measures or containing damage and restoring systems and networks that may have been compromised. JTF-CND also has developed standard tactics, techniques, and procedures for responding to cyber incidents and sharing critical information on cyber threats and attacks. Further, it is developing standard policies for sharing information with external partners, such as the National Infrastructure Protection Center (at the Federal Bureau of Investigation) and the Federal Computer Incident Response Capability (at the General Services Administration). JTF-CND is also developing procedures to exchange critical information with the intelligence community and other Defense agencies. Although DOD has progressed in developing its incident response capabilities, it faces challenges in several areas, including departmentwide planning, data collection and integration, vulnerability assessment procedures, compliance reporting, component-level response coordination, and performance management. Addressing these challenges would help DOD improve its incident response capabilities and keep up with the dynamic and ever-changing nature of cyber attacks. Because the risk of cyber attack is shared by all DOD systems that are interconnected with each other and the public telecommunications infrastructure, it is important that incident response activities be well coordinated across the department. An attacker who successfully penetrates one DOD system is likely to use that system’s interconnections to attack other DOD computers and networks. 
Even if an attacker is at first unsuccessful in penetrating a particular system or network because it is well protected, such a person can go on to attack other systems and networks that may have vulnerabilities that are more easily exploited. For these reasons, it is important that incident response activities be coordinated departmentwide to ensure that consistent and appropriate capabilities are available wherever they are needed. DOD incident response officials agreed that coordination was important and report that the department has begun coordinating activities of the military services as part of the Planning, Programming, and Budgeting System process. However, DOD has not yet identified departmentwide priorities or funding requirements for incident response. Instead, each of the services annually determines its own incident response priorities and funding requirements; as a result, the resources committed to incident response vary substantially. For example, Air Force officials estimated that they would spend over $43 million for their Information Warfare Center and Computer Emergency Response Team in fiscal year 2000, whereas Navy officials estimated that they would spend less than $4 million on their corresponding activities. Given widely varying resource commitments and the lack of established departmentwide priorities, it is uncertain whether systems and networks are being consistently and appropriately protected from cyber attack across the department. According to DOD officials, it is difficult to identify departmentwide priorities, because no agreement has yet been reached on the core functions and characteristics of incident response teams among the multiple services and Defense agencies that currently field such teams. According to DOD officials, an effort is now under way at the department level to define those core functions and characteristics. 
Integrating critical data from heterogeneous systems throughout an organization is important for effective incident response because it helps to assess and address threats, attacks, and their impact on systems and networks. Sufficient information is needed to establish what events occurred and who or what caused them. As attacks become more sophisticated, obtaining this information can become more and more difficult, requiring more and better-integrated data. Attackers may go to great lengths to disguise their attacks by spreading them over long periods of time or going through many different network routes, so that it is harder for intrusion detection systems to notice that attacks are occurring. Because of the threat of these kinds of attacks, it is increasingly important to collect intrusion data from as many systems and sensors as possible. Although it has begun to develop several tools for tracking different kinds of incident data from across the department, DOD has only recently begun to implement key systems for integrating useful data from various intrusion detection systems and other heterogeneous systems, sensors, and devices for analysis. JTF-CND has taken steps to integrate intrusion data by sponsoring development of a Joint CERT Database to consolidate information on documented cyber attacks that have been collected individually by the services. According to DOD officials, the Joint CERT Database first became operational in January 2001. Work is also under way to develop a joint threat database as well as a database of law enforcement- related information. However, neither of these tools is yet operational. Integrating intrusion data from across the department is a significant challenge because many different systems are in use that collect different kinds of data. 
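The integration challenge can be made concrete with a small sketch: each service's records arrive in a different format and must be normalized to a common schema before joint analysis, which is the essential function of a consolidated database like the Joint CERT Database. The field names and sample records below are invented for illustration.

```python
# Hedged sketch of the data-integration problem: two hypothetical
# service-specific record formats are mapped onto one common schema.
# All field names and sample data are invented for this example.

def from_service_a(rec):
    return {"date": rec["dtg"], "source": rec["src_ip"],
            "category": rec["type"].lower()}

def from_service_b(rec):
    return {"date": rec["when"], "source": rec["attacker"],
            "category": rec["incident_class"].lower()}

def consolidate(feeds):
    """Normalize every feed into the common schema and merge the results."""
    joint = []
    for normalize, records in feeds:
        joint.extend(normalize(r) for r in records)
    return joint

feeds = [
    (from_service_a,
     [{"dtg": "2000-06-01", "src_ip": "10.1.2.3", "type": "PROBE"}]),
    (from_service_b,
     [{"when": "2000-06-02", "attacker": "10.9.8.7",
       "incident_class": "Intrusion"}]),
]
joint = consolidate(feeds)
print(len(joint), "normalized records")
```

The hard part in practice is not the mapping code but agreeing on the common schema itself, which is exactly the terminology problem the report describes next.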
Each of the services has deployed different intrusion detection systems to track anomalous network activity, and databases designed to track different types of specific data elements have been developed to synthesize raw data for analysis. Further, key information, such as data on insider attacks, is not yet tracked departmentwide. To help overcome this difficulty, JTF-CND also launched a project to establish common terminology for incident response to help standardize reporting of cyber incidents and attacks throughout the department. However, the task force has not yet been able to bridge significant differences among the military services regarding how to classify and report computer incidents. For example, the Air Force currently does not report “probes” to JTF-CND because it does not consider these events harmful until its systems or networks are actually under attack. Internally, the Air Force identifies thousands of probes of its systems and networks daily and told us that reporting this information to JTF-CND would provide little insight on cyber attacks. However, the Army and Navy do report probes to JTF-CND. Experts believe data on probes can be used to assess the likelihood of an attack in the future. This is because potential intruders typically use a series of probes to gather technical information about systems so that they can tailor an attack to exploit the vulnerabilities most likely to be associated with those systems. Thus a series of probes against a system or systems may indicate that a more concerted attack against the same systems is likely in the near future. Although DOD has had procedures in place since 1986 for the military services to conduct vulnerability assessments of systems and networks and collect information on security weaknesses, no process has been developed, either at the department level or within the services, for prioritizing the conduct of vulnerability assessments. 
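The use of probe data described above, in which a series of probes from one source often precedes a concerted attack, can be sketched as a simple threshold check over a probe log. The threshold value and addresses are assumptions made for the example, not parameters drawn from DOD practice.

```python
# Sketch of why probe data matters: a burst of probes from one source
# against multiple systems can foreshadow a concerted attack. The
# threshold and all addresses below are invented for illustration.

from collections import Counter

PROBE_THRESHOLD = 3  # assumed: flag sources seen probing 3 or more times

def flag_probe_sources(probe_log):
    """probe_log: list of (source, target). Return sources at/over threshold."""
    counts = Counter(src for src, _ in probe_log)
    return sorted(src for src, n in counts.items() if n >= PROBE_THRESHOLD)

log = [
    ("10.0.0.5", "mail1"), ("10.0.0.5", "web1"), ("10.0.0.5", "dns1"),
    ("10.7.7.7", "web1"),
]
print(flag_probe_sources(log))  # ['10.0.0.5']
```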
Instead, vulnerability assessments are generally conducted only when requested by component commanders or service-level audit agencies. Service officials agreed that there was no departmentwide process to identify which systems or networks faced the greatest risks and therefore should be assigned the highest priority for vulnerability assessments. Neither is there a mechanism to follow up on the results of these assessments to verify that security weaknesses have been corrected. Generally, the assessment teams do not verify that corrective action has been completed as recommended. The problem is compounded by the fact that, in some cases, component officials are not responsible for all the systems and network connections identified as having security vulnerabilities. No procedures are in place to ensure that the systems outside their responsibility are fixed. Furthermore, the information about vulnerabilities collected during these assessments is provided only to the affected components and not shared among the military services and Defense agencies. There is no process for ensuring that the results of these assessments are applied consistently and comprehensively to other similar systems and networks across the department. As a result, systems with the same vulnerabilities operating at other locations may not be addressed and thus may remain vulnerable. The DOD Office of the Inspector General (OIG) reported similar issues in 1997 and recommended that more be done to establish departmentwide priorities for conducting computer security reviews. Compliance with Information Assurance Vulnerability Alerts (IAVA) and other published guidance is critical because most successful attacks exploit well-known vulnerabilities. 
In 1999, for example, DOD reported that over 94 percent of its 118 confirmed cyber intrusions could have been prevented because they involved system access vulnerabilities that could have been remedied if organizations had followed recommendations already published through IAVAs and other security guidance. According to DOD officials, some of these fixes may have been completed but later inadvertently undone when systems were subsequently modified or upgraded. IAVAs are used to notify the military services and Defense agencies about significant computer security weaknesses that pose a potentially immediate threat and require corrective action. The services and Defense agencies are required to acknowledge receipt of the alerts and report on the status of compliance with recommended repairs within specified time frames. Also, DISA uses the IAVA process to disseminate technical bulletins and advisories about lower risk vulnerabilities and recommend ways to repair systems and networks. The military services, Defense agencies, and components are responsible for following recommendations in these notifications as they deem necessary. Although military components are required to report on the status of compliance with IAVAs, current status reports provide limited insight on the extent to which systems and networks are being repaired. The information provided by the military services is not complete and may not accurately reflect compliance across DOD. In December 2000, the OIG reported that the Marines and Navy were the only services providing required IAVA compliance information to DISA. In addition, based on information provided by the JTF-CND, corrective remedies specified in alerts, technical bulletins, and advisories issued as part of the IAVA process may not always be followed. Without full compliance and accurate reporting, DOD officials do not know whether critical systems remain vulnerable to known methods of attack. 
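The compliance-reporting gap can be illustrated with a minimal tracker that distinguishes components that fixed a vulnerability, components that only acknowledged the alert, and components that filed no report at all. The statuses and the alert identifier below are invented for the example.

```python
# Hedged sketch of IAVA compliance tracking: each component must
# acknowledge an alert and then report that fixes are applied.
# Component statuses and the alert ID are invented for this example.

COMPONENTS = ["Army", "Navy", "Air Force", "Marines"]

def compliance_summary(alert_id, reports):
    """reports: component -> 'acknowledged' | 'fixed' | absent (no report)."""
    summary = {"fixed": [], "acknowledged": [], "no_report": []}
    for c in COMPONENTS:
        status = reports.get(c)
        if status == "fixed":
            summary["fixed"].append(c)
        elif status == "acknowledged":
            summary["acknowledged"].append(c)
        else:
            summary["no_report"].append(c)
    return summary

reports = {"Navy": "fixed", "Marines": "fixed", "Army": "acknowledged"}
s = compliance_summary("IAVA-2000-01", reports)
print(s["no_report"])  # ['Air Force']
```

The point of the sketch is that a "no report" bucket is as important as the compliance buckets: without it, officials cannot distinguish repaired systems from systems that were simply never reported on.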
DOD officials are aware that the IAVA monitoring process as currently implemented is not adequate, and a draft revision to the existing IAVA policies and procedures is being developed. In December 2000, the U.S. Space Command hosted a conference to address compliance reporting problems and discuss possible ways to link IAVA compliance reporting with existing operational readiness reporting requirements. However, at the time of our review, no final action had been taken to improve the compliance reporting process. Coordinating responses to cyber attacks with internal and external partners, as well as law enforcement agencies, is important because it helps organizations respond to cyber attacks more promptly and efficiently, thus deterring cyber crime. Recognizing the need for this coordination, the Joint Chiefs of Staff established the Information Operations Condition (INFOCON) system in March 1999 as a structured, coordinated approach to react to and defend against attacks on DOD systems and networks. The INFOCON system defines five levels of threat and establishes procedures for protecting systems and networks at each level. These procedures were modeled after security requirements for bases, commands, and posts that require coordinated and heightened security when attacks are imminent or under way. The INFOCON system focuses on network-based protective measures and outlines countermeasures to unauthorized access, data browsing, and other suspicious activity, such as scanning and probing. Although the INFOCON system is a useful approach to standardizing incident response throughout DOD, the established measures provide only general guidance about the kinds of incident response activities that might be appropriate at each INFOCON level. Most decisions about what countermeasures to apply and how to apply them are left in the hands of systems administrators and other officials at individual DOD facilities. 
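The kind of more detailed guidance suggested above might take the form of an explicit mapping from each INFOCON level to recommended countermeasures. The five level names below follow the commonly cited INFOCON scheme; the measures listed are illustrative assumptions, not actual DOD guidance.

```python
# Sketch of a level-to-countermeasure mapping that more detailed
# INFOCON guidance might provide. The measures listed are invented
# assumptions for illustration, not actual DOD procedures.

INFOCON_MEASURES = {
    "NORMAL":  ["routine monitoring"],
    "ALPHA":   ["routine monitoring", "verify current patches applied"],
    "BRAVO":   ["increase log review frequency", "tighten firewall rules"],
    "CHARLIE": ["block nonessential inbound services", "notify CERT"],
    "DELTA":   ["disconnect nonessential systems", "execute continuity plan"],
}

def measures_for(level):
    if level not in INFOCON_MEASURES:
        raise ValueError(f"unknown INFOCON level: {level}")
    return INFOCON_MEASURES[level]

print(measures_for("CHARLIE")[0])  # block nonessential inbound services
```

Encoding guidance this explicitly would give systems administrators a consistent departmentwide starting point, rather than leaving each facility to improvise its own countermeasures.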
Lacking detailed guidance, the decision to apply countermeasures can be difficult for these officials in part because the countermeasures themselves may affect system performance. Inexperienced personnel may overreact and implement drastic countermeasures, resulting in self-inflicted problems, such as degraded system performance or communication disruptions. More detailed INFOCON guidance could outline operational priorities and other risk factors for consideration at each level to encourage consistent departmentwide responses to computer incidents. According to JTF-CND, the “ILOVEYOU” attack demonstrated problems in applying INFOCON procedures uniformly across the department and poor communications regarding the appropriate INFOCON level for responding to the cyber attack. Once the “ILOVEYOU” virus had emerged, it took DOD several hours to produce a departmentwide recommendation on the appropriate INFOCON level for responding to the attack. Individual commands independently chose a variety of different levels and responses. For example, some commands made few changes to their daily operational procedures, while others cut off all electronic mail communications and thus became isolated from outside contact regarding the status of the attack. The INFOCON system did not provide any specific guidance on the appropriate INFOCON level or procedures for responding to a virus attack. DOD recently organized a conference to examine ways to improve the INFOCON system, and DOD officials told us that revisions to the INFOCON procedures had been drafted that provide additional detail. However, at the time of our review, the revised procedures had not yet been issued. Further, according to a JTF-CND official, the revised procedures do not discuss the full range of system administrator actions that may be needed to address threats at each INFOCON level. 
The procedures also do not help systems administrators determine which systems are most in need of defensive actions to maintain support for critical operations. Establishing and monitoring performance measures for incident response is essential to assessing progress and determining whether security measures have effectively mitigated security risks. Leading organizations establish quantifiable performance measures to continually assess computer security program effectiveness and efficiency. DOD officials stated that some quantifiable measures have been established for incident response. For example, the Air Force, Army, Marines, and Navy identify the number and type of cyber incidents and attacks that occur annually and report this information to appropriate senior officials within DOD. In addition, the Deputy Secretary of Defense established a goal of sharing information on significant cyber incidents within 4 hours. Although progress has been made, DOD officials agreed that more could be done to improve incident response performance measures and goals. For example, DOD could track information on the time required to respond to cyber attacks and the costs associated with managing attacks. The Navy now collects some information on the staff hours used to manage cyber attacks, which could be helpful in establishing performance measures. This information also could be used to establish baselines for reporting and responding to various types of cyber attacks and could be linked to combat readiness and mission performance objectives. Space Command and JTF-CND officials indicated that some work was under way to establish performance parameters for incident response and to support joint military training requirements. Further, DOD conducts hundreds of computer security reviews of systems and networks annually but does not assess results from these evaluations to establish goals for improving computer security across the department. 
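One such quantifiable measure, the share of incidents for which information was shared within the department's 4-hour goal, can be computed directly from incident timestamps. The sketch below uses invented data to show the calculation.

```python
# Sketch of a quantifiable incident-response performance measure:
# percent of incidents reported within the 4-hour information-sharing
# goal. The incident timestamps are invented for illustration.

GOAL_HOURS = 4.0

def goal_compliance(incidents):
    """incidents: list of (detected_hour, reported_hour). Return % within goal."""
    within = sum(1 for detected, reported in incidents
                 if reported - detected <= GOAL_HOURS)
    return 100.0 * within / len(incidents)

incidents = [(0.0, 2.5), (1.0, 7.0), (3.0, 6.5), (4.0, 5.0)]
print(goal_compliance(incidents))  # 75.0
```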
Information from these reviews could be used to identify patterns or security weaknesses across the Department and to establish targets to reduce security weaknesses within high-risk areas or for mission-critical systems and applications. DOD has established significant incident response capabilities at the military services and mechanisms for centrally coordinating information assurance activities and incident response capabilities through DIAP and JTF-CND, respectively. However, DOD faces challenges in improving the effectiveness of its incident response capabilities, including (1) coordinating resource planning and priorities for incident response across the department; (2) integrating critical data from heterogeneous systems, sensors, and other devices to better monitor cyber events and attacks; (3) establishing a departmentwide process to periodically and systematically review systems and networks on a priority basis for security weaknesses; (4) ensuring that components across the department consistently report compliance with vulnerability alerts; (5) improving the coordination of component-level incident response actions; and (6) developing departmentwide performance measures to assess incident response capabilities and thus better ensure mission readiness. Acting to address these challenges would help DOD better protect its systems and networks from cyber threats and attacks. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence and the U.S. 
Space Command to work through DIAP and JTF-CND to
- finalize a departmentwide incident response plan, including objectives, goals, priorities, and the resources needed to achieve those objectives;
- expedite the development and enhancement of a complete set of systems for integrating and analyzing useful data from intrusion detection systems and other systems used to monitor computer security weaknesses, including tracking data on insider attacks;
- standardize terminology for computer incidents to facilitate the integration of incident data across the department;
- establish a systematic, departmentwide process for prioritizing and conducting vulnerability assessments of high-risk systems and networks and capabilities needed to support mission-critical operations;
- evaluate and monitor results from vulnerability reviews to ensure that recommended repairs have been made and have been applied to all similar systems throughout DOD;
- establish procedures to ensure consistent and complete reporting on the status of repairs required in IAVAs across the department;
- link IAVA compliance reporting requirements to mission-critical systems and operations to increase awareness of the value of complying with technical bulletins and advisories distributed as part of the IAVA process;
- refine INFOCON procedures to clarify the kinds of actions that need to be taken at each INFOCON level, especially with regard to priority systems, such as mission-critical systems; and
- establish a performance-based management process for incident response activities to ensure that departmentwide goals as well as combat requirements are achieved, including establishing goals for (1) reducing the prevalence of known security vulnerabilities in systems and networks that support mission-critical operations and (2) timeliness in responding to known types of cyber attacks. 
In written comments on a draft of this report, which are reprinted in appendix I, the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence stated that the department concurred with our draft report. In response to our second recommendation, DOD stated that the Joint CERT Database is now operational. We have clarified that this recommendation is to speed the development and enhancement of a complete set of systems for integrating and analyzing incident data, not just the Joint CERT Database. The department also provided technical comments that we have addressed as appropriate throughout the report. We are sending copies of this report to Representative Ike Skelton, Ranking Minority Member, House Committee on Armed Services; to Representative Curt Weldon, Chairman, and Representative Solomon P. Ortiz, Ranking Minority Member, Subcommittee on Military Readiness, House Committee on Armed Services; and to other interested congressional committees. We are also sending copies to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Paul Wolfowitz, Deputy Secretary of Defense; and the Honorable Arthur L. Money, Assistant Secretary of Defense for Command, Control, Communications, and Intelligence and Chief Information Officer. This letter will also be available on GAO’s home page at http://www.gao.gov. If you or your staff have any questions about this report, please call me on (202) 512-3317. Major contributors to this report included John de Ferrari, Karl Seifert, John Spence, and Yvonne Vigil.
This report reviews the Department of Defense's (DOD) implementation of computer incident response capabilities and identifies challenges to improving those capabilities. GAO found that during the last several years, DOD has taken several steps to build incident response capabilities and enhance computer defensive capabilities across the Department, including the creation of computer emergency response teams and incident response capabilities within each of the military services as well as the Defense Information Systems Agency and the Defense Logistics Agency. DOD also created the Joint Task Force-Computer Network Defense (JTF-CND) to coordinate and direct the full range of activities within the Department associated with incident response. GAO identified the following six areas in which DOD faces challenges in improving its incident response capabilities: (1) coordinating resource planning and prioritization activities; (2) integrating critical data from intrusion detection systems, sensors, and other devices to better monitor cyber events and attacks; (3) establishing a departmentwide process to periodically review systems and networks for security weaknesses; (4) increasing individual unit compliance with departmentwide vulnerability alerts; (5) improving DOD's system for coordinating component-level incident response actions; and (6) developing departmentwide performance measures to assess incident response capabilities.
Mr. Chairman and Members of the Subcommittee: We are pleased to have this opportunity to discuss matters related to deceptive mail marketing practices, which have been used by various organizations and individuals to induce consumers to purchase goods and services or send money for misrepresented purposes. My statement will include a brief summary of our previous testimony on the extent and nature of problems that consumers experienced primarily with mailed sweepstakes material. Also, I will discuss our most recent efforts to obtain updated information that could indicate the extent and nature of problems that consumers may have experienced with various types of mailed material that have been used to deceive, mislead, or fraudulently induce them into purchasing goods or services. This type of mail, known as deceptive mail, includes sweepstakes and other types of mailed material, such as lotteries and chain letters. Finally, I will provide information on initiatives in which various federal agencies and other organizations have participated to address consumers’ problems with deceptive mail marketing practices and help educate consumers about potential problems that could occur with such practices. Our most recent work on deceptive mail was done in response to your November 1998 request as well as an October 1998 request from the Permanent Subcommittee on Investigations and the Subcommittee on International Security, Proliferation and Federal Services, Senate Committee on Governmental Affairs. We are also providing copies of our statement to the chairs of the two Senate subcommittees. Mr. Chairman, as we agreed, the primary objective for our most recent work was to obtain updated available information on the extent and nature of consumers’ problems with various types of deceptive mail.
Also, we obtained updated available information on efforts by various federal, state, local, and nongovernmental organizations to address consumers’ deceptive mail problems and educate them about possible problems that could occur with deceptive mail marketing practices. In addition, through an outside contractor, we conducted a survey to obtain opinions from the U.S. adult population about specific types of deceptive mail. We did our work from November 1998 through July 1999 in accordance with generally accepted government auditing standards. We obtained comments on a draft of this testimony from the Federal Trade Commission (FTC) and the U.S. Postal Service, including the Postal Inspection Service and the Consumer Advocate. We included their comments where appropriate. We also arranged for the various state, local, and nongovernmental organizations that provided us information to review relevant sections of this testimony. We incorporated their technical comments where appropriate. Additional information about our approach is included in attachment I to this statement. As you are aware, Mr. Chairman, since the summer of 1998, much attention has been focused on consumers’ problems with deceptive mail. Various activities, including specific legislative proposals and hearings, have raised congressional and public awareness about problems that some consumers have experienced as a result of deceptive mail marketing practices. A recent example of such an activity was the May 1999 approval by the Senate Governmental Affairs Committee of proposed legislation entitled “Deceptive Mail Prevention and Enforcement Act” (S. 335), which was introduced in February 1999 by Senator Susan Collins.
In her introductory remarks, Senator Collins indicated that the proposed legislation was generally designed to help ensure that organizations that used various types of promotional mailed material, such as sweepstakes, were as honest and accurate as possible in their dealings with consumers. Provisions in the proposed legislation (1) authorized financial penalties against organizations that did not comply with proposed requirements; (2) authorized specific law enforcement actions, including the issuance of subpoenas, that the Postal Inspection Service could use in combating deceptive mail marketing practices; and (3) provided assurance that the proposed legislation would not preempt state and local laws that were designed to protect consumers against deceptive mail marketing practices. For a congressional hearing held in September 1998, we provided testimony in which we discussed information about consumers’ problems with specific types of deceptive mail and some initiatives that were intended to help educate consumers about potential deceptive mail problems. We found that comprehensive data indicating the full extent of consumers’ problems with mailed sweepstakes material and cashier’s check look-alikes were not available. However, FTC and the Postal Inspection Service had some data on complaints that could indicate the nature of consumers’ problems with deceptive mail. A sample of complaints from FTC showed that in many instances, consumers were required to remit money or purchase products or services before being allowed to participate in sweepstakes. Information about specific Postal Inspection Service cases that had been investigated largely involved sweepstakes and cash prize promotions for which up-front taxes, fees, or insurance were required before consumers could participate in sweepstakes promotions. In our previous testimony, we discussed two initiatives that were intended to address consumers’ problems with deceptive mail.
The initiatives included (1) Project Mailbox, which was established to help educate consumers and appropriately deal with organizations and individuals that attempted to defraud consumers through the use of mass mailings; and (2) a multi-state sweepstakes subcommittee that was designed to facilitate cooperation among states in dealing with companies that attempted to defraud consumers through the use of mailed sweepstakes material. With your permission, I would like to provide the Subcommittee a full copy of our previous testimony for inclusion into the record of today’s hearing. Comprehensive data that could indicate the full extent of consumers’ problems with deceptive mail were not available. Various officials from the agencies and organizations we contacted told us that such data were unavailable mainly because consumers oftentimes did not report their problems and no centralized database existed from which comprehensive data could be obtained. Due to the overall lack of comprehensive data, we contracted for a survey to obtain perspective on the extent to which consumers believed that they had received specific types of mailed material that appeared to them to be misleading or deceptive. Also, we identified two federal agencies—FTC and the Postal Inspection Service—that maintained some data that could provide insight into the nature of consumers’ problems with deceptive mail. However, these data may include some duplicative complaints because some consumers who filed complaints may have done so with both agencies. To obtain perspective on American consumers’ opinions about specific types of deceptive mail, we contracted with International Communications Research (ICR), a national market research firm, to conduct a statistically generalizable survey of adults 18 years of age or older in the continental United States.
The results of the survey, which was conducted in November 1998, indicated that 51 percent of the survey respondents believed that within the preceding 6 months, they had received mail involving sweepstakes or documents resembling cashier’s checks, known as cashier’s check look-alikes, that appeared to be misleading or deceptive. However, 45 percent of the respondents said they had not received such mail, and the remaining 4 percent were not sure, did not remember, or did not know. Additional analysis of survey results indicated that the higher the educational levels of respondents, the more likely they were to believe that they had received these types of deceptive mail. The percentages of respondents who believed that they had received such mail were about

- 43 percent for respondents with a high school education or less;
- 56 percent for those with some college education; and
- 62 percent for those with a completed college education or higher.

A similar trend was identified for respondents and their income levels in that at higher income levels, respondents were more likely to believe that they had received such mail. The percentages by income level included about

- 32 percent for respondents whose annual income was less than $15,000;
- 52 percent for respondents whose annual income ranged between $15,000 and $49,999; and
- 62 percent for respondents whose annual income was $50,000 or more.

For our updated work efforts, various officials and representatives of the agencies and organizations from which we obtained information again believed that the most appropriate source of consumer complaint data would be FTC’s Consumer Information System (CIS). According to FTC officials, the purpose of CIS, which was first established around February 1997 and became fully operational in September 1997, was to collect and maintain various data related to consumers’ complaints.
FTC officials told us that CIS data are used primarily by law enforcement organizations and officials to assist them in fulfilling their law enforcement duties. The CIS database contained a total of about 200 categories within which consumers’ complaints were included. The categories covered a wide range of topics such as (1) creditor debt collection, (2) home repair, (3) investments, (4) health care, and (5) leases for various products and services, such as automobiles and furniture. For the period October 1, 1997, through March 31, 1999, our analysis indicated that CIS included a total of 48,122 consumer complaints for which the methods of initial contact with consumers were identified. Such methods included mail; telephone; fax; printed material, such as newspapers and magazines; and the Internet. Of the 48,122 complaints, the largest number, 18,143, or about 38 percent, indicated that consumers were initially contacted through the mail. Of the 18,143 complaints, we found that in 10,145, or about 56 percent, of these complaints, companies had requested individual consumers to remit money. The total amount of money requested by the companies was reported to be about $88.2 million. Also, our review of the 18,143 consumer complaints showed that 2,715, or about 15 percent, of the consumers reported that they had remitted money to the companies. The total amount of money these consumers said they had paid was about $4.9 million. The amounts of money individual consumers said that they had paid ranged from less than $1 to over $1 million. Of these 2,715 complaints, about

- 50 percent were less than $100;
- 35 percent were between $100 and $999;
- 10 percent were between $1,000 and $4,999; and
- 5 percent were $5,000 or more.

The largest reported amount of money paid by a consumer was $1,734,000. Available CIS information indicated that this complaint involved a consumer’s concerns about a credit bureau referring inaccurate information to a debt collection agency.
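The complaint shares cited above are simple ratios of the reported CIS counts. As a quick arithmetic check (the figures are those reported in this testimony; the `share` helper is purely illustrative and is not part of FTC's system), the percentages can be recomputed as follows:

```python
# Recomputing the CIS complaint shares reported in this testimony.
# Figures come from the text above; this is an arithmetic check only,
# not a description of FTC's actual methodology or database.
total_complaints = 48_122   # complaints with an identified method of initial contact
mail_contacts = 18_143      # complaints where initial contact was by mail
money_requested = 10_145    # mail complaints in which companies requested money
money_remitted = 2_715      # mail complaints in which consumers reported paying money

def share(part, whole):
    """Percentage of `whole` represented by `part`, rounded to a whole number."""
    return round(part / whole * 100)

print(share(mail_contacts, total_complaints))  # -> 38 (about 38 percent by mail)
print(share(money_requested, mail_contacts))   # -> 56 (about 56 percent requested money)
print(share(money_remitted, mail_contacts))    # -> 15 (about 15 percent remitted money)
```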
In reviewing the 18,143 complaints in which consumers were initially contacted through the mail, we identified five CIS categories that included the highest number of consumer complaints, which totaled 10,776 complaints, or about 59 percent. The five categories included the following:

- Telephone: pay per call/information services, which can involve consumer complaints about calls to publicly available telephone numbers, such as 1-900 numbers, for which consumers incur per-minute charges in return for information or entertainment. Also, complaints can involve unauthorized charges on consumers’ telephone bills, also known as “cramming” (3,487 complaints).
- Telephone: carrier switching, also known as “slamming,” in which companies would switch consumers’ telephone services from one company to another without consumer authorization (1,051 complaints).
- Prizes/sweepstakes/gifts, which can oftentimes involve consumer complaints about mailed material that solicits advance fees for consumers to be able to participate in a sweepstakes or contest (2,859 complaints).
- Credit bureaus, which can generally involve consumer complaints about the methods by which such bureaus maintain and disseminate credit information (2,025 complaints).
- Third party debt collection, which can involve consumer complaints about methods used by various companies or individuals to collect debts owed by consumers (1,354 complaints).

For the five CIS categories, we found that a total of 10,355 complaints, or about 96 percent, included comments that could provide insight into the nature of problems that consumers had experienced with deceptive mail. We randomly selected 20 consumer complaints from each of the 5 categories for a total of 100 complaints. A discussion of the types of comments in the five categories and some examples follow.
Two of the five CIS categories involved problems that consumers reportedly experienced with mailed material that involved various telephone services, including pay-per-call and specific information services as well as slamming and cramming. Generally, consumers’ comments in these two categories focused on complaints about unauthorized actions by companies in providing various telephone services, including (1) switching telephone services from one company to another without consumer authorization, (2) charging consumers for services they never requested, and (3) charging for services that consumers claimed were cancelled. For the prizes/sweepstakes/gifts category, consumer comments focused on complaints about companies’ requirements for participating in sweepstakes. According to FTC, various requirements, such as advance payments, fees, or purchases of products, should not be required before consumers may participate in sweepstakes. Also, consumers complained about being required to call specific telephone numbers for which they were charged fees. In the credit bureaus category, the comments included consumers’ complaints about inaccurate information on their credit reports. Also, consumers expressed concerns about such issues as denial of credit and dissemination of credit information to companies and individuals without permission. For the third party debt collection category, consumer comments focused generally on harassment that consumers reportedly experienced from debt collectors. Such harassment included being called nasty names, receiving numerous telephone calls, and being treated without dignity. Also, some consumers disputed owing specific debts or the amounts of the debts. The Postal Inspection Service maintained two databases—the Fraud Complaint System (FCS) and the Inspection Service Data Base Information System (ISDBIS)—that included information related to consumers’ problems with deceptive mail. 
FCS was designed to collect and maintain consumer complaint information about various types of alleged fraudulent activities, including those involving deceptive mail marketing practices. ISDBIS was designed to be a case-tracking system that recorded information related to specific cases that postal inspectors used as they investigated specific organizations or individuals involved in various mailing activities that were allegedly intended to defraud consumers, businesses, and the federal government. To gain a better understanding of how consumer complaints about deceptive mail were included in FCS, we obtained information about the overall process through which consumers could file complaints with the Postal Service. According to Postal Inspection Service officials, if consumers have concerns or wish to file complaints about material that they have received through the mail, consumers may visit or call their nearby Postal Inspection Service offices or postal facilities, which included post offices, stations, or branches. If consumers’ concerns are related to mailed material that they believe is deceptive, misleading, or fraudulent, postal employees are expected to refer consumers to the Postal Inspection Service. The methods of these referrals generally include providing consumers with the telephone number or address of the appropriate local Postal Inspection Service office, the Internet website address of the Postal Inspection Service, or a Postal Inspection Service mail fraud complaint form. Also, Postal Inspection Service officials told us that in some cases, to provide additional assistance to consumers, postal employees may offer to forward the questionable mailed material directly to the Postal Inspection Service. We visited a total of 15 postal facilities to observe how postal employees referred consumers to the Postal Inspection Service. 
The facilities included post offices and stations in the metropolitan areas of Dallas, Texas; Los Angeles, California; and Washington, DC. At the facilities, we asked postal employees working at counters how to handle mail believed to be deceptive. At 8 of the 15 facilities we visited, postal employees appropriately referred us to the Postal Inspection Service. At the 7 remaining facilities, postal employees either referred us to organizations other than the Postal Inspection Service or were unable to provide any guidance. For example, two postal employees referred us to a national toll-free 1-800 number (i.e., 1-800-ASK-USPS). According to postal officials, consumers could reach the Postal Inspection Service through 1-800-ASK-USPS. We made three calls to 1-800-ASK-USPS to determine whether consumers could reach the Postal Inspection Service through this number. During one call, the responding customer service representative provided us with the telephone numbers of both the local consumer affairs office and Postal Inspection Service office. During the remaining calls, the representatives either provided us the telephone number for the local consumer affairs office or the address of the Direct Marketing Association (DMA), which we were told could remove consumers’ names from mailing lists. We obtained FCS data for an 18-month period (i.e., October 1, 1997, through March 31, 1999). The data we obtained focused on two of the four complaint categories within FCS—fraud and chain letters—because postal officials told us that these categories were most likely to include relevant information about consumers’ problems concerning deceptive mail. Our analysis of FCS data indicated that the Postal Inspection Service had received 16,749 consumer complaints regarding fraud and chain letters.
Complaints in the fraud category totaled 7,667, or about 46 percent, of the total complaints in these two categories, and 9,082 complaints, or about 54 percent, were included in the chain letter category. According to FCS data, no monetary losses were reported for the 9,082 complaints in the chain letter category. However, for the 7,667 complaints in the fraud category, a total of about $5.2 million in monetary losses was reported by consumers. These losses were reported in 2,976, or about 18 percent, of the 16,749 fraud and chain letter complaints. Also, the 2,976 complaints that cited losses amounted to about 39 percent of the 7,667 complaints in the fraud category. The remaining 4,691 fraud complaints, or about 61 percent, cited no monetary losses. For the 2,976 fraud complaints that cited monetary losses, the amounts of money individual consumers said that they had paid ranged from less than $1 to over $365,000. Of these complaints, about

- 55 percent were less than $100;
- 29 percent were between $100 and $999;
- 15 percent were between $1,000 and $29,999; and
- 1 percent were $30,000 or more.

The largest monetary loss reported by a consumer was $365,432. However, available FCS information was insufficient to describe the nature of the consumer complaint associated with this loss. Similarly, we attempted to determine the nature of other consumer complaints in the fraud and chain letter categories using a random sample of 50 complaints with comments from each category for a total of 100 complaints. For these complaints, we found that the comments were unclear or lacked sufficient detail to provide insight into the nature of consumers’ deceptive mail problems.
We recently learned from a Postal Inspection Service official that additional fraud complaints were contained in a third FCS category called “consumer complaint program.” According to the official, for the period October 1, 1997, through March 31, 1999, the category included a total of about 48,000 complaints, which generally involved such matters as fraud, bad business practices, or misunderstandings between consumers and companies. Although the Postal Inspection Service was unable to specifically identify how many of these complaints involved fraud, officials determined that about 4,000, or about 8 percent, of these complaints were associated with active mail fraud investigations. The officials, however, could determine neither the number of investigations involved nor whether these complaints led to such investigations. We obtained information from ISDBIS that focused on fraud against consumers. For fiscal year 1998, our analysis identified a total of 1,869 ISDBIS cases, which included 1,333 cases that carried forward into fiscal year 1998 from fiscal year 1997, and 536 cases that were opened during fiscal year 1998. The cases involved various types of allegedly deceptive mail marketing practices, including investment schemes, lotteries, fraudulent charity solicitations, work-at-home schemes or plans, and advance fee loan schemes. By the end of fiscal year 1998, 576 cases had been closed, of which 293, or about 51 percent, involved four top deceptive mail marketing practices or schemes. The four were (1) lotteries, (2) telemarketing, (3) investment schemes, and (4) work-at-home plans. During fiscal year 1998, the Postal Inspection Service initiated various law enforcement actions resulting from investigative cases involving the four top deceptive mail schemes. According to ISDBIS data, a total of 911 enforcement actions were taken, which included arrests, convictions, and other actions. 
Of the total actions taken, 480, or 53 percent, involved arrests and convictions. Also, ISDBIS data for sweepstakes showed that a total of 43 actions were taken. For our most recent work, we obtained updated information on the two initiatives that we discussed in our previous testimony, namely Project Mailbox and the National Association of Attorneys General (NAAG) multi-state sweepstakes subcommittee. Also, we obtained updated information from various federal, state, and local agencies and nongovernmental organizations about their recent efforts to help educate and make consumers more aware of the potential problems that could result from deceptive mail marketing practices. These efforts involved activities that were initiated by various organizations, including FTC, the Postal Inspection Service, state Attorneys General offices, and nongovernmental organizations, such as the American Association of Retired Persons (AARP) and NAAG. Project Mailbox was established to help educate consumers and appropriately deal with organizations, companies, and individuals that attempted to defraud consumers through the use of mass mailings. In fiscal year 1998, FTC, the Postal Inspection Service, and Attorneys General offices for various states initiated 203 law enforcement actions that targeted specific organizations, companies, and individuals that allegedly attempted to deceive, mislead, or defraud consumers through various mail marketing practices. The practices included a wide range of schemes, including not only sweepstakes, prize promotions, lotteries, advance fee loan schemes, and government look-alike mail, but also such schemes as guaranteed scholarships, vacation and travel packages, and fraudulent charity solicitations. Of the 203 law enforcement actions, FTC, the Postal Inspection Service, and various state Attorneys General offices provided us information on 101, or about 50 percent, of the actions, offering some perspective on them.
These federal and state organizations estimated that a total of about 841,000 consumers had purchased products and/or services from the organizations, companies, or individuals that were the targets of the law enforcement actions. Also, an estimated total of about $424 million was identified as sales to consumers or funds consumers had paid to the targeted organizations, companies, or individuals. We have no information on the extent to which deceptive mail problems may have been involved with the total number of consumers identified and the payments made. However, FTC, the Postal Inspection Service, and various state Attorneys General offices estimated that about 10,400 consumer complaints led to or initiated the 101 law enforcement actions. In February 1999, NAAG’s Subcommittee on Sweepstakes and Prize Promotion convened a hearing in Indianapolis, Indiana. The purpose of the hearing was to gather information about sweepstakes promotions and create consensus on the best approaches for deterring and punishing those who participate in fraudulent sweepstakes activities. Witnesses at the hearing included representatives of the direct mail marketing industry, individual consumers from various states, federal government representatives, and experts from the academic community. Based on information discussed at the hearing and lessons learned from years of investigations and litigation, the subcommittee generally recommended that the sweepstakes industry adopt specific voluntary practices to ensure that consumers are not misled. Some of the recommended practices included (1) clearly disclosing the odds of winning the sweepstakes or contest, (2) not representing or implying that ordering a product increases a consumer’s chance of winning, and (3) having a standard, simple, uniform means for entering sweepstakes both for consumers who place orders and those who do not. 
FTC, the Postal Inspection Service, and various state, local, and nongovernmental organizations have either completed or initiated efforts to help educate consumers and raise their awareness about problems that could result from deceptive mail. These efforts range from the establishment of a national toll-free hotline to the publication of consumer awareness articles. FTC has initiated or participated in activities to help consumers deal with deceptive mail marketing practices. For example, FTC

- established, on July 7, 1999, a national toll-free hotline (i.e., 1-877-FTC-HELP or 1-877-382-4357) that consumers could use to file complaints on various topics, including deceptive mail. According to FTC, the hotline is intended not only to make FTC more accessible to consumers who wish to file complaints but also to make consumer complaint data available to law enforcement agencies in the United States and Canada; and
- maintains a website through which consumers may obtain information that can help them address potential problems associated with deceptive mail. This information covers topics ranging from prize offers to magazine subscription scams to receipts of unordered merchandise.

In addition, FTC officials told us that FTC has continued to work with other organizations, such as NAAG, to encourage these organizations to share consumer complaint information with FTC, so that more comprehensive data on consumer complaints can be centrally collected and maintained in FTC’s Consumer Information System (CIS). CIS fraud consumer complaint data are made available to various law enforcement organizations through FTC’s Consumer Sentinel website. According to Postal Inspection Service officials, the Inspection Service’s efforts to educate consumers are important to its continuing fight against deceptive mail marketing practices. These efforts range from national to local activities that are designed to help consumers avoid being victimized by deceptive mail marketing practices.
For example, the Inspection Service

- mailed out postcards in May 1993 to about 210,000 households in the United States, informing consumers that they had won prizes and asking them to call a telephone number. However, when consumers called the number, they reached the Inspection Service and were warned against responding to the postcards because similar solicitations are often used by companies to scam consumers;
- is developing another postcard mailing to alert consumers to potential problems that could be caused by deceptive mail and telemarketing and to identify a national hotline through which consumers may file complaints. The postcards are to be distributed to about 114 million households nationwide in October 1999;
- distributed, in December 1994, a video news release that was sent to various television news stations throughout the United States. The video included information on how consumers could identify whether elderly relatives were having problems in handling mailed material from organizations; and
- is developing a video that will include information to help consumers avoid both problems with deceptive mail and other types of deceptive marketing practices via the telephone. The video is scheduled for distribution to about 16,000 public libraries around October 1999.

In addition, according to Postal Service field officials, the Service has helped and continues to help educate consumers and raise their awareness about deceptive mail practices. In many instances, postal field personnel work with their local postal inspectors to prepare news releases and make presentations before consumer groups. Officials in the state and local organizations that we contacted cited the following examples of their efforts to help educate consumers about deceptive mail. Representatives from the Connecticut Office of the Attorney General have conducted half-day consumer education sessions for groups of senior citizens to provide them information about deceptive mail.
Since January 1, 1999, the office has sponsored 4 sessions with about 1,000 consumers in attendance. Since January 1999, staff from Florida’s Division of Consumer Services have spoken to consumer groups, many involving senior citizens, about fraud-related issues. These efforts focused on telemarketing fraud but also involved discussions about deceptive mail, including sweepstakes. In April 1999, local consumer affairs staff from Montgomery County, Maryland, conducted an adult education class focusing on consumers’ rights and responsibilities that also provided information on sweepstakes and fake award notification letters. In the spring of 1999, the administrator of the Office of Consumer Affairs in Alexandria, Virginia, made a presentation on pyramid schemes received through the mail that pay commissions for recruiting distributors, not for making sales. The presentation was made to both staff in Alexandria’s Office of Aging and local consumers. Various nongovernmental organizations, including DMA, AARP, and Arizona State University, reported that to help educate consumers, they offered conferences and seminars as well as distributed information on deceptive mail marketing practices. Representatives of the organizations identified several examples, including the following: DMA prepares and distributes action line reports on deceptive mail problems, as well as other marketing issues. These reports are distributed to approximately 800 to 900 consumer affairs professionals and press contacts, who are encouraged to share the reports with consumers. A recent action line report, dated July 11, 1999, established a special Sweepstakes HelpLine, which is intended to help various caregivers, such as adult children who care for elderly relatives; consumer affairs personnel; and social service professionals address problems some people may have with sweepstakes.
AARP has conducted 26 training seminars throughout the United States that were attended by about 1,300 law enforcement professionals. The seminars were held during 1998 and provided the professionals with information on deceptive mail, including sweepstakes, prize promotions, and foreign lotteries. Arizona State University, in cooperation with AARP and the Office of the Arizona Attorney General, hosted a conference entitled “New Directions: Seniors, Sweepstakes and Scams.” The conference, which was held in October 1998, was designed for individuals who have been and continue to be involved in consumer education and awareness efforts. Among the conference attendees were representatives from FTC, the Postal Inspection Service, and NAAG. Information on deceptive mail marketing practices was presented and attendees were encouraged to share this information with consumers. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or the members of the Subcommittee may have. For future contacts regarding this testimony, please contact Bernard L. Ungar at (202) 512-8387. Individuals making key contributions to this testimony included Gerald Barnes, Anne Hilleary, Lisa Wright-Solomon, Anne Rhodes-Kline, and George Quinn. In developing the scope and methodology for our work, we first obtained a general description of the term “deceptive” as it could be applied to mailed material. According to FTC, mailed material would generally be considered deceptive if the material included a representation or practice or if the material omitted information that caused a consumer to be misled and eventually suffer some loss or injury, despite the fact that the consumer behaved reasonably under the circumstances. Both FTC and the Postal Inspection Service identified various types of mailed material that have been used to induce consumers to remit money, pay upfront fees, or purchase goods or services through deceptive means. 
However, in many cases, the promised goods or services were not delivered or were not of the quality that consumers may have reasonably expected to receive. Some examples included:

- lotteries from foreign countries or from states that did not have authorized lotteries.
- chain letters that required consumers to remit payments to participants in the chain letter scheme and promised substantial financial returns that were never delivered.
- mailed material that involved various types of consumer credit schemes, such as loans, credit repair offers, and credit card solicitations, for which advance fees were required.
- requests for charitable donations from organizations that were not legitimate charities.
- mailed material that looks as if it has been distributed or endorsed by a government agency, also referred to as government look-alike mail.

In some instances, mailed material may be illegal in that it violates specific postal or other statutes. For example, chain letters that request money or other items of value and promise a substantial return to the participants are generally illegal. Such letters are considered a form of gambling, and sending them through the mail violates section 1302 of Title 18 of the U.S. Code, the Postal Lottery Statute. To obtain updated information about the extent and nature of consumers’ problems with deceptive mail, as well as consumer education efforts, we attempted to contact the 17 federal, state, and local agencies and nongovernmental organizations that we contacted for our September 1998 testimony. In our earlier work, we identified these agencies and organizations as those that had been involved in dealing with consumers’ complaints about questionable or deceptive mail marketing practices involving mailed sweepstakes material and cashier’s check look-alikes.
The 17 agencies and organizations included 2 federal agencies—FTC and the Postal Inspection Service—as well as other state and local government agencies and nongovernmental organizations, such as state attorneys general offices for states such as Florida and West Virginia; local government offices that handled consumer protection issues; and various nongovernmental organizations, including (1) the American Association of Retired Persons; (2) the National Consumers League, which established the National Fraud Information Center; and (3) the Direct Marketing Association. Based on our most recent work, we obtained information from 12 of the 17 agencies and organizations, which are listed in attachment II to this statement. At the 12 agencies and organizations, we interviewed officials and reviewed documents to obtain available information about the extent and nature of consumers’ deceptive mail problems and consumer education efforts. Also, we obtained and analyzed consumer complaint data from FTC and Postal Inspection Service databases. In addition, during the course of our work, we obtained from FTC, the Postal Inspection Service, and 45 state attorneys general offices information on specific law enforcement actions involving organizations, companies, and individuals that attempted to defraud consumers through the use of deceptive mail. To obtain information about the consumer complaint process at the Postal Service, we interviewed postal headquarters officials in the Postal Inspection Service and the Postal Service’s Office of Consumer Advocate. Also, we interviewed postal officials at various field locations in different parts of the country who were knowledgeable about the consumer complaint process. Specifically, we spoke with consumer affairs and marketing officials in postal district offices and inspectors in Postal Inspection Service offices located in the metropolitan areas of Dallas, Texas; Los Angeles, California; and Washington, DC.
In addition, to obtain insight into how the consumer complaint process was implemented, we visited 15 postal field facilities, including post offices and stations, that were located in the metropolitan areas of Dallas, Texas; Los Angeles, California; and Washington, DC. These locations were selected mainly because staff from our Dallas Regional Office, as well as headquarters staff, were available to conduct face-to-face meetings with appropriate postal field employees. In addition, we had an outside contractor conduct a survey to obtain opinions from the U.S. adult population about specific types of deceptive mail. Through the survey, we attempted to determine whether survey respondents had received any mail delivered by the U.S. Postal Service within the last 6 months involving sweepstakes or documents resembling cashier’s checks that the respondents believed were in any way misleading or deceptive. We contracted with International Communications Research (ICR) of Media, Pennsylvania, a national market research firm, to administer our survey question, which was worded as follows. “We would like to ask you a question concerning mail delivered by the U.S. Postal Service. Within the last 6 months, have you received any mail delivered by the U.S. Postal Service involving sweepstakes or documents resembling cashier’s checks that you believe were in any way misleading or deceptive?” A total of 1,014 adults (18 and older) in the continental United States were interviewed between November 18 and 22, 1998. The contractor’s survey was made up of a random-digit-dialing sample of households with telephones. Once a household was reached, one adult was selected at random using a computerized procedure based on the birthdays of household members. The survey was conducted over a 5-day period, including both weekdays and weekends, and up to four attempts were made to reach each telephone number. 
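As a rough check on the precision that a sample of 1,014 interviews affords, the standard margin-of-error formula for a simple random sample can be applied. This is a sketch only, not the contractor's actual variance computation, which also reflects the demographic weighting adjustments applied to the survey data; weighting typically widens the interval somewhat, consistent with the plus or minus 4 percentage points cited for the weighted results.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion estimated from a
    simple random sample of size n, at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# For the 1,014 completed interviews:
print(round(margin_of_error(1014) * 100, 1))  # 3.1 (percentage points)
```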
To ensure that survey results could be generalized to the adult population 18 years of age and older in the continental United States, results from the survey were adjusted by ICR to account for selection probabilities and to match the characteristics of all adults in the general public according to such demographic groups as age, gender, region, and education. Because we surveyed a random sample of the population, the results of the survey have a measurable precision, or sampling error, which is stated at a certain confidence level. The overall results of our survey question regarding the public’s opinion about misleading or deceptive mail have 95 percent confidence intervals of plus or minus 4 percentage points or less. The practical difficulties of conducting any survey may introduce nonsampling errors. As in any survey, differences in the wording of questions, in the sources of information available to respondents, or in the types of people who do not respond can lead to somewhat different results. We took steps to minimize nonsampling errors. For example, we developed our survey question with the aid of a survey specialist and pretested the question prior to submitting it to ICR. We did our work from November 1998 through July 1999, in accordance with generally accepted government auditing standards. We did not verify consumer complaint data obtained from FTC and the Postal Inspection Service, nor did we verify data provided by FTC, the Postal Inspection Service, and state Attorneys General offices on specific law enforcement actions.

The 12 agencies and organizations from which we obtained information, and their locations, were as follows:

Federal government agencies:
- Federal Trade Commission (FTC), Washington, D.C.
- U.S. Postal Inspection Service, Washington, D.C.

State government agencies (Offices of Attorneys General):
- Connecticut, Hartford, Connecticut
- Florida, Tallahassee, Florida

Local government agencies:
- Citizen Assistance (Consumer Affairs) for the City of Alexandria, Alexandria, Virginia
- Consumer Affairs Division for Montgomery County, Rockville, Maryland

Nongovernmental organizations:
- American Association of Retired Persons (AARP), Washington, D.C.
- Arizona State University (Gerontology Program), Tempe, Arizona
- Direct Marketing Association (DMA), Washington, D.C.
- National Association of Attorneys General (NAAG), Washington, D.C.
- National Consumers League (NCL)/National Fraud Information Center (NFIC), Washington, D.C.
- U.S. Public Interest Research Group (USPIRG), Washington, D.C.
Pursuant to a congressional request, GAO discussed matters related to deceptive mail marketing practices, focusing on the extent and nature of consumers' problems with deceptive mail and the initiatives various federal agencies and other organizations have made to address deceptive mail problems and educate consumers. GAO noted that: (1) examples of deceptive mail include sweepstakes, chain letters, cashier's check look-alikes, work-at-home schemes, and fraudulent charity solicitations; (2) officials in various agencies and organizations said that comprehensive data on the full extent of consumers' deceptive mail problems were not available mainly because consumers often did not report their problems and no centralized database existed from which such data could be obtained; (3) however, data GAO collected from various sources suggested that consumers were having substantial problems with deceptive mail; (4) based on a GAO sponsored November 1998 statistically generalizable sample of the U.S. 
adult population, GAO estimates that about half of the adult population believed that within the preceding 6 months, they had received deceptive mailed sweepstakes material or cashier's check look-alikes; (5) officials from the Federal Trade Commission (FTC), Postal Inspection Service, and state Attorneys General offices estimated that in fiscal year (FY) 1998, about 10,400 deceptive mail complaints led to or initiated about 100 law enforcement actions; (6) for the period October 1, 1997, through March 31, 1999, FTC received over 18,000 deceptive mail complaints, of which about 2,700 reported consumer payments of about $4.9 million; (7) also, the Postal Inspection Service received over 16,700 complaints on fraud and chain letters, of which about 3,000 reported consumer fraud losses of about $5.2 million; (8) the Inspection Service also had over 1,800 open investigative cases on deceptive mail during FY 1998; (9) various federal agencies and other organizations have undertaken efforts to address consumers' deceptive mail problems and educate them about such problems; (10) for example, FTC established a national toll-free hotline for receiving deceptive mail and other complaints; (11) one joint effort was Project Mailbox, which involved such organizations as FTC, Postal Inspection Service, and various state Attorneys General; and (12) these organizations initiated over 200 law enforcement actions against companies and individuals that used the mail to allegedly defraud consumers.
The United States is divided into 94 federal judicial districts, each containing the federal trial courts, where criminal and civil cases are tried. Congress placed each of the 94 districts in 1 of 12 regional circuits, each containing a court of appeals, to which district court decisions may be appealed. Figure 2 is a map of the United States showing the geographical boundaries of the 94 district courts and the 12 regional circuit courts of appeals (including 11 numbered circuits and the District of Columbia Circuit). There is also a Court of Appeals for the Federal Circuit with nationwide jurisdiction over specific types of cases, such as patent appeals. This court does not hear cases involving the federal sentencing guidelines. In 1984, to help ensure that similar crimes committed by similar criminals were punished with similar sentences, Congress, under the Sentencing Reform Act, established the U.S. Sentencing Commission (USSC) and directed that it develop a comprehensive sentencing scheme for federal crimes. USSC established guideline ranges for the length of federal prison sentences, taking into account offender and offense characteristics to establish appropriate sentence terms. The sentencing guidelines cover more than 90 percent of all felony and Class A misdemeanor cases in the federal courts. The sentencing guidelines do not apply to Class B or C misdemeanors or infractions, offenses with a maximum prison exposure of 6 months or less. Applying USSC’s guidance, federal district judges in the 94 federal district courts determine the appropriate sentencing guideline range for an offender based on various factors related to (1) the offense and (2) the offender. The offense is assigned an offense level, which for drug offenses is based on several factors, such as the quantity and type of drug involved and whether the offense involved violence. The offender is also assigned a criminal history category based on the number of criminal history points.
Criminal history points reflect the severity of an offender’s prior criminal record. Taken in combination, the offense level and criminal history category correlate with a sentencing guideline range, which is expressed in months. In addition, for some drug offenses where a mandatory minimum sentence applies, the applicable mandatory minimum supplants the lower end of the applicable guideline sentencing range. For example, as shown in figure 3, a convicted offender whose offense of conviction is assigned an offense level of 25 and who has a criminal history category of I should be sentenced to between a minimum of 57 months and a maximum of 71 months under the sentencing guidelines, unless a mandatory minimum greater than 57 months (e.g., 60 months) is required; a sentence of less than 60 months would fall below the applicable mandatory minimum. The guidelines also permit, in certain circumstances, sentences that fall above or below an applicable guideline range, often called upward or downward departures, respectively. As illustrated in figure 3, a sentence of more than 71 months would depart upward from the applicable guideline range, while a sentence of less than 57 months would depart downward, falling below the lower end of the guideline range. At the request of the prosecution, the judge may depart downward because the defendant has provided substantial assistance to the government—what USSC designates as substantial assistance departures. But the guidelines also provide that a judge may depart downward if the court finds that certain mitigating circumstances exist that were not adequately taken into consideration by USSC in formulating the guidelines and that should result in a sentence below the guideline range.
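The interaction between a guideline range and a mandatory minimum described above can be sketched in a few lines of code. This is a simplified illustration only: the one-entry lookup table and the function names below are our own hypothetical constructions (the actual Sentencing Table spans 43 offense levels and 6 criminal history categories).

```python
# Hypothetical one-entry excerpt; the real Sentencing Table maps every
# (offense level, criminal history category) pair to a range in months.
GUIDELINE_TABLE = {
    (25, "I"): (57, 71),
}

def sentencing_range(offense_level, history_category, mandatory_minimum=0):
    """Return the (low, high) guideline range in months, with an
    applicable mandatory minimum supplanting the lower end whenever
    it exceeds that lower end."""
    low, high = GUIDELINE_TABLE[(offense_level, history_category)]
    return (max(low, mandatory_minimum), high)

print(sentencing_range(25, "I"))      # guidelines alone: (57, 71)
print(sentencing_range(25, "I", 60))  # with a 60-month minimum: (60, 71)
```

Any sentence below the adjusted lower end (60 months in the second call) would fall below the mandatory minimum, which under the statutes discussed later is permitted only via a substantial assistance motion or the safety valve.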
To assist sentencing courts, the guidelines list both encouraged departure factors (such as coercion or duress, diminished capacity, or aberrant behavior of non- violent offenders) and discouraged though permissible departure factors (such as age, physical condition, family responsibilities, or prior good works). Judges may also consider other, unmentioned factors that were not adequately considered by the guidelines (such as extraordinary rehabilitation after the offense but prior to sentencing). USSC designates consideration of encouraged, discouraged, and unmentioned factors as “other departures.” Judges are required to explain the reasons for departing from the guidelines. The recently enacted PROTECT Act of 2003 makes clear that the reasons must be specific, written, and provided to USSC. USSC maintains a database that records a variety of data on the offenders and offenses for which sentences are imposed. Judges must comply with USSC’s data collection needs by furnishing a written report of the sentence, and the PROTECT Act makes clear that specific sentencing documents must accompany that report. Included in these data is information on whether the sentences imposed fell within or outside the applicable sentencing guidelines range as determined by the sentencing judge. Information on the incidence of sentencing outside of the guideline ranges is used by USSC to identify areas where the sentencing guidelines may need adjustment. Congress, under the PROTECT Act, directed USSC to review the grounds of downward departures that are authorized by its sentencing guidelines, policy statements, and official commentary, and to promulgate amendments to ensure that the incidence of downward departures is substantially reduced. The required amendments to the sentencing guidelines are due October 27, 2003. 
As previously noted, various drug offenses carry a mandatory minimum. For such offenses, the mandatory minimum precludes judges from sentencing at a lower guideline range minimum or from granting a downward departure that might otherwise be available, unless one of two statutory provisions applies. First, a judge may impose a sentence below the applicable mandatory minimum if the government (the federal prosecutor) files a motion with the court for such sentencing relief because of the defendant’s “substantial assistance” in the investigation or prosecution of another person. The discretion to make such a motion rests solely with the prosecutor. Second, in the absence of a substantial assistance motion, the “safety valve” provision affords relief from any otherwise applicable mandatory minimum sentence for drug offenders who have minimal criminal history (i.e., no more than 1 criminal history point); were not violent, armed, or high-level participants; and provided the government with truthful information regarding the offense. In these cases, the court is directed by statute to impose a sentence pursuant to the sentencing guidelines without regard to a mandatory minimum. As incorporated in USSC’s sentencing guidelines, both the substantial assistance and the safety valve provisions may affect sentencing for offenders whose offense of conviction does not carry a mandatory minimum sentence—that is, whose sentences are governed solely by the application of the sentencing guidelines. For such offenders, a substantial assistance motion permits the judge to depart downward from the applicable guideline range. With respect to the safety valve, the sentencing guidelines provide offenders who are convicted of certain drug offenses and who meet the legislative safety valve requirements a 2-level decrease to their base offense level, for example, from level 25 to level 23.
The majority of federal sentences fell within an applicable guideline range, but when sentences departed downward, or fell below a guideline range, they did so about as often due to substantial assistance as to other reasons. Of the 162,090 federal sentences from fiscal years 1999-2001 for which complete sentencing information was available, most were within the guideline ranges determined by the court (64 percent), and about an equal proportion of sentences departed downward due to substantial assistance (18 percent) as for other reasons (17 percent). Similar to federal sentences overall, of the 69,279 drug sentences for which complete departure information was available, we found that most sentences were within guideline ranges (56 percent). Unlike federal sentences overall, from fiscal years 1999 to 2001, federal drug sentences departed downward more frequently due to substantial assistance (28 percent) than other reasons (16 percent), as shown in table 1. Other reasons that drug sentences departed downward included early disposition, that is, fast track, programs initiated by prosecutors; plea agreements; and judges’ consideration of mitigating circumstances. See appendix IV for more information on the frequency of reasons cited for other downward departures. Prosecutors’ substantial assistance motions resulted in downward departing sentences that were on average 49 percent of the average lowest sentence drug offenders otherwise would have received under the guidelines. Other downward departures resulted in sentences that were on average 57 percent of the average lowest sentence drug offenders otherwise would have received under the guidelines. See appendix I for more detailed information on sentence reductions. The percentage of drug sentences that departed downward due to prosecutors’ substantial assistance motions or for other reasons varied across judicial circuits in fiscal years 1999–2001, as shown in figure 4. 
The percentages of drug sentences departing downward differed notably across the 94 districts, even in some cases among districts within the same circuit. Figure 5 shows the 94 judicial districts grouped according to the percent of sentences imposed in districts that departed downward due to substantial assistance and for other reasons. In 55 districts, more than 30 percent of sentences departed downward due to substantial assistance, while in only 5 districts more than 30 percent of the sentences departed downward due to other reasons. However, these percentage differences do not take into account differences in offender and offense characteristics that may contribute to differences among circuits and districts. Of 41,861 federal drug sentences included in our analysis that carried a mandatory minimum term of imprisonment, more than half (52 percent) fell below an otherwise applicable mandatory minimum sentence. These sentences were split equally among those that fell below an otherwise applicable mandatory minimum sentence due to substantial assistance (26 percent) and those that fell below for other reasons, such as the safety valve (26 percent). (See fig. 6.) Nearly all of the mandatory minimum drug sentences carried either a 5-year (48 percent) mandatory minimum or a 10-year minimum (49 percent). On average, prosecutors’ substantial assistance motions reduced drug offenders’ 5-year mandatory minimum sentence by 33 months. Sentences lowered for other reasons, such as the safety valve, that would otherwise be subject to a 5-year mandatory minimum were reduced by an average of 26 months. On average, prosecutors’ substantial assistance motions reduced drug offenders’ 10-year mandatory minimum sentences by 63 months, and sentences lowered for other reasons that would otherwise be subject to a 10-year mandatory minimum were reduced by an average of 52 months. See appendix I for more detailed information on sentence reductions. 
Figure 7 provides a summary of the number and percent of federal drug sentences that fell below a mandatory minimum or guideline range compared with sentences that did not carry a mandatory minimum. Almost all of the sentences (99 percent) that fell below a mandatory minimum due to substantial assistance also departed downward from an applicable guideline range, whereas only a quarter of sentences that fell below an otherwise applicable mandatory minimum for other reasons also departed downward from an applicable guideline range. These percentage differences do not take into account offender and offense characteristics that may contribute to differences among circuits and districts. In 7 of the 12 Circuits, more sentences fell below a mandatory minimum due to substantial assistance motions than for other reasons, as figure 8 shows. In addition to these differences among the circuits, across the 94 districts the percentage of sentences meeting or below an otherwise applicable mandatory minimum substantially varied. Figure 9 shows the 94 judicial districts grouped according to the percent of sentences imposed that fell below an otherwise applicable mandatory minimum due to substantial assistance and for other reasons. In 40 districts, more than 30 percent of sentences fell below an otherwise applicable mandatory minimum due to substantial assistance whereas in 16 districts more than 30 percent fell below for other reasons. Appendix II has more details on our analysis of sentences for which the offense of conviction carried a mandatory minimum sentence. The percentage differences among circuits and districts suggest that variation existed in the way courts sentenced offenders; however, as discussed earlier, these percentages do not take into account factors such as the offender and offense characteristics that may affect sentencing within a circuit or district. 
For example, in addition to the number of deportable aliens potentially affecting the percent of other downward departures, some circuits and districts, when compared with others, could sentence a greater proportion of offenders who possessed and shared information of the crime that assisted the government in the investigation or prosecution of others. Therefore, a larger percent of offenders in those circuits and districts could have received a decrease in their sentence due to substantial assistance. Recognizing that judicial circuits and districts differed in the types of offenders sentenced and the offenses for which the offenders were sentenced, our analysis adjusted for differences such as race, gender, offense, criminal history, and offense severity (see appendix I for a complete list of variables included in our analysis). Although these are the major factors that could affect the likelihood of an offender receiving a departure, they are not all-inclusive. We used adjusted odds ratios to estimate how circuits and districts vary in sentencing practices. An adjusted odds ratio in this case indicates whether a departure is statistically less likely, as likely, or more likely to occur in one circuit as in another. We can describe how much more likely or less likely a substantial assistance departure was to occur in one jurisdiction versus another; for instance, we can estimate that a substantial assistance departure was 20 percent less likely to occur or 3.6 times more likely to occur in one circuit compared with another. Our analysis focused on adjusted odds ratios since they provide us with our best estimates of differences across circuits and districts after taking into account the differences in drug cases handled across jurisdictions. Using adjusted odds ratios to estimate the likelihood of an offender receiving a substantial assistance departure, we can see that percentage differences can be misleading. 
In comparing the percentage of substantial assistance sentences in the Eighth Circuit (37 percent) with other circuits, it appears that fewer offenders received substantial assistance departures in 5 circuits: D.C. (30 percent), the Second (32 percent), the Fourth (33 percent), the Seventh (31 percent), and the Eleventh (30 percent). But taking into account differences in offender and offense characteristics, adjusted odds ratios show that in the D.C., Fourth, and Eleventh Circuits, offenders were actually as likely to receive a substantial assistance departure as in the Eighth Circuit. Although the Second Circuit’s percentage of substantial assistance departures was lower than the Eighth Circuit’s, after adjusting for offender and offense characteristics, offenders in the Second Circuit were 1.4 times more likely to be granted a substantial assistance departure than offenders in the Eighth Circuit. In another example, the First Circuit appears to grant more other downward departures (10 percent) than the Eighth (8 percent), but the adjusted odds ratios imply that offenders in the First Circuit were actually 22 percent less likely to be granted an other downward departure than similar offenders in the Eighth Circuit. After adjusting for differences in drug offenses and offender characteristics, the likelihood of an offender receiving a lower sentence due to either a prosecutor’s substantial assistance motion or for other reasons varied substantially across the 12 U.S. circuits and the 94 U.S. district courts. For example, drug offenders sentenced in the Third Circuit from fiscal years 1999-2001 were over 3 times more likely to receive a substantial assistance departure at a prosecutor’s initiative than drug offenders sentenced in the First Circuit during that same time period. The likelihood that a drug offender would be granted an other downward departure also varied substantially.
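The odds-ratio arithmetic underlying these comparisons can be illustrated with the Second Circuit (32 percent) and Eighth Circuit (37 percent) figures above. The sketch below computes only the unadjusted odds ratio from the raw percentages; the report's adjusted odds ratios come from models that control for offender and offense characteristics, which is why the adjusted figure (1.4 in favor of the Second Circuit) can point in the opposite direction from the raw comparison.

```python
def odds(p):
    """Convert a proportion to odds, e.g., 0.32 -> 0.32 / 0.68."""
    return p / (1 - p)

def odds_ratio(p1, p2):
    """Odds of a departure at rate p1 relative to the odds at rate p2."""
    return odds(p1) / odds(p2)

# Unadjusted comparison of the Second (32%) and Eighth (37%) Circuits:
# a ratio below 1 suggests departures were less likely in the Second
# Circuit -- before case-mix differences are taken into account.
print(round(odds_ratio(0.32, 0.37), 2))  # 0.8
```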
Adjusting for differences in offender and offense characteristics, drug offenders sentenced in the Ninth Circuit in fiscal years 1999-2001 were over 18 times more likely to have received an other downward departure than similar drug offenders sentenced in the Fourth Circuit during that same time period. The likelihood that courts would impose a sentence below a mandatory minimum due to substantial assistance or other reasons also varied substantially across the 12 U.S. circuits and the 94 U.S. district courts, even after taking into account offense and offender characteristics for drug offenses. See appendix III for more details about the statistical likelihood of drug sentences departing downward or falling below a mandatory minimum and the variation in those likelihoods across all circuits and districts. Our analysis shows variation, in some cases substantial differences, among circuits and districts in the likelihood that offenders convicted of drug offenses would receive substantial assistance or other downward departures in fiscal years 1999-2001. However, these differences, in and of themselves, may not indicate unwarranted sentencing departures or misapplication of the sentencing guidelines. Empirical data on all factors that could influence sentencing were not available, so an analysis that could fully explain why sentences varied was not possible. USSC data were generally sufficient for our analyses of downward departures and mandatory minimum sentences across circuits and for most districts. USSC’s sentencing data are based on information from five documents usually produced for each case during the sentencing process. USSC requires that district courts send it these documents for each sentence imposed but principally relies on three of them—the Judgment and Commitment Order (J&C), Statement of Reasons (SOR), and Presentence Report (PSR)—to identify the length of sentences imposed, departures, and the reasons for departures.
Nationally, for drug cases, USSC received 96 percent or more of each of these documents from the district courts. The percentage of missing documents varied by circuit and by district within circuits. For example, among the circuits the percentage of missing SORs ranged from less than 1 percent to more than 7 percent. Among districts within the Ninth Circuit, the percentage of missing SORs ranged from less than 1 percent to more than 58 percent. Although USSC received most of the requested documents, some were missing key information or contained unclear information that was difficult to interpret. For instance, among the 12 circuits, departure data were missing for 1 percent to 7 percent of all drug sentences imposed in fiscal years 1999-2001. See appendix IV for more detailed information. In districts where the missing documents or information are concentrated, analyses of departures could be affected. Missing or unclear data also limited our ability to determine when the safety valve was used as the basis for sentencing below a mandatory minimum. For example, in our preliminary analysis we found that of the 11,256 federal drug sentences that carried a mandatory minimum for the offense of conviction and fell below that minimum, about 1,600 (14 percent) were coded by USSC as falling below the applicable mandatory minimum but not involving either the safety valve or substantial assistance. We discussed this issue with USSC. After reviewing the underlying documents used to code these 1,600 sentences, USSC determined that over 900 sentences were miscoded. These miscoded sentences were recoded in a variety of ways: some as involving the safety valve, some as involving substantial assistance, some as having a changed drug quantity that affected the applicable mandatory minimum, and some as missing safety valve information.
USSC did not recode 681 sentences; these remained coded as falling below a mandatory minimum but involving neither the safety valve nor substantial assistance. In addition, safety valve information was determined to be missing from 770 sentences for which the offense of conviction carried, and the sentence fell below, a mandatory minimum. AOUSC and USSC officials offered several explanations for missing documents and for missing or unclear information on documents that was difficult to interpret and code consistently. AOUSC officials noted that judges may not submit documents because of security concerns in cases where the record has been sealed or the offender has been placed in the witness protection program. They also noted that processing a high volume of drug cases could affect document submission. Of the four circuits with the highest volume of drug cases, two—the Fourth and the Ninth—also had the highest percentages of missing SORs, 7.4 percent and 6.6 percent, respectively. Although AOUSC developed a standard SOR form for judges to use, USSC officials said that judges submit information to USSC using a variety of forms and formats. In USSC’s view, this may contribute to missing information on documents (e.g., forms that do not prompt for an applicable mandatory minimum) or unclear information on the forms that is difficult to interpret. In addition to these explanations, USSC and AOUSC officials said no information had been provided to judges and other court officials on how USSC uses sentencing documents to create its database or how to prepare forms such as the SOR clearly and completely to meet USSC’s data collection needs. USSC relies almost exclusively on the SOR to determine whether the sentence imposed departed from the guidelines range, met a mandatory minimum, or involved substantial assistance. Thus, missing, incomplete, or difficult to interpret information on that form can affect the completeness and accuracy of the data in USSC’s database.
USSC has taken steps to reduce the number of missing documents and information, but opportunities for improvement exist. For instance, USSC sends an annual letter to the courts identifying those cases in which there appear to be missing documents. However, USSC officials said they do not inform courts of documents that, while received by USSC, contained missing or unclear information. In addition, USSC collaborates with the AOUSC and the Federal Judicial Center, the judiciary’s research and education body, to educate judges and court officials on how to apply the sentencing guidelines, but they do not offer programs or workshops on how to complete forms such as the SOR and other documents used by USSC. Although the AOUSC has also taken steps to improve the quality of sentencing data captured on the SOR, opportunities for improvement remain. At its September 2003 meeting, the Judicial Conference of the United States, the federal judiciary’s principal policymaking body, approved a new standard SOR form. The Conference designated the new form as the mechanism by which courts comply with the requirements of the PROTECT Act to report reasons for sentences to USSC. The form was revised in part because AOUSC officials and the Chair of the Judicial Conference’s Committee on Criminal Law stated that the previous SOR provided an imprecise measure of judicial discretion because it did not collect information on other downward departures that are initiated by the prosecution. The form was revised in consultation with USSC to better meet its data collection needs. However, a USSC official said that the new SOR does not specifically prompt for information on the application of the safety valve or whether the offense of conviction carried a mandatory minimum. In addition, judges will not be required to use this form, although USSC believes that the most effective step toward improving the completeness of the data the district courts report is for all courts to use a single, standard SOR.
AOUSC officials said that while the Judicial Conference has endorsed the new form, they do not believe that the Conference has the authority to require judges to use the new SOR. However, AOUSC officials stated that, with additional education, they believe judges will see the benefits of the new form and routinely use it. The judiciary provided, and USSC collected and interpreted, sentencing information for the vast majority of the 72,283 drug sentences imposed during fiscal years 1999-2001. The small percentage of documents and sentencing information for drug cases that were lacking in USSC’s database did not affect the validity of our analyses at the national or circuit level or for the vast majority of districts. However, the missing data could limit analyses of sentencing practices in the few districts where missing data are most prevalent. Reducing missing, incomplete, or difficult to interpret information would improve USSC’s data on departures and on the reasons sentences fell below an applicable mandatory minimum. Unless the judiciary’s standard SOR is revised, judges are made aware of how to complete the SOR effectively, and the courts submit data to USSC more consistently, these data problems are likely to persist. More could be done to help reduce the number of documents that are missing, incomplete, or too difficult for USSC to interpret. Evaluating how USSC interprets sentencing data was beyond the scope of this work. However, we believe that without changes to the way USSC reports other downward departures, that is, distinguishing other downward departures initiated by the government from those initiated by judges rather than combining them, the benefits of improved data collection on sentencing practices may not be fully realized.
As USSC and the AOUSC work together to collect and record information on federal sentences and to provide additional education and information to judges, we recommend that USSC and AOUSC continue to collaborate to (1) develop educational programs and information for judges and other officers of the court that encourage the use of AOUSC’s standard SOR and more effective ways to complete it and (2) revise the standard SOR to better meet USSC’s data collection needs. We also recommend, resources permitting, that USSC, in addition to notifying courts of missing sentencing documents, notify the Chief Judge of each district of documents for drug cases that were received but contained information that was unclear, incomplete, or difficult to interpret. We requested comments on a draft of this report from USSC, the AOUSC, the Judicial Conference Committee on Criminal Law, and the Department of Justice (DOJ). We received written comments on October 10, 2003, from USSC and the Judicial Conference Committee on Criminal Law. Both generally agreed with our report and our recommendations, although the Criminal Law Committee was concerned that the variation among districts we found in the likelihood of other downward departures could mistakenly be attributed to judicial discretion. Their official comments are reproduced in appendix V. We received oral technical comments from USSC and written technical comments from DOJ and the AOUSC, which we incorporated where appropriate. DOJ, the Committee on Criminal Law, and USSC all noted that “fast-track” sentences (prosecutor-initiated programs to encourage early case disposition and reduce the burden on the courts) could account for some of the variation we found among districts in other downward departures. We have added new tables in appendix IV that show the reasons reported to USSC for other downward departures and the frequency with which each reason was cited. USSC generally agreed with our report and our recommendations.
USSC stated that it is already working to develop more detailed sentencing documentation, submission procedures, and educational outreach to courts and court personnel. USSC noted that while our recommendations are helpful and consistent with its own thinking, implementing such measures may exceed its current resources, given the increasing volume of sentences to be processed and the more detailed information required for each sentence by the PROTECT Act. The Judicial Conference Committee on Criminal Law generally agreed with our report and our recommendations. The Committee noted that it has taken significant steps to help USSC improve its data collection by revising the standard Statement of Reasons form and endorsing the standard form as the way to comply with PROTECT Act requirements. The Committee commented that our report did not address the extent to which judges themselves, absent a prosecutor’s request, have imposed sentences that fall below the sentencing guideline range. Further, the Committee noted that our report did not sufficiently distinguish downward departures that are due to judicial discretion from those that are due to prosecutorial discretion. As a result, according to the Committee, the category “other downward departures” invites confusion, and some may mistakenly attribute all such departures to judges. We state in our report on page 11 that other downward departures are attributable to prosecutors as well as judges. Additionally, data are not recorded, coded, or reported in ways that clearly delineate other downward departures due to judicial discretion from those due to prosecutorial discretion. In addition, DOJ, USSC, and the Committee on Criminal Law stated that our report did not sufficiently discuss the impact of early disposition or “fast-track” programs on rates of other downward departures in those circuits and districts where such programs were in place.
Fast-track programs in the southwest border districts provide lower sentences, initiated by prosecutors, for low-level drug trafficking offenses. DOJ noted that these programs were developed in response to a dramatic rise in immigration cases handled by federal prosecutors in districts along the southwestern border and were designed to enhance public safety and minimize the burden on the court system by processing these cases as quickly as possible. All of the agencies took the position that some circuits and districts departed downward more than others because fast-track cases were more prevalent in those circuits and districts. It may ultimately be useful to distinguish fast-track departures from other downward departures, in the same way that we have distinguished substantial assistance departures from other downward departures. However, as currently coded in USSC’s database, fast-track cases can be identified only when a judge explicitly lists fast track as a reason for a downward departure. Sentences citing fast track as a reason for departing downward occurred almost entirely in one district, the Southern District of California, in which 2,171 sentences (58 percent of other downward departures imposed in this district) were recorded by USSC as departing due to the government’s fast-track program. In all of the remaining 93 districts combined, only 9 sentences were recorded as departing downward due to fast-track programs. Moreover, when we eliminated from our analyses those other downward departures that listed “fast track” as a reason for departing, we obtained results very similar to those published in this report; that is, a greater likelihood of other downward departures occurring in the Southern District of California than in most other districts. We do not include these results in detail in our report because of our concern that fast-track cases are not always reported by judges as such or coded by USSC in its database.
If fast-track departures are to be distinguished from other departures, then changes will need to be made in how such cases are reported to USSC. USSC is completing a report on departures, pursuant to a congressional directive in the PROTECT Act, and will address the impact of fast-track programs on departures in greater detail. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 2 days from the date of this report. At that time, we will send copies of this report to the AOUSC and Judicial Conference; DOJ; USSC; and the Federal Judicial Center. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact David Alexander at (202) 512-8777 or at Alexanderd@gao.gov or me at (202) 512-8777 or at jenkinswo@gao.gov. Major contributors to this report are listed in appendix V. Our objectives were to review and categorize all sentences imposed under the federal sentencing guidelines by federal district judges in fiscal years 1999 through 2001. Specifically, our objectives were to (1) identify the percentage of federal sentences, and specifically those for drug-related offenses, departing downward from the applicable guidelines range as determined by the court due to substantial assistance motions or other reasons; (2) identify the percentage of federal drug sentences that fell below an applicable mandatory minimum due to substantial assistance motions or other reasons; (3) compare the likelihood across judicial circuits and districts that offenders received downward departing sentences or sentences below a mandatory minimum; and (4) identify limitations, if any, of the U.S. Sentencing Commission’s (USSC) sentencing data for drug offenses. To meet these objectives, we obtained USSC sentencing data for fiscal years 1999 through 2001.
During fiscal years 1999 through 2001, federal judges imposed sentences on 175,245 criminal offenders. Of this total, 11,584 sentences (6.6 percent) lacked information on whether there was a departure from the guidelines range, and for 1,046 sentences (0.6 percent) the guidelines were not applicable. An additional 167 sentences (0.1 percent) lacked information on the type of offender sentenced (drug versus non-drug), and 358 sentences (0.2 percent) lacked information on both departure status and type of offender. For the remaining 162,090 sentences (92.5 percent), table 2 shows the numbers and percentages of departure and non-departure sentences for drug and non-drug sentences. Of the 175,245 offenders sentenced, 72,283 (41 percent) were convicted of drug offenses, and of those, 69,279 had complete information on departure status. Roughly 11 percent of the non-drug offenders and 28 percent of the drug offenders received sentences below the guidelines range due to substantial assistance, while 18 percent of the non-drug offenders and 16 percent of the drug offenders received sentences that departed downward for other reasons. Of the 69,279 drug sentences that had valid information for USSC’s departure variable, 42,145 (61 percent) carried a mandatory minimum. Our analyses of mandatory minimum sentences excluded 284 of these sentences (0.6 percent of all mandatory minimum sentences) that lacked the valid sentence length information necessary to determine whether the sentence fell below the minimum. We had extensive discussions with knowledgeable USSC staff about the definitions and use of the data elements in our analysis. USSC takes many steps to ensure the reliability and completeness of the data it receives from districts. We did not independently validate the data in USSC’s database; however, we did assess the quality of the USSC data in our analysis by testing and crosschecking selected data elements for internal consistency.
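The case-accounting figures above can be tallied directly; all inputs below are taken from the report, and the script simply checks that the exclusions and percentages reconcile.

```python
# Reproduce the case-accounting arithmetic reported above.
# All figures are taken directly from the report's text.
total = 175_245                 # all offenders sentenced, FY 1999-2001
missing_departure = 11_584      # no departure information
guidelines_inapplicable = 1_046
missing_offender_type = 167     # drug vs. non-drug unknown
missing_both = 358              # both departure status and type unknown

analyzed = total - (missing_departure + guidelines_inapplicable
                    + missing_offender_type + missing_both)
print(analyzed)                           # 162,090 sentences remain
print(round(100 * analyzed / total, 1))   # 92.5 percent of all sentences
print(round(100 * 72_283 / total))        # 41 percent were drug offenses
```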
We discussed any anomalies we found from these tests with knowledgeable USSC staff. On the basis of our tests and discussions with USSC officials, we determined that the data were sufficiently accurate for our reporting objectives. We defined sentences that fell above or below an applicable guideline range in accordance with USSC’s definition of “departures”—sentences imposed that fall outside the sentencing guidelines range established by the court. In USSC’s database, sentences are coded into five categories: no departure, upward departure, downward departure, substantial assistance, and inapplicable. We treated sentences coded as “substantial assistance” as those that fell below the guidelines range due to substantial assistance, which reflects prosecutorial discretion. For sentences that fell below the guideline range for reasons other than substantial assistance, generally attributed to judicial discretion, we used those sentences coded as “downward departures.” See appendix IV for a description of other reasons, including the government’s early disposition or “fast-track” programs, cited by judges for downward departures. Sentences for which departure information was not available or was coded “inapplicable” were excluded from our analysis. We defined sentences that fell below an applicable mandatory minimum using USSC’s recorded information on sentence length. For convictions where a mandatory minimum was recorded, sentences with recorded lengths shorter than the time stipulated by the mandatory minimum were defined as “falling below the mandatory minimum.” We identified 41,861 drug sentences that carried a mandatory minimum and had valid sentence length data. Of those sentences we designated as falling below an applicable mandatory minimum, we identified those that involved a substantial assistance motion and those that fell below the mandatory minimum for other reasons, such as the safety valve.
If a sentence fell below the mandatory minimum and involved a substantial assistance motion, we interpreted it as a “substantial assistance sentence” that fell below the mandatory minimum due to prosecutorial discretion. If a sentence otherwise fell below the minimum, we interpreted it as falling below for “other reasons.” Most of these sentences (9,384, or 87 percent) involved offenders who qualified for the safety valve provision, which allows judges to impose sentences below the mandatory minimum. For the remainder, the data did not indicate that the safety valve was involved, which may be the result of coding errors or insufficient available data. We also reviewed the types of documents USSC staff used to identify departures, the reasons for those departures, and the potential effect of missing or unclear documentation on the interpretation of the departure data in USSC’s database by district. We also interviewed officials at USSC and the Administrative Office of the U.S. Courts (AOUSC) and the Chair of the Judicial Conference Committee on Criminal Law. We analyzed sentencing data using both descriptive statistics and multivariate analytic methods. For fiscal years 1999-2001, we used USSC’s data to identify for each circuit and district the total number and percentage of sentences that fell above or within an applicable guideline range, and below a guideline range due to substantial assistance or for other reasons. We also identified all sentences that fell below an applicable mandatory minimum due to substantial assistance or for other reasons for each circuit and district. We provide these numbers and percentages in appendix II. The simple differences in the percentage of drug sentences that fall below the guidelines range or below the mandatory minimum may not, without some adjustment, provide an appropriate basis for making comparisons across circuits and districts.
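The classification rules described above for sentences whose offense of conviction carried a mandatory minimum can be sketched as a small function. The field names below are hypothetical and simplified; USSC’s actual variable names and coding conventions differ.

```python
# A sketch of the classification rules described above for sentences
# whose offense of conviction carried a mandatory minimum. The field
# names are hypothetical; USSC's actual variables differ.

def classify(sentence_months, mandatory_min_months, substantial_assistance):
    """Classify a mandatory minimum sentence per the rules above."""
    if sentence_months >= mandatory_min_months:
        return "at or above mandatory minimum"
    if substantial_assistance:
        return "below minimum: substantial assistance"
    return "below minimum: other reasons (e.g., safety valve)"

print(classify(120, 120, False))  # at or above mandatory minimum
print(classify(57, 120, True))    # below minimum: substantial assistance
print(classify(34, 60, False))    # below minimum: other reasons (e.g., safety valve)
```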
Characteristics of the offenses and offenders sentenced can vary from one circuit or district to the next, and these differences may affect the number or percentage of sentences that fall below applicable guideline ranges or a mandatory minimum. Judges in some circuits or districts, for example, may sentence a greater proportion of offenders who possessed and shared information about the crime that assisted the government in the investigation or prosecution of others or whose offenses were less serious. Differences in these characteristics might produce differences in sentences that have little to do with the exercise of discretion. Therefore, the unadjusted differences in the percentage of sentences below the guidelines range or mandatory minimum might result from judges sentencing different offenders, rather than from judges sentencing offenders differently. Table 3 shows the offense and offender characteristics we considered in the multivariate analyses we conducted to adjust for such differences and to re-estimate differences across circuits and districts after taking them into account. It also provides the numbers and percentages of all drug offenders or drug offenses that possessed each of these characteristics. Our primary focus in this report was understanding how departure sentences vary across circuits and districts. Some of this variability across circuits and districts in the percentages of substantial assistance departures and other downward departures in the sentencing of drug offenders results from differences in the characteristics of offenses and offenders sentenced across circuits and districts. Moreover, the prosecution has sole authority to initiate a downward departure for substantial assistance, and all offenders are potentially eligible for such consideration.
If an offender receives a substantial assistance departure, USSC codes the case as a substantial assistance departure and does not record any other downward departure that the judge may have granted in that particular case. Because of this coding convention, the percentage of other downward departures is partly a function of the percentage of downward departures for substantial assistance, which result from prosecutorial motions. To understand how these percentages are derived, it is useful to consider the following two hypothetical districts and the numbers of sentences of each type in each district shown in table 4. The prosecution makes its selection for substantial assistance motions after screening the total universe of 100 offenders sentenced in each district. In our hypothetical example, the prosecutor offered and the court accepted substantial assistance motions for 20 percent of the 100 offenders in district A and 50 percent of the 100 offenders in district B, or 30 percentage points fewer in district A. Because USSC’s coding convention distinguishes substantial assistance cases from other downward departures, the universe of offenders who could be coded as receiving other downward departures equals the number of offenders who did not receive substantial assistance departures. In district A this would be 80 offenders (100 minus 20 substantial assistance departures), and in district B it would be 50 (100 minus 50 substantial assistance departures). Using this universe of offenders for our calculation, we would conclude that district A involves 10 percentage points fewer other downward departures than district B (40/80 = 50 percent versus 30/50 = 60 percent). While we offer percentages in some of the following tables that, following standard procedures, are based on the total number of offenders, we also use odds and odds ratios to describe the likelihoods of sentences departing.
These odds and odds ratios have the advantage of using the appropriate universe of offenders in making comparisons across circuits and districts. To make a fair comparison of sentencing patterns across circuits and districts, we used logistic regression analysis to estimate the likelihood that sentences would fall below an applicable guideline range or a mandatory minimum, before and after adjusting for differences in offender and offense characteristics across circuits and districts. Our adjusted estimates of the differences in likelihoods across circuits and districts controlled for the following offender and offense characteristics: Offender: gender, race, education, citizenship, and criminal history category score. Offense: type of drug involved; type and severity of offense; whether the offense was eligible for a mandatory minimum sentence; whether a gun was involved in commission of the offense; whether the defendant was convicted after trial or entered a guilty plea; and whether the safety valve was applied. Because they are somewhat more amenable to adjustment for offense and offender characteristics, we use odds and odds ratios, rather than percentages and percentage differences, to estimate the likelihood of sentences falling below a guideline range and the variability in those likelihoods across circuits and districts. We first calculated the odds on a sentence falling below a guideline range due to substantial assistance among all sentences, and then calculated the odds on other downward departures among those sentences that did not involve departures for substantial assistance. In both cases, odds were compared across circuits and districts by taking their ratios.
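How logistic regression yields an adjusted odds ratio can be illustrated with a small self-contained sketch: exponentiating the fitted coefficient on a jurisdiction indicator gives the adjusted odds ratio for that jurisdiction. All counts below are invented, the model uses a single covariate rather than the full list above, and the gradient-ascent fit is a dependency-free stand-in for the statistical software actually used; this is not the report’s code.

```python
import math

# Invented cell counts: (district_B, high_severity, departures, non_departures)
cells = [
    (0, 0, 10, 10),  # district A, low severity
    (0, 1, 10, 40),  # district A, high severity
    (1, 0, 30, 20),  # district B, low severity
    (1, 1, 3, 8),    # district B, high severity
]

# Expand the cell counts into one observation per sentence.
X, y = [], []
for dist, sev, dep, no_dep in cells:
    for outcome, count in ((1, dep), (0, no_dep)):
        X.extend([(1.0, dist, sev)] * count)
        y.extend([outcome] * count)
n = len(y)

# Unadjusted (crude) odds ratio, pooling over severity.
b_dep = sum(yi for xi, yi in zip(X, y) if xi[1] == 1)
b_tot = sum(1 for xi in X if xi[1] == 1)
a_dep = sum(yi for xi, yi in zip(X, y) if xi[1] == 0)
a_tot = n - b_tot
crude_or = (b_dep / (b_tot - b_dep)) / (a_dep / (a_tot - a_dep))

# Fit logit(p) = b0 + b1*district_B + b2*high_severity by gradient
# ascent on the log-likelihood (slow but dependency-free).
b = [0.0, 0.0, 0.0]
for _ in range(5000):
    grad = [0.0, 0.0, 0.0]
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(bj * xj for bj, xj in zip(b, xi))))
        for j in range(3):
            grad[j] += (yi - p) * xi[j]
    b = [bj + 0.5 * gj / n for bj, gj in zip(b, grad)]

adjusted_or = math.exp(b[1])  # district coefficient -> adjusted odds ratio
print(f"crude odds ratio (B vs. A):    {crude_or:.2f}")
print(f"adjusted odds ratio (B vs. A): {adjusted_or:.2f}")
```

With these invented counts, the crude comparison suggests district B is nearly 3 times as likely to depart, but holding severity constant the adjusted odds ratio is about 1.5: part of the crude difference reflects district B’s caseload mix rather than different treatment of similar offenders.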
In our simple two-district example above, the odds on substantial assistance departures in districts A and B would be 20/80 = 0.25 and 50/50 = 1.0, respectively, and the odds ratio of 1.0/0.25 = 4.0 indicates that substantial assistance departures were 4 times as likely in district B as in district A. The odds on other downward departures, excluding the substantial assistance departures, would be 40/40 = 1.0 in district A and 30/20 = 1.5 in district B, and the ratio of 1.5/1.0 = 1.5 indicates that other downward departures were 1.5 times as likely in district B as in district A. We conducted four regression analyses. First, we conducted two regression analyses that estimated the likelihoods that drug sentences fell below an applicable guideline range due to either prosecutors’ substantial assistance motions or other reasons, before and after controlling for offense and offender characteristics. Second, we conducted two regression analyses that estimated the likelihoods that drug sentences carrying a mandatory minimum fell below that otherwise applicable minimum due to either substantial assistance or other reasons, before and after controlling for offense and offender characteristics. Our work was limited to drug sentences imposed during fiscal years 1999-2001 and excluded drug cases in that period that lacked information on whether the case departed from the guidelines (4 percent of all drug cases). We also excluded cases for which there was insufficient information to indicate whether the sentence fell below a mandatory minimum (0.7 percent of all mandatory minimum drug cases), and we were unable to determine whether the safety valve provision was used in 7 percent of the cases that fell below a mandatory minimum without involving substantial assistance.
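The odds and odds ratios from the two-district example above can be reproduced in a few lines; the counts are the hypothetical ones from table 4, and the script simply restates the arithmetic in the text.

```python
# Check the odds arithmetic in the two-district example above (table 4).
a_sa, a_other, a_none = 20, 40, 40   # district A: 100 sentences
b_sa, b_other, b_none = 50, 30, 20   # district B: 100 sentences

# Odds on a substantial assistance (SA) departure among all sentences.
odds_sa_a = a_sa / (a_other + a_none)   # 20/80 = 0.25
odds_sa_b = b_sa / (b_other + b_none)   # 50/50 = 1.0
sa_ratio = odds_sa_b / odds_sa_a
print(sa_ratio)    # 4.0: SA departures 4 times as likely in district B

# Odds on an other downward departure, excluding SA departures.
odds_other_a = a_other / a_none         # 40/40 = 1.0
odds_other_b = b_other / b_none         # 30/20 = 1.5
other_ratio = odds_other_b / odds_other_a
print(other_ratio)  # 1.5: other departures 1.5 times as likely in district B
```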
Further, our ability to control for differences in the likelihood of sentences departing from the guidelines, or falling below a mandatory minimum, was restricted to a relatively small number of characteristics for which we had data and was affected by the amount of missing data on those characteristics. Empirical data on all factors that could influence sentencing were not available, so an analysis that could fully explain why sentences varied was not possible. Our analyses were also limited to determining whether sentences fell below a guideline range or a mandatory minimum; we did not investigate whether there were differences across circuits or districts in how far below a guideline range minimum or a mandatory minimum the sentences fell. Nationwide, the average (mean) minimum sentence length under the guidelines for drug sentences that departed downward for substantial assistance was 108 months (about 9 years). Those sentences were reduced as a result of the substantial assistance motion by an average of 53 months, and the resulting sentence was, on average, 49 percent of the lowest sentence drug offenders otherwise would have received under the guidelines. The average minimum sentence length under the guidelines for drug sentences that departed downward for reasons other than substantial assistance was 60 months (5 years). Those sentences were reduced by an average of 22 months, and the resulting sentence was, on average, 57 percent of the lowest sentence drug offenders otherwise would have received under the guidelines. Nearly all of the mandatory minimum drug sentences were for 5 years (48 percent) or 10 years (49 percent). The 5-year mandatory minimum sentences that were reduced for substantial assistance were reduced by an average of 33 months, resulting in an average sentence that was 45 percent of the mandatory minimum.
The sentences lowered for other reasons (primarily the safety valve) that would otherwise be subject to a 5-year mandatory minimum were reduced by an average of 26 months, resulting in an average sentence that was 57 percent of the mandatory minimum. The 10-year mandatory minimum sentences that were reduced for substantial assistance were reduced by an average of 63 months, resulting in an average sentence that was 47 percent of the mandatory minimum. The sentences lowered for other reasons (primarily the safety valve) that would otherwise be subject to a 10-year mandatory minimum were reduced by an average of 52 months, resulting in an average sentence that was 57 percent of the mandatory minimum. We also did not attempt to determine whether, across circuits and districts, sentences that fell within the guideline range fell more frequently at the lower or higher end of the range. However, overall, 72 percent of drug sentences that were within the guidelines range and did not depart were at the bottom of the range. This appendix provides information on the percentage of federal drug sentences that fall below an applicable guideline range or an otherwise applicable mandatory minimum. We show in tables 5 and 6 the variability across circuits and districts in the percentages of drug sentences that were (1) above the guidelines range, (2) within the guidelines range, (3) below the range due to substantial assistance, and (4) below the range for other reasons. We then show in tables 7 and 8 the variability across circuits and districts in the percentages of mandatory minimum drug cases that resulted in sentences (1) at or above a mandatory minimum sentence, (2) below a mandatory minimum due to prosecutorial motions for substantial assistance, and (3) below the mandatory minimum for other reasons.
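The remaining-sentence percentages quoted above follow directly from the average reductions and the 5-year (60-month) and 10-year (120-month) minimums. A back-of-envelope sketch (note that the report's figures average case-level percentages, so small rounding differences from this simple calculation are possible):

```python
def remaining_share(minimum_months, avg_reduction_months):
    """Resulting sentence as a percentage of the mandatory minimum."""
    return (minimum_months - avg_reduction_months) / minimum_months * 100

# 5-year (60-month) mandatory minimums
print(round(remaining_share(60, 33)))      # 45: substantial assistance
print(round(remaining_share(60, 26)))      # 57: other reasons (safety valve)

# 10-year (120-month) mandatory minimums
print(round(remaining_share(120, 63), 1))  # 47.5 (report: 47 percent)
print(round(remaining_share(120, 52), 1))  # 56.7 (report: 57 percent)
```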
Table 5 shows that the percentage of upward departures from the sentencing guidelines for drug cases in fiscal years 1999–2001 was similar and exceedingly small across all 12 circuits. However, the percentages of within-range sentences and downward departures varied substantially across circuits. The percentage of all drug sentences that were within the guidelines range varied from 34 percent in the Ninth Circuit to 69 percent in the First and Fifth Circuits. The percentage of drug sentences that involved downward departures for substantial assistance varied from 18 percent in the Ninth Circuit to 45 percent in the Third Circuit, and the percentage that resulted in downward departures for other reasons varied from 4 percent in the Fourth Circuit to 47 percent in the Ninth Circuit. The Ninth Circuit was the only circuit in which the percentage of departures for other reasons exceeded 20 percent and the only circuit in which departures for other reasons were more common than departures for substantial assistance. Table 6 reveals that the percentages of sentences within the guidelines range and the percentages of sentences departing downward from it notably differed across the 94 districts, even in some cases among districts within the same circuit. There were 6 districts in which the percentage of sentences departing upward from the guidelines exceeded 1 percent of all cases—Wisconsin Western (1.3 percent), Iowa Northern (1.3 percent), California Northern (1.5 percent), Guam (1.2 percent), Oklahoma Eastern (1.4 percent), and Georgia Middle (1.1 percent). The percentage of sentences within the guidelines range varied substantially, from 17 percent in the California Southern District to 90 percent in the Illinois Southern District.
Fewer than 10 percent of the sentences departed downward for substantial assistance in Puerto Rico (7 percent), Rhode Island (9 percent), Virginia Eastern (8 percent), West Virginia Northern (9 percent), Illinois Southern (6 percent), and Oklahoma Eastern (5 percent). At the same time, the percentage of cases departing downward for substantial assistance exceeded 50 percent in 16 districts and was highest in the Northern Mariana Islands (71 percent), New York Northern (68 percent), North Carolina Western (62 percent), and Idaho (62 percent) Districts. Sentences departing downward for other reasons represented only 3 percent or less of all sentences in 24 districts but over 20 percent of the sentences in 10 districts; these other downward departures were especially common in the New York Eastern (32 percent), Oklahoma Eastern (39 percent), Arizona (58 percent), and California Southern (70 percent) Districts. While the percentages of other downward departures were fairly similar and involved 10 percent or fewer of all cases in the various districts in the Sixth and Seventh Circuits, the range in the percentage of other downward departures was sizable across the districts in the Second Circuit (5 percent to 32 percent), Ninth Circuit (none to 70 percent), and Tenth Circuit (3 percent to 39 percent). Table 7 shows the differences across circuits in the percentages of mandatory minimum drug sentences in fiscal years 1999-2001 that were at or above an applicable mandatory minimum, below an otherwise applicable mandatory minimum due to substantial assistance, and below an otherwise applicable mandatory minimum for other reasons. The percentage of mandatory minimum sentences that were at or above an applicable mandatory minimum sentence ranged from 35 percent in the Ninth Circuit to 64 percent in the Fourth Circuit.
The percentage of sentences that fell below an otherwise applicable mandatory minimum due to substantial assistance motions ranged from 19 percent in the First Circuit to 40 percent in the Third Circuit. The percentage of mandatory minimum sentences that fell below an otherwise applicable mandatory minimum sentence for other reasons ranged from 12 percent in the Fourth Circuit to 38 percent in the Ninth Circuit. Table 8 provides these same percentages, classified by districts rather than circuits, and shows that variability in the sentencing of mandatory minimum offenders is considerable across the 94 districts. The percentage of sentences falling below an otherwise applicable mandatory minimum for substantial assistance reasons was very different across districts, ranging from less than 10 percent of all mandatory minimum sentences in 7 districts to over 50 percent in 9 districts. The percentage of sentences falling below an otherwise applicable mandatory minimum for other reasons also varied greatly across districts, from less than 10 percent of all mandatory minimum sentences in 11 districts to 50 percent or more in 3 districts. This appendix provides odds and odds ratios to describe the differences across circuits and districts in sentences falling below a guideline range or an otherwise applicable mandatory minimum for substantial assistance and other reasons, both before and after controlling for differences in offender and offense characteristics. In the left columns of tables 9 and 10, we show the odds on substantial assistance departures across circuits and districts and ratios indicating differences across circuits and districts, before and after we adjust for characteristics of offenses and offenders. In the right columns of tables 9 and 10, we show the odds on other downward departures across circuits and districts and ratios indicating differences between them, before and after we adjust for characteristics of offenses and offenders. 
In the comparisons across circuits, we used the Eighth Circuit as the reference category, so the odds ratios reflect how much more or less likely other circuits were than that circuit to depart in sentencing offenders. In comparisons across districts, the Minnesota District was used as the reference category. The offense and offender characteristics we controlled for were described earlier in appendix I. The ratios that estimate differences before and after adjusting for these characteristics were derived from logistic regression models. We focus on adjusted ratios in the following discussion, since they provide our best estimates of differences across circuits and districts after taking into account the differences in the drug cases handled across jurisdictions. Table 9 shows that both the odds on substantial assistance departures and the odds on other downward departures varied substantially across circuits. After adjusting for differences across circuits in offense and offender characteristics, the odds on substantial assistance departures were significantly greater in 3 circuits than in the Eighth Circuit. In the Third Circuit, for example, substantial assistance departures were 2.2 times as likely as in the Eighth Circuit. Four circuits were not significantly different from the Eighth Circuit in terms of the likelihood of sentences departing for substantial assistance, and in the remaining 4 circuits substantial assistance departures were significantly less likely. In the First Circuit, for example, substantial assistance departures were less likely by a factor of 0.64, or 36 percent less likely, than in the Eighth Circuit. The fact that some circuits are less likely than the Eighth Circuit while others are more likely than the Eighth Circuit to depart for substantial assistance implies that some differences between other circuits are larger than those explicitly indicated by these ratios.
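The adjusted ratios in the tables were estimated from logistic regression models fit to USSC's case-level data, which cannot be reproduced here. As an illustration of the mechanics only, the sketch below fits a logistic regression by Newton-Raphson to hypothetical data for two districts and recovers the odds ratio as the exponentiated coefficient on the district indicator; with real data, additional columns for offense and offender characteristics would yield adjusted ratios:

```python
import numpy as np

# Hypothetical data: district A, 20 departures in 100 cases;
# district B, 50 departures in 100 cases.
in_b = np.repeat([0.0, 1.0], 100)                    # district indicator
departed = np.concatenate([np.repeat([1.0, 0.0], [20, 80]),
                           np.repeat([1.0, 0.0], [50, 50])])
X = np.column_stack([np.ones_like(in_b), in_b])      # intercept + indicator

beta = np.zeros(2)
for _ in range(25):                                   # Newton-Raphson steps
    p = 1.0 / (1.0 + np.exp(-X @ beta))              # fitted probabilities
    grad = X.T @ (departed - p)                      # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])        # information matrix
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])                         # exponentiated coefficient
print(round(odds_ratio, 4))  # 4.0: matches (50/50) / (20/80)
```

With a single binary predictor the fitted model is saturated, so the exponentiated coefficient exactly equals the empirical odds ratio from the two-district example.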
For example, these ratios imply that substantial assistance departures in the Third Circuit are 2.2/0.64=3.4 times as likely as in the First Circuit. Table 9 also shows that for those sentences that do not involve substantial assistance departures (and again after adjusting for offense and offender characteristics), other downward departures are significantly more likely in 4 circuits than in the Eighth Circuit, significantly less likely in 6 circuits than in the Eighth Circuit, and no different in the remaining circuit. The fact that other downward departures are 6.87 times more likely in the Ninth Circuit than in the Eighth Circuit, but less likely by a factor of 0.37 in the Fourth Circuit than in the Eighth Circuit, implies that such departures are 6.87/0.37=18.6, or about 19, times as likely in the Ninth Circuit as in the Fourth Circuit. Table 10 shows that both the adjusted odds ratios on substantial assistance departures and other downward departures also varied substantially and significantly across districts. Substantial assistance departures were significantly more likely in 41 districts than in the Minnesota District. In the small Northern Mariana Islands District, for example, substantial assistance departures were 10 times more likely than in the Minnesota District, and in the large New York Northern District, they were 5 times more likely. Twenty-three districts were not significantly different from the Minnesota District in terms of the likelihood of sentences departing for substantial assistance, and in the remaining 29 districts substantial assistance departures were significantly less likely. In the Illinois Southern District, for example, substantial assistance departures were less likely by a factor of 0.11, which implies that the likelihood of substantial assistance departures was 9 times higher in the Minnesota District than there. Other districts, these odds imply, were even more disparate from one another.
For example, these ratios imply that substantial assistance departures in the New York Northern District were 5.5/0.11=49.5, or about 50, times more likely than in the Illinois Southern District. Table 10 also shows that for those sentences that do not involve substantial assistance departures, other downward departures are significantly more likely in 7 districts than in the Minnesota District, significantly less likely in 62 districts than in the Minnesota District, and no different in the remaining 23 districts. The fact that other downward departures are 15 times more likely in the California Southern District than in the Minnesota District, but less likely by a factor of 0.09 in the South Carolina District than in the Minnesota District, implies that such departures are 15/0.09 = 167 times as likely in the California Southern District as in the South Carolina District. Tables 11 and 12 pertain to mandatory minimum sentences and show that substantial and often significant variation in the likelihood of sentences falling below an otherwise applicable mandatory minimum exists even after controls for differences in offense and offender characteristics across circuits and districts. If we focus first on the adjusted ratios in table 11, which estimate the differences among circuits after controls, we find that there were some circuits in which the odds on sentences falling below an otherwise applicable mandatory minimum due to substantial assistance were significantly higher than in the Eighth Circuit, and others in which those odds were significantly lower. The same is true of the likelihood of sentences falling below an otherwise applicable mandatory minimum for reasons other than substantial assistance.
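Because each adjusted ratio is expressed relative to the same reference category, the implied ratio between any two other jurisdictions is simply the quotient of their published ratios. A sketch of that arithmetic using ratios quoted above:

```python
def implied_ratio(ratio_x, ratio_y):
    """Ratio of jurisdiction X to jurisdiction Y when both are
    expressed relative to the same reference category."""
    return ratio_x / ratio_y

# Circuits (reference: Eighth Circuit)
print(round(implied_ratio(6.87, 0.37), 1))  # 18.6: Ninth vs. Fourth

# Districts (reference: Minnesota District)
print(round(implied_ratio(5.5, 0.11)))      # 50: N.Y. Northern vs. Ill. Southern
print(round(implied_ratio(15, 0.09)))       # 167: Cal. Southern vs. S. Carolina
```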
The adjusted ratios in table 11 suggest that the biggest difference in the likelihood of mandatory minimum sentences falling below an otherwise applicable mandatory minimum due to substantial assistance involved the Third and First Circuits (such sentences were 2.29/0.67=3.4 times more likely in the former circuit than in the latter), while the biggest difference in the likelihood of mandatory minimum sentences falling below an otherwise applicable mandatory minimum for other reasons, such as the safety valve, involved the D.C. Circuit versus the Fourth and Sixth Circuits (such sentences were 2.55/0.94=2.7 times more likely in the former circuit than in the latter two). Table 12 shows, similarly, that judges in many districts were much more likely than in the Minnesota District to issue sentences below a mandatory minimum to offenders facing a mandatory minimum, both for reasons of substantial assistance and for other reasons; at the same time, judges in many other districts were less likely to do so. Overall, the data the U.S. Sentencing Commission (USSC) has received from district courts and judges were generally sufficient for our analyses of downward departures and mandatory minimum sentences across circuits and for most districts. Missing data due to missing sentencing documents or information posed few limitations for our analysis. However, opportunities for improvement exist. Under the authority of the Sentencing Reform Act of 1984, USSC required courts to forward to it the following five sentencing documents in every guidelines case: the Judgment and Commitment Order (J&C); the Statement of Reasons (SOR); the Pre-sentence Report (PSR); any written plea agreements, if applicable; and all indictments or other charging documents. The PROTECT Act of 2003 codifies this data collection requirement.
Under the PROTECT Act, courts are to send to USSC a “Report of Sentence” enclosing the required sentencing documents within 30 days of a judgment, and the Chief Judge in every district is to ensure that their courts do so. Of the five sentencing documents submitted by district courts, USSC officials told us they rely primarily on the J&C, SOR, and PSR to obtain the sentencing information that USSC staff code into USSC’s database. From the J&C, USSC obtains data on the sentence, including the number of months of any imprisonment, the statute of conviction, and whether any mandatory minimum sentence applied. USSC officials also said that they rely almost exclusively on the SOR to obtain data on the basis for the sentence, such as whether the sentence imposed fell within or outside the applicable sentencing guidelines range as determined by the court, the reason(s) for any departure, and whether a substantial assistance motion or safety valve adjustment was used. If the SOR is missing, USSC coding procedures have required document analysts to record the departure status as missing, although other documents, such as the plea agreement, may have information that indicates whether and why the sentence departed. USSC is initiating some changes in its coding procedures, as discussed below. From the PSR, which is drafted by a district probation officer, USSC obtains demographic and other background information about offenders, an initial sentencing recommendation according to the guidelines, and other sentencing information, such as whether the offense of conviction had a mandatory minimum (should this information not be noted in the J&C) and whether the safety valve could potentially be applied (in certain limited circumstances where this information has not been recorded in a SOR). Our analysis shows that district courts provided these five sentencing documents to USSC for the great majority of drug sentences imposed in fiscal years 1999-2001.
Of 72,283 drug sentences imposed during this period, district courts submitted between 96 and 99 percent of the three key sentencing documents—the J&C (99 percent), SOR (96 percent), and PSR (98 percent)—from which USSC obtains sentencing data. According to USSC data, a lower percentage of plea agreements (89 percent) and indictments (87 percent) were submitted during this time period. During the period of our review, USSC did not primarily rely on these two documents for departure information. Table 13 shows, by circuit, the percentage of each type of sentencing document USSC did not receive in fiscal years 1999-2001. Among the 12 circuits, the rate of missing SORs—the principal document used to determine the reason for a sentencing departure—ranged from less than 1 percent to about 7 percent. Two of the 4 circuits in which the highest numbers of drug sentences were imposed were also missing the highest percentages of their SORs—the Ninth Circuit at 6.6 percent and the Fourth Circuit at 7.4 percent. A circuit’s average can mask wide differences among the districts within the circuit. For example, the percentages of missing SORs among districts in the Ninth Circuit ranged from less than 1 percent to 58 percent and in the Fourth Circuit from less than 1 percent to 20 percent. USSC reviews the documents it receives from the district courts and annually sends a letter to each district court identifying the cases in which documents appear to be missing. Additionally, in its annual report, USSC discloses the overall document submission rate for all criminal cases for the J&C, SOR, and PSR documents. USSC also attempts to identify guidelines cases for which the courts may not have submitted any sentencing documents. By linking data from a database maintained by AOUSC with the data on cases in its own database, USSC develops a list of cases for which it has not yet received documentation.
USSC sends this list of cases to the relevant district courts and asks them to review the list and forward any documents USSC should have received. In addition to missing sentencing documents, the documents USSC received in fiscal years 1999-2001 had missing information or information that was difficult to interpret. As shown in table 14, the percentage of sentences missing the information needed to determine departure status varied among the circuits for drug sentences imposed in fiscal years 1999-2001. In addition, for 4 percent to 15 percent of sentences, information was missing on whether the safety valve was used as the basis for sentencing below a mandatory minimum. Nationally, of the 72,283 federal drug sentences imposed in fiscal years 1999-2001, 3,004 (4 percent) were coded as missing information necessary to determine whether the sentence departed from an applicable guideline range. Of these, 2,118 sentences were missing information because the SOR had not been received, and for 570, the SOR was received but did not include departure information. Missing or unclear data also limited our ability to determine when the safety valve was used as the basis for sentencing below an otherwise applicable mandatory minimum. For example, in our preliminary analysis, we found that of the 11,256 federal drug sentences for which the offense of conviction carried a mandatory minimum and fell below that minimum, about 1,600 (14 percent) were coded by USSC as falling below the applicable mandatory minimum but not involving either the safety valve or substantial assistance. We discussed this issue with USSC. After reviewing the underlying documents used for coding these 1,600 sentences, USSC determined that over 900 sentences were miscoded. These miscoded sentences were recoded in a variety of ways: some were coded as involving the safety valve, some as involving substantial assistance, some as having a changed drug quantity that affected the applicable mandatory minimum, and some as missing safety valve information.
USSC did not recode 681 sentences; these sentences remained coded as falling below a mandatory minimum but involving neither the safety valve nor substantial assistance. In addition, safety valve information was determined to be missing from 770 sentences for which the offense of conviction carried and fell below a mandatory minimum. A USSC official said that there is no specific prompt on the SOR asking for information on the application of the safety valve or whether the offense of conviction carried a mandatory minimum. On the basis of our analysis, missing or incomplete sentencing information is unlikely to affect analyses nationally or by circuit but could affect the analysis of departures in districts where the missing documents or information are concentrated. Missing or incomplete sentencing information may also affect USSC’s records for individual judges and thus USSC’s ability to provide accurate judge-specific sentencing analysis were Congress to request this information under the auspices of the PROTECT Act. USSC officials told us that they have not generally followed up with district courts to obtain information that is missing from submitted documents or is unclear (e.g., whether the safety valve provision was the basis for a sentence below an applicable mandatory minimum). USSC staff do not use information from one document to substitute for missing or unclear information in another document. As a result of coding issues we identified during this review, USSC plans to implement new quality control and review procedures for sentences where information on the SOR is missing, incomplete, or unclear.
These include identifying common errors for coding staff, using technology to develop automatic edit checks for apparently contradictory coding information for a sentence (e.g., those below an applicable mandatory minimum whose reason for departure is not substantial assistance or the safety valve), and having a staff attorney review sentences in which the coding supervisor is unable to determine the appropriate coding. Officials from USSC, the AOUSC, and the Judicial Conference Committee on Criminal Law cited several reasons that sentencing documents or information on sentencing documents were missing. First, USSC and AOUSC officials told us that some judges do not provide all the documents, in part because judges may be unclear whether documents under court seal or that pertain to individuals in the federal witness protection program are to be forwarded to USSC. Second, USSC relies almost exclusively on the SOR to determine whether the sentence departed, met a mandatory minimum, or involved substantial assistance. If the SOR is missing, USSC’s coding procedures require document analysts to record the departure status as missing, even if other documents, such as the plea agreement, suggest that a departure may have been recommended by the government. As a result, incomplete information prevents USSC from collecting some sentencing data, as illustrated below by two examples drawn from drug sentences imposed during fiscal years 1999-2001: In one case, the SOR did not indicate the reason the court sentenced the offender to 97 months—a sentence below the applicable 10-year (120 month) mandatory minimum.
Without this information on the SOR, under the coding conventions used, USSC document analysts could not record substantial assistance as the reason that the sentence of 97 months fell below a mandatory minimum even though the plea agreement (prepared by the parties) and the PSR (prepared by the probation officer) indicated that a substantial assistance motion was to have been made. In another case, the SOR stated that the court was crediting the offender for time served but failed to state the specific amount of time being credited. Unable to determine the amount of time being credited, and thus the sentence length being imposed, USSC document analysts could not determine whether the sentence departed or met an applicable mandatory minimum. Third, judges report the information using different versions of the SOR form, which can make consistent interpretation more difficult. For example, some jurisdictions provide one-page, single-spaced narratives that report the sentence and, in rare cases, others provide a transcript of the sentencing hearing instead of an SOR. According to USSC officials, interpreting multiple forms that report sentencing information in different ways and in different locations complicates the process of coding sentencing data, such as departure status and use of the safety valve, and may lead to missing sentencing information. USSC officials stated that the single most effective step toward improving the completeness of the data the courts report, and USSC’s ability to code it, would be the increased use of a standard SOR. The Judicial Conference at its September 2003 meeting accepted revisions to the standard SOR. The Conference designated the revised form as the mechanism by which courts comply with the requirements of the PROTECT Act to report reasons for sentences to USSC.
The Committee plans to encourage judges to use it through education about the benefits of its use, but the Chair of the Committee stated that the Committee does not believe it has the authority to require the use of the new SOR. Officials from AOUSC and the Committee said they believe that with additional education judges will routinely use the new standard SOR, resulting in more useful and higher quality data reported to USSC. Last, according to officials from AOUSC and the Criminal Law Committee, judges and other court officials lack an awareness of how to complete the SORs with a level of detail that would allow USSC to collect sentencing information. The Committee official said that education for judges and other court officials is needed on how to properly complete the SOR. In addition, no feedback mechanism is in place to inform judges that information on the SOR was incomplete or unclear to USSC and that, therefore, cases are coded as missing sentencing information. Although USSC contacts the courts to request that missing sentencing documents be submitted, it does not provide a similar list of documents that contained information coded as missing. Without knowing which cases are coded as missing sentencing information, judges cannot clarify or complete information needed by USSC. While USSC and the Federal Judicial Center offer judges and other court officials programs and workshops on application of the guidelines, no education programs are provided on how to complete the SOR in ways that provide clear, complete information. Officials from USSC, AOUSC, and the Criminal Law Committee said that education on how to apply increasingly complex guidelines has been their focus, not educating judges and other officials to correctly complete the SOR.
Officials also said that in the future it would be possible to provide programs at judicial workshops or through the Federal Judicial Center that educate judges and other court officials on how to provide clear, complete reports on sentencing. The category “other downward departures,” generally thought to represent judicial discretion, may also reflect downward departures resulting from prosecutorial discretion and initiative. In this report we classified departures as either “substantial assistance” or “other downward” departures. Substantial assistance departures can be viewed as a measure of prosecutorial discretion because only the prosecutor has the authority to initiate and recommend to the court that an offender be given a reduced sentence for substantial assistance to the prosecution. Neither the judge nor defense counsel may do so. The remaining departures, “other downward departures,” are generally considered to be an indication of judicial discretion. AOUSC officials suggested, however, that the category “other downward departures” provides an imprecise measure of judicial discretion. For example, AOUSC officials noted that some departures classified in USSC’s database as other departures may actually arise from agreements, particularly plea bargains, that either were initiated or supported by the government. We did not confirm this statement with federal prosecutors. USSC records in its database up to three of the reasons judges provide for an other downward departure. According to USSC’s database for drug sentences in fiscal years 1999-2001, the first reason provided for an other downward departure in 18 percent of the sentences was the government’s fast track programs; in 16 percent, a plea agreement; and in 4 percent, deportation. Tables 15, 16, and 17 detail for drug sentences the number and percent of other downward departures associated with the first, second, and third reasons provided for those departures.
In addition to the persons named above, the following persons made key contributions to this report: William W. Crocker, III, Christine Davis, Barbara Hills, David Makoto Hudson, E. Anne Laffoon, William Sabol, Doug Sloane, Wendy Turenne.
Created in 1984, the United States Sentencing Commission (USSC) was charged with developing the federal sentencing guidelines to limit disparities in sentencing among offenders with similar criminal backgrounds found guilty of similar crimes. Judges determine a specific sentence based on an applicable sentencing guideline range, such as 57 to 71 months, provided in the guidelines. Judges may impose sentences that fall anywhere within the range, above it (upward departures), or below it (downward departures). For some offenses, Congress established mandatory minimum sentences. Judges may also sentence below the minimum in certain circumstances. We examined the differences in drug offense departures from sentencing guidelines and mandatory minimum sentences among federal courts and the documents the USSC used to record and analyze sentences. Generally, downward departures are defined as (1) substantial assistance departures, made at the prosecutor's request because the offender provided substantial assistance to the government; and (2) other downward departures made for other reasons, such as a plea agreement, a judge's consideration of mitigating factors, or early disposition, i.e., "fast track" programs initiated by prosecutors for low-level drug trafficking offenses. Of federal sentences for drug-related offenses in fiscal years 1999-2001, the majority (56 percent) was within applicable guideline ranges. Downward sentencing departures were more frequently due to prosecutors' substantial assistance motions (28 percent) than for any other reasons (16 percent). For federal drug sentences that carried a mandatory minimum term of imprisonment, more than half of the drug sentences imposed fell below a mandatory minimum. Of these, half fell below a minimum due to prosecutors' substantial assistance motions and half due to other reasons. 
After adjusting for differences in offense and offender characteristics among judicial circuits and districts, our analysis showed variations among certain circuits and districts in the likelihood an offender received a substantial assistance departure, other downward departure, or a sentence falling below a mandatory minimum. However, these variations did not necessarily indicate unwarranted sentencing departures or misapplication of the guidelines because data were not available to fully compare the offenders and offenses for which they were convicted. For drug sentences nationally, USSC receives 96 percent or more of the three key documents, including the statement of reasons (SOR), used to record sentence length and departures. For a small percentage of drug cases in USSC's database, information is missing, incomplete, or too difficult for USSC to interpret, principally affecting sentencing analyses in districts where the missing or incomplete data are most prevalent.
As described at the beginning of this report, DOD recognized the need for additional base closures and realignments following the 1995 closure round and made repeated efforts to gain congressional authorization for an additional closure round. Congress authorized an additional round for 2005 with the passage of the National Defense Authorization Act for Fiscal Year 2002. The 2002 Act essentially extended the authority of the Defense Base Closure and Realignment Act of 1990, which had authorized the 1991, 1993, and 1995 rounds, with some modifications for the 2005 base closure round. In a memorandum dated November 15, 2002, the Secretary of Defense issued initial guidance outlining goals and a leadership framework for the 2005 BRAC round. In doing so, he noted that “At a minimum, BRAC 2005 must eliminate excess physical capacity; the operation, sustainment and recapitalization of which diverts scarce resources from defense capability.” However, specific reduction goals were not established. At the same time, the Secretary’s guidance for the 2005 round depicted the round as focusing on more than the reduction of excess capacity. He said that “BRAC 2005 can make an even more profound contribution to transforming the Department by rationalizing our infrastructure with defense strategy.” He further noted that “A primary objective of BRAC 2005, in addition to realigning our base structure to meet our post-Cold War force structure, is to examine and implement opportunities for greater joint activity.” Toward that end, the Secretary indicated that organizationally the 2005 BRAC analysis would be two pronged. Joint cross-service teams would analyze common business-oriented functions, and the military departments would analyze service-unique functions. The Secretary of Defense established two senior groups to oversee and guide the BRAC 2005 process from a departmental perspective. 
The first was the Infrastructure Executive Council (IEC), which was designated the policy-making and oversight body for the entire process, and the second, a subordinate group, was the Infrastructure Steering Group (ISG), created to oversee the joint cross-service analyses and integrate that process with the military departments’ own service-unique analyses. Each of the military departments also established BRAC organizations, which had oversight from senior leaders. Likewise, each of the joint cross-service teams, under the purview of the ISG, was led by senior military or civilian officials, with representation from each of the services and relevant defense agencies. DOD’s BRAC leadership structure is shown in figure 1. DOD developed a draft set of 77 transformational options that, once approved, were expected to constitute a minimum analytical framework upon which the military departments and joint cross-service groups would conduct their respective BRAC analyses. Because of a lack of agreement among the services and OSD, the draft options were never formally approved, but they remained available for consideration by analytical teams and were referenced by some groups in support of various BRAC actions being considered. (See app. XV for a list of the draft transformational options.) To some extent, the analyses and recommendations of each of the services and joint cross-service groups were also influenced by various guiding principles or policy imperatives developed by the respective service or joint cross-service groups, such as the need to preserve a particular capability in a particular location. The legislation authorizing the 2005 BRAC round, enacted as part of the fiscal year 2002 Defense Authorization Act, required DOD to give priority to selection criteria dealing with military value and added elements of specificity to criteria previously used by DOD in prior BRAC rounds. Subsequently, The Ronald W.
Reagan National Defense Authorization Act for Fiscal Year 2005 codified the entire selection criteria and added the word “surge” to one previously used criterion related to potential future contingencies and mobilization efforts. In large measure, the final criteria closely followed the criteria DOD employed in prior rounds, with greater specificity added in some areas, as required by Congress. Figure 2 shows DOD’s selection criteria for 2005, with changes from BRAC 1995 denoted in bold. To ensure that the selection criteria were consistently applied, OSD established a common analytical framework to be used by each military service and joint cross-service group. Each service and group adapted this framework, in varying degrees, to its individual activities and functions in evaluating facilities and functions and identifying closure and realignment options. Despite the diversity of bases and cross-service functions analyzed, each of the groups was expected to first analyze capacity and military value of its respective facilities or functions, and then to identify and evaluate various closure and realignment scenarios and provide specific recommendations. Scenarios were derived from data analysis and transformational options, as well as from goals and objectives each group established for itself as it began its work. Figure 3 depicts the expected progression of that process. An initial part of the process involved an overall capacity analysis of specific locations or functions and subfunctions at specific locations. The analysis relied on data calls to obtain certified data to assess such factors as maximum potential capacity, current capacity, current usage, excess capacity, and capacity needed to meet surge requirements. The military value analysis consisted of assessments of operational and physical characteristics of each installation, or specific functions on an installation related to a specific joint cross-service group’s area of responsibility. 
These would include an installation’s or function’s current and future mission capabilities, physical condition, ability to accommodate future needs, and cost of operations. This analysis also relied on data calls to obtain certified data on the various attributes and metrics used to assess each of the four military value criteria and permit meaningful comparisons between like installations/facilities with reference to the collective military value selection criteria. DOD officials used these data to develop comparative military value scores for each installation/facility or for categories of facilities serving like functions. The scenario development and analysis phase focused on identifying various realignment and closure scenarios for further analysis. These scenarios were to be derived from consideration of the department’s 20-year force structure plan, capacity analysis, military value analysis, and transformational options; applicable guiding principles, objectives, or policy imperatives identified by individual military services or joint cross-service groups; and military judgment. Each component had available for its use an optimization or linear programming model that could combine the results of capacity and military value analyses and other information to derive scenarios and sets of alternatives. The model could be used to address varying policy imperatives or objectives, such as minimizing the number of sites, minimizing the amount of excess capacity, or maximizing the average military value. A BRAC review group could also direct variations that would, for example, eliminate as much excess capacity as possible while maintaining an average military value at least as high as the original set of sites. OSD policy guidance has historically specified that priority consideration be given to military value in making closure and realignment decisions, but that priority was specifically mandated by the legislation authorizing the 2005 BRAC round.
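The optimization model itself is not described in detail in this report. The following is a minimal Python sketch of the kind of objective it could be directed to pursue: choose the smallest set of sites that retains the required capacity while keeping the average military value at least as high as that of the original set. The site names, capacities, value scores, and capacity requirement are all hypothetical, and the brute-force search merely stands in for the actual linear-programming tool.

```python
from itertools import combinations

# Hypothetical installations: (name, capacity, military value score).
# The real BRAC model used certified data and a linear-programming
# solver; this brute-force search just illustrates the objective.
sites = [
    ("Base A", 40, 85),
    ("Base B", 30, 70),
    ("Base C", 25, 90),
    ("Base D", 20, 60),
    ("Base E", 15, 75),
]
required_capacity = 70  # capacity that must be retained (hypothetical)
orig_avg_value = sum(s[2] for s in sites) / len(sites)

best = None
for r in range(1, len(sites) + 1):
    for subset in combinations(sites, r):
        cap = sum(s[1] for s in subset)
        avg = sum(s[2] for s in subset) / len(subset)
        # Keep enough capacity and at least the original average
        # military value, while minimizing the number of sites kept.
        if cap >= required_capacity and avg >= orig_avg_value:
            if best is None or len(subset) < len(best):
                best = subset
    if best:
        break  # smallest qualifying subset size found first

print([s[0] for s in best])
```

With these made-up numbers, the smallest qualifying set keeps two of the five bases; a review group could tighten or relax the constraints, as the report describes, to explore alternative scenarios.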
At the same time, historic practice and the 2005 authorizing legislation both required consideration of additional issues included in selection criteria 5 through 8, detailed below:

Criterion 5—costs and savings: This criterion consists of measures of costs and savings and the payback periods associated with them. Each component assessed costs using the Cost of Base Realignment Actions (COBRA) model that was used in each of the BRAC rounds since 1988. Appendix XIII summarizes improvements that have been made to the model over time and more recently for the 2005 round.

Criterion 6—economic impact: This criterion measures the direct and indirect impacts of a BRAC action on employment in the communities affected by a closure or realignment. Appendix XIV provides a more complete description of how economic impact was assessed and the changes made to improve the assessment for this round.

Criterion 7—community infrastructure: Selection criterion 7 examines “the ability of the infrastructure of both the existing and potential receiving communities to support forces, missions, and personnel.” The services and joint cross-service groups considered information on demographics, childcare, cost of living, employment, education, housing, medical care, safety and crime, transportation, and public utilities of the communities impacted by a BRAC action.

Criterion 8—environmental impact: Selection criterion 8 assesses “the environmental impact, including the impact of costs related to potential environmental restoration, waste management, and environmental compliance activities” of closure and realignment recommendations. In considering this criterion, the services and joint cross-service groups focused mainly on potential environmental impacts while acknowledging, when appropriate, known environmental restoration costs associated with an installation recommended for closure or realignment. Waste management and environmental compliance costs were factored into criterion 5.
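The payback period under criterion 5 has a simple arithmetic core: the number of years of recurring savings needed to recoup the one-time implementation cost. The sketch below, with hypothetical dollar figures, shows the undiscounted version; COBRA's actual calculation discounts cash flows and phases costs and savings over the implementation period, so this is an illustration of the concept rather than the model's method.

```python
def payback_years(one_time_cost, annual_savings):
    """Years of recurring savings needed to recoup the one-time
    implementation cost (the 'payback period' used with criterion 5).
    Undiscounted for simplicity; COBRA itself discounts cash flows."""
    if annual_savings <= 0:
        return None  # the action never pays back
    return one_time_cost / annual_savings

# Hypothetical action: $120 million to implement, $30 million in
# net annual recurring savings.
print(payback_years(120.0, 30.0))  # -> 4.0 years
```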
However, under OSD policy guidance, environmental restoration costs were not considered in the cost and savings analyses for evaluating individual scenarios under criterion 5. DOD is obligated to restore contaminated sites on military bases regardless of whether they are closed, and such costs could be affected by reuse plans that cannot be known at this time but would be budgeted for at a later time when those plans and costs are better identified. Each of the military departments produced reports with closure and realignment recommendations, as did each of the joint cross-service groups, the results of which are summarized in appendixes III through XII. Figures 4 and 5 show, respectively, the 33 major closures and 30 major realignments recommended by DOD; major base closures are defined as those where plant replacement values exceed $100 million, and major base realignments as those with net losses of 400 or more military and civilian personnel. While the 2005 BRAC round, like earlier BRAC rounds, was chartered to focus on United States domestic bases, DOD separately had under way a review of overseas basing requirements that had implications for the domestic BRAC process. In a September 2004 report to Congress, the Under Secretary of Defense for Policy provided an update on DOD’s “global defense posture review.” It noted that, once completed, the changes stemming from the review would result in the most profound reordering of United States military forces overseas since the Korean War, as the current posture has remained largely unchanged since that conflict. The report noted that, over the next 10 years, up to 70,000 military personnel are planned to return to the United States, along with approximately 100,000 family members and civilian employees. It further noted that a net reduction of approximately 35 percent of overseas sites—bases, installations, and facilities—is planned.
DOD had indicated that the domestic BRAC process would be used in making decisions on where to relocate forces returning to the United States from overseas bases. Separately, Congress in 2003 mandated the creation of a special commission to evaluate, among other things, the current and proposed overseas basing structure of the United States military forces. The Commission’s observations are included in its May 2005 report. Among other things, the Commission cited the need for appropriate planning to ensure the availability of community infrastructure to support returning troops and to mitigate the impact on communities. The recommendations proposed by the Secretary of Defense would have varying degrees of success in achieving DOD’s BRAC 2005 goals of reducing infrastructure and achieving savings, furthering transformation objectives, and fostering joint activity among the military services. While DOD proposed a record number of closure and realignment actions, exceeding those in all prior BRAC rounds combined, many proposals focus on reserve component bases and relatively few on closing active bases. Projected savings are nearly as large as those of all prior BRAC rounds combined, but about 80 percent of the projected 20-year net present value savings (savings minus up-front investment costs) are derived from only 10 percent of the recommendations. While we believe the recommendations overall would achieve savings, up-front investment costs of about $24 billion are required to implement all recommendations to achieve DOD’s overall expected savings of nearly $50 billion over 20 years. Much of these savings is related to eliminating jobs currently held by military personnel; however, those eliminations are not likely to result in end-strength reductions, limiting the savings available for other purposes.
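The 20-year net present value figure nets roughly $24 billion of up-front investment against the discounted value of recurring savings. A simplified way to see the mechanics is to discount a flat savings stream and subtract the up-front cost. The 2.8 percent discount rate and the flat, immediately realized stream below are illustrative assumptions only; COBRA phases costs and savings in during the implementation period, which is one reason this rough sketch does not reproduce the report's figure exactly.

```python
def npv_savings(upfront_cost, annual_savings, years=20, rate=0.028):
    # Present value of a flat recurring-savings stream, minus the
    # up-front investment. The rate and flat stream are illustrative
    # assumptions, not COBRA's actual inputs.
    pv = sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))
    return pv - upfront_cost

# Report figures, in $ billions: about $24.4B up-front cost and
# $5.5B in net annual recurring savings.
print(round(npv_savings(24.4, 5.5), 1))
```

Because the sketch assumes the full $5.5 billion accrues from year one, it yields a larger net present value than the report's roughly $50 billion estimate; the gap illustrates how much the implementation-period phasing matters.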
Some proposed actions represent some progress in emphasizing transformation and jointness, but progress in these efforts varied without clear agreement on transformational options to be considered, and many recommendations tended to foster jointness by consolidating functions within rather than across military services. The BRAC 2005 round is different from previous base closure rounds in terms of number of actions, projected implementation costs, and estimated annual recurring savings. While the number of major closures and realignments is just a little greater than in individual previous rounds, the number of minor closures and realignments, as shown in table 1, is significantly greater than those in all previous rounds combined. The large increase in minor closures and realignments is attributable partly to actions involving the Army National Guard, Army Reserve, Air National Guard, and vacating leased space. The costs to implement the proposed actions are $24.4 billion compared to a $22 billion total from the four previous rounds through 2001, the end of the 6-year implementation period for the 1995 BRAC round. The increase in costs is due partly to significant military construction and moving costs associated with Army recommendations to realign its force structure, and to recommendations to move activities from leased space onto military installations. For example, the Army projects that it will need about $2.3 billion in military construction funds to build facilities for the troops returning from overseas. Likewise, DOD projects that it will need an additional $1.3 billion to build facilities for recommendations that include activities being moved from leased space. Time will be required for these costs to be offset by savings from BRAC actions, and this in turn affects the point at which net annual recurring savings can begin to accrue.
Finally, the projected net annual recurring savings are $5.5 billion compared to net annual recurring savings of $2.6 billion and $1.7 billion for the 1993 and 1995 rounds, respectively. The increased savings are partly attributable to significant reductions in the number of military positions and business process reengineering efforts. DOD projects that the proposed recommendations would reduce excess infrastructure capacity, indicating that the plant replacement value of domestic installations would be reduced by about $27 billion, or 5 percent. However, the projected reductions in plant replacement value did not account for the $2.2 billion in domestic military construction projects associated with relocating forces from overseas. On the other hand, reductions in leased space are not considered in the plant replacement value analysis, since leased space is not government owned. DOD estimates that its recommendations will reduce leased space by about 12 million square feet. DOD projects that its proposed recommendations will produce nearly $50 billion in 20-year net present value savings, with net annual recurring savings of about $5.5 billion. There are limitations associated with the savings claimed from military personnel reductions, and we believe there is uncertainty regarding the magnitude of savings likely to be realized in other areas given unvalidated assumptions regarding expected efficiency gains from business process reengineering efforts and projected savings from sustainment, recapitalization, and base operating support. Table 2 summarizes the projected one-time cost, the cost or savings anticipated during the 6-year implementation period for the closure or realignment, the estimated net annual recurring savings, and the projected 20-year net present value costs or savings of DOD’s recommendations.
Table 2 also shows the Navy, Air Force, and joint cross-service groups all projecting net savings within the 6-year implementation period, as well as significant 20-year net savings. In contrast, because of the nature of the Army’s proposed actions and costs, such as providing infrastructure for troops returning from overseas and the consolidation and recapitalization of reserve facilities, the Army does not achieve net savings either during the implementation period or within 20 years, based on recommendations included in its BRAC report. Notwithstanding these projected savings, we identified limitations or uncertainties about the magnitude of savings likely to be realized. As figure 6 shows, 47 percent of the net annual recurring savings can be attributed to projected military personnel reductions. About 40 percent ($2.1 billion) of the projected net annual recurring savings can be attributed to savings from operation and maintenance activities, which include terminating or reducing property sustainment and recapitalization, base operating support, and civilian payroll. Furthermore, about $500 million of the “other” savings is based on business process reengineering efforts, but some of the assumptions supporting the expected efficiency gains have not been validated. Much of the projected net annual recurring savings (47 percent) are associated with eliminating positions currently held by military personnel; but rather than reducing end-strength levels, DOD indicates the positions are expected to be reassigned to other areas, limiting dollar savings available for other uses. For example, although the Air Force projects net annual recurring savings of about $732 million from eliminating about 10,200 military positions, Air Force officials stated the active duty positions will be reinvested to relieve stress on high demand career fields and the reserve positions to new missions yet to be identified. 
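The figure 6 breakdown can be sanity-checked with simple arithmetic against the $5.5 billion total. The dollar values and percentages are the report's; the computation below is ours, and because the report rounds each category, the shares do not sum exactly to 100 percent.

```python
total = 5.5  # net annual recurring savings, $ billions (report figure)
components = {
    "military personnel": 0.47 * total,     # report: 47 percent
    "operation and maintenance": 2.1,       # report: about 40 percent
    "reengineering (part of 'other')": 0.5, # report: about $500 million
}
for name, amount in components.items():
    print(f"{name}: ${amount:.2f}B ({amount / total:.0%} of total)")
```

The check shows 47 percent of $5.5 billion is about $2.6 billion for military personnel, and $2.1 billion is closer to 38 percent than 40, consistent with the report's rounded presentation.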
Likewise, the Army is projecting savings from eliminating about 5,800 military positions, but it has no plans to reduce its end-strength. Finally, the Navy is projecting it will eliminate about 4,000 active duty military positions, which a Navy official noted will help it achieve the end-strength reductions already planned. As we noted in our review of DOD’s process in the 1995 BRAC round, since these personnel will be assigned elsewhere rather than taken out of the force structure, they do not represent dollar savings that can be readily reallocated outside the personnel accounts. Without recognition that these are not dollar savings that can be readily applied elsewhere, DOD could create a false sense of savings available for use in other areas traditionally cited as beneficiaries of BRAC savings, such as modernization and better maintenance of remaining facilities. DOD is also projecting savings from the sustainment and recapitalization of facilities that are scheduled to be demolished, as well as from facilities that might remain in DOD’s real property inventory when activities are realigned from one base to another. For example, the Industrial Joint Cross-Service Group is claiming about $20 million in annual recurring savings from the recapitalization of facilities at installations responsible for destroying chemical weapons at three locations recommended for closure. However, the Army had already expected to demolish these chemical destruction facilities upon completing the destruction of the chemical weapons at each site, and the Army has not identified future missions for these installations. As a result, we do not believe it is appropriate for the Industrial Joint Cross-Service Group to claim any recapitalization savings related to these installations.
Likewise, DOD is projecting savings from the recapitalization and sustainment of facilities in cases where functions or activities would be realigned from one base to another. However, it is not clear to what extent the proposed realignments would result in an entire building or portion of a building being vacated, or if entire buildings are vacated, whether they would be declared excess and removed from the military services’ real property inventory. Our analysis shows that the supply and storage group’s recommendations project about $100 million in sustainment and recapitalization savings from realigning defense distribution depots. The group estimates its recommendations will vacate about 27 million square feet of storage space. Supply and storage officials told us their goal is to vacate as much space as possible by re-warehousing inventory and by reducing personnel spaces, but they do not have a specific plan for what will happen to the space once it is vacated. In addition, until these recommendations are ultimately approved and implemented, DOD will not be in a good position to know exactly how much space is available or how this space will be disposed of or utilized. As a result, it is unclear how much of the estimated $100 million in annual recurring savings will actually occur. Collectively, the issues we identified suggest the potential for reduced savings in the short term during the implementation period, which could further reduce the net annual recurring savings realized in the long term. The short-term impact is that these reduced savings could adversely affect DOD’s plans for using BRAC savings to help offset the up-front investment costs required to implement the recommendations and could further limit the amount of savings available for transformation and modernization purposes.
DOD’s projected net annual recurring savings in the “other” category shown in figure 6 include about $500 million based on business process reengineering efforts. Our analysis indicates that four recommendations—one from the Industrial Joint Cross-Service Group and three from the Supply and Storage Joint Cross-Service Group—involve primarily business process reengineering efforts. However, the expected efficiency gains from these recommendations are based on assumptions that are subject to some uncertainty and have not been validated. For example, our analysis indicates that $215 million, or 63 percent, of the estimated annual recurring savings from the Industrial Joint Cross-Service Group recommendation to create fleet readiness centers within the Navy is based on business reengineering efforts that would result in overhead efficiencies. Although the data suggest there is the potential for savings, we believe the magnitude of the savings is somewhat uncertain because the estimates are based on assumptions that have undergone only limited testing. Realizing the full extent of the savings would depend on actual implementation of the recommended actions and modifications to the Navy’s supply system. The industrial group and the Navy assumed that combining depot and intermediate maintenance levels would reduce the time needed for an item to be repaired at the intermediate level, which in turn would reduce the number of items needing to be kept in inventory, as well as the number of items being sent to a depot for repair. These assumptions, which were the major determinant of the realignment savings, were reportedly based on historical data and pilot projects and have not been independently reviewed or verified by the Naval Audit Service, the DOD Inspector General, or us.
Furthermore, our analysis indicates that $291 million, or about 72 percent, of the net annual recurring savings expected from the Supply and Storage Joint Cross-Service Group’s three recommendations are also based on business process reengineering. In the COBRA model, the savings are categorized as procurement savings and are based on the expanded use of performance-based logistics and reductions to duplicate inventory. Supply and storage group staff said that these savings accrue from reduced contract prices because the Defense Logistics Agency (DLA) will have increased buying power since it is responsible for purchasing many more items that were previously purchased by each of the services. In addition, savings accrue from increased use of performance-based agreements, a key component of performance-based logistics. The group estimates DLA can save 2.8 cents on each contract dollar placed on performance-based agreements. In addition, savings result from reductions in the amount of stock that must be held in inventory. Supply and storage staff said that these savings are attributable to reductions in the cost of money, cost of stock losses due to obsolescence, and cost of storage. Together, the group estimates, these factors save about 17 percent of the acquisition cost of the stock that is no longer required to be held in inventory. These savings estimates, for the most part, are based on historical documentation provided by DLA, which we did not have time to validate. The extent to which these same savings will be achieved in the future is uncertain. As noted above, how these actions are implemented could also affect savings. We are concerned that this is another area that could lead to a false sense of savings and lead to premature reductions in affected budgets in advance of actual savings being fully realized, as has sometimes occurred in past efforts to achieve savings through business process reengineering efforts.
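The two savings factors the group described reduce to simple rates: 2.8 cents per contract dollar placed on performance-based agreements, and about 17 percent of the acquisition cost of stock no longer held in inventory. A minimal sketch of that arithmetic follows; the input dollar amounts are hypothetical, not figures from the report.

```python
def projected_dla_savings(pba_contract_dollars, inventory_reduction_value):
    """Sketch of the two savings factors the supply and storage group
    described: 2.8 cents per dollar placed on performance-based
    agreements, and about 17 percent of the acquisition cost of stock
    no longer held (cost of money, obsolescence, and storage).
    The example inputs below are hypothetical."""
    contract_savings = 0.028 * pba_contract_dollars
    inventory_savings = 0.17 * inventory_reduction_value
    return contract_savings + inventory_savings

# Hypothetical example, in $ billions: $2.0B of contracts moved to
# performance-based agreements and $0.5B of inventory no longer required.
print(projected_dla_savings(2.0, 0.5))
```

The point of the sketch is that the estimate scales linearly with two inputs, the contract dollars shifted and the inventory eliminated, that themselves depend on how the recommendations are implemented.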
We are also concerned that it could exacerbate a problem we have previously identified regarding past BRAC rounds involving the lack of adequate systems in place to track and update savings resulting from BRAC actions—the focus of our recommendation for the Secretary of Defense. These concerns are reinforced by limitations in DOD’s financial management systems that historically have made it difficult to fully identify the costs of operations and provide a complete baseline from which to assess savings. While furthering transformation was one of the BRAC goals, there was no agreement between DOD and its components on what should be considered a transformational effort. As part of the BRAC process, the department developed over 200 transformational options for stationing and supporting forces as well as for increasing operational efficiency and effectiveness. The OSD BRAC office narrowed this list to 77 options, but agreement was not reached within the department on these options, so none of them were formally approved. Nonetheless, each service and joint cross-service group was permitted to use the transformational options as appropriate to support its candidate recommendations. Appendix XV has a list of these 77 draft options. Collectively, these draft options did not provide a clear definition of transformation across the department. The options ranged from those that seemed to be service specific to those that suggested new ways of doing business. For example, some transformational options included reducing the number of Army Reserve regional headquarters; optimizing Air Force squadrons; and co-locating various functions such as recruiting, military and civilian personnel training, and research, development and acquisition and test and evaluation, across the military departments. In contrast, some options suggested consideration of new ways of doing business, such as privatizing some functions and establishing a DOD agency to oversee depot-level reparables. 
While the transformational options were never formally approved, our analysis indicates that many of DOD’s recommendations reference one or more of the 77 transformational options. For example, 15 of the headquarters and support activities group recommendations reference the option to minimize leased space and move organizations in leased space to DOD-owned space. Likewise, 37 of the Army reserve component recommendations reference the option to co-locate guard and reserve units at active bases or consolidate guard and reserve units that are located in proximity to one another at one location. Conversely, a number of the scenarios that were initially considered but not adopted reference transformational options that could have changed existing business practices. For example, the education and training group developed a number of scenarios—privatizing graduate education programs and consolidating undergraduate fixed and rotary wing pilot training—based on the draft transformational options, but none were ultimately approved by the department. DOD’s recommendations make some progress toward the goal of fostering joint activity among the military services, based on a broad definition of joint activity. We found that for DOD’s recommendations, joint activity included consolidating some training functions within the same service, co-locating like organizations and functions on the same installation, and moving some organizations or functions closer to installations in order to further opportunities for joint training. Although the recommendations achieve some progress in fostering jointness, we found other instances where DOD ultimately adopted a service-centric solution even though the joint cross-service groups proposed a joint scenario. Table 3 shows the major recommendations that foster joint activity.
While the proposal to create joint bases by consolidating common installation management functions is projected to create greater efficiencies, our prior work suggests that implementation of these actions may prove challenging. The joint-basing recommendation involves one service being responsible for various installation management support functions at bases that share a common boundary or are in proximity to one another. For example, the Army would be the executive agent for Fort Lewis, Washington, and McChord Air Force Base, Washington, combined as Joint Base Lewis-McChord. However, as evident from our recent visit to both installations and discussions with base officials, concerns over obstacles, such as seeking efficiencies at the expense of the mission, could jeopardize smooth and successful implementation of the recommendation. In some cases, the joint cross-service groups proposed scenarios that would have merged various support functions among the services, but a service solution was adopted by DOD. For example, the Headquarters and Support Activities Joint Cross-Service Group proposed to (1) consolidate civilian personnel offices under a new defense agency as DOD implements the national security personnel system, and (2) co-locate all military personnel centers in San Antonio, Texas, in anticipation of a standard military personnel system being implemented across the department. However, in both cases, DOD decided to consolidate military and civilian personnel centers within each service. Likewise, the Education and Training Joint Cross-Service Group proposed scenarios to consolidate undergraduate fixed wing training activities between the Air Force and the Navy and rotary wing training activities between the Navy and the Army to eliminate excess capacity.
However, the proposals were not adopted because the Navy and the Air Force expressed concerns that they would result in significant permanent change of station costs for the services, specifically the cost of students traveling to designated training locations. Based on our analytical work, we believe DOD established and generally followed a logical and reasoned process for formulating its list of BRAC recommendations. The process was organized in a largely sequential manner with a strong emphasis on ensuring that accurate data were obtained and used. OSD established an oversight structure that allowed the seven individual joint cross-service groups to play a larger, more visible role in the 2005 BRAC process than in BRAC 1995. Despite some overlap in data collection and other phases of the process, these groups and the military services generally followed the sequential BRAC process designed to evaluate and subsequently identify recommendations within their respective areas, with only the Army using a separate but parallel process to evaluate its reserve components. DOD also incorporated into its analytical process several key considerations required by the BRAC legislation, including the use of certified data, basing its analysis on its 20-year force structure plan, and emphasizing its military value selection criteria, which included homeland defense and surge capabilities. In addition, DOD’s Inspector General and the military service audit agencies helped to ensure the data used during the BRAC process were accurate and reliable. DOD provided overall policy guidance for the BRAC process, including a requirement that its components develop and implement internal control plans to ensure the accuracy and consistency of their data collection and analyses. These plans also helped to ensure the overall integrity of the process and the information upon which OSD considered each group’s recommendations. 
The BRAC recommendations, for the most part, resulted from a data-intensive process that was supplemented by the use of military judgment as needed. The process proceeded through a set of sequential steps: assessing capacity and military value, developing and analyzing scenarios, and identifying candidate recommendations, which led to OSD’s final list of BRAC recommendations. Figure 7 illustrates the overall sequential analytical process DOD generally employed to reach BRAC recommendations. However, while DOD largely followed the sequential process it established, initial difficulties associated with obtaining complete and accurate data in a timely manner led to overlap and varying degrees of concurrency between data collection efforts and other steps in the process. During the 2005 BRAC process, the seven individual joint cross-service groups played a larger, more visible role compared to their role during the 1995 BRAC round. Our analysis indicates that many, although not all, actions proposed by these groups were accepted by OSD and the military services. Based on lessons learned, OSD empowered these groups in 2005 to suggest BRAC recommendations directly to a senior-level group that oversaw the BRAC 2005 analysis. Moreover, we noted closer coordination between these groups, the military services, and OSD than existed during the 1995 round. OSD’s efforts to integrate the process among these seven joint cross-service groups with the military services’ own efforts led to increased discussions, greater visibility, and more influence for the cross-service recommendations than in prior BRAC rounds. To assist in the process for analyzing and developing recommendations, the military services and joint cross-service groups used various analytical tools. These tools helped to ensure a more consistent approach to BRAC analysis and decision making. 
For example, all of the groups used the DOD-approved COBRA model to calculate costs, savings, and return on investment for BRAC scenarios and, ultimately, for the final 222 BRAC recommendations. As noted in appendix XIII, the COBRA model was designed to provide consistency across the military services and the joint cross-service groups in estimating BRAC costs and savings. DOD has used the COBRA model in each of the previous BRAC rounds and, over time, has improved upon its design to provide better estimating capability. In our past and current reviews of the COBRA model, we found it to be a generally reasonable estimator for comparing potential costs and savings among various BRAC options. Furthermore, the military services and joint cross-service groups generally used a consistent process to assess and formulate BRAC recommendations, with one minor exception involving the Army reserve components. The Army created a separate but parallel approach for reviewing its reserve components for several reasons, although it generally followed the BRAC process. With respect to its reserve components, the Army did not perform a military value rank-ordering of these various installations across the country, but instead assessed the relative military value that could be obtained by consolidating various facilities into a joint facility in specific geographical locales to support, among other things, reserve component training, recruiting, and retention efforts. This approach provided an opportunity for the Army reserve components to actively participate in the BRAC process along with the voluntary participation of the states. The Army reported that consulting with the states was crucial to ensure the support of the state governors and state Adjutants General for issues related to recommendations that affected the National Guard. The Army’s recommendations affected almost 10 percent of the Army’s 4,000 reserve component facilities. 
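COBRA’s internal algorithms are not described in this report, but the kind of comparison it supports can be sketched in simplified form. The function below computes a 20-year net present value for a scenario from an up-front cost and a constant annual savings stream; the function name, discount rate, and all figures are hypothetical illustrations, not DOD’s actual model.

```python
def npv_20yr(one_time_cost, annual_savings, discount_rate=0.028, years=20):
    """Sketch of a 20-year net present value: an up-front cost in year 0
    followed by a constant stream of annual recurring savings, discounted
    back to the present. All figures and the rate are hypothetical."""
    pv_savings = sum(annual_savings / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return pv_savings - one_time_cost

# Comparing two hypothetical scenarios (figures in millions of dollars):
scenario_a = npv_20yr(one_time_cost=150, annual_savings=20)  # positive NPV
scenario_b = npv_20yr(one_time_cost=400, annual_savings=25)  # negative NPV
```

A consistent calculation like this is what allows dissimilar scenarios from different services and groups to be ranked on a common monetary footing.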
More specifically, the Army recommended 176 Army Reserve closures with the understanding that the state governors would close 211 Army National Guard facilities, with the intent of relocating their units into 125 new Armed Forces Reserve Centers. The Army reports that 38 states and Puerto Rico voluntarily participated in the BRAC process. The Air Force and the Navy also reviewed their reserve components’ installations but did so within the common analytical structure established by OSD, with some differences in how they involved affected stakeholders in the process. For example, the Air Force did not involve state officials or its State Adjutants General as it analyzed and developed its BRAC recommendations. However, senior Air National Guard and Reserve leadership were in attendance as voting members of the Air Force’s Base Closure Executive Group, a senior deliberative body for the BRAC process. The Navy also reviewed its reserve components, including the Marine Corps Reserves, within the BRAC process, and worked closely with representatives from the Navy and Marine Corps reserve components to consolidate units within active duty installations or armed forces reserve centers without affecting recruiting demographics. DOD also incorporated into its analytical process the legal considerations for formulating its realignment and closure recommendations. As required by BRAC legislation, DOD based its recommendations on (1) the use of certified data, (2) its 20-year force structure plan, and (3) military value criteria as the primary consideration in assessing and formulating its recommendations. DOD collected capacity and military value data that were certified as to their accuracy by hundreds of persons in senior leadership positions across the country. These certified data were obtained from corporate databases and from hundreds of defense installations. 
DOD continued to collect certified data, as needed, to support follow-up questions, cost calculations, and the development of recommendations. In total, DOD estimates that it collected over 25 million pieces of data as part of the BRAC process. Given the extensive volume of data requested by the 10 separate groups (3 military departments and 7 joint cross-service groups), the data collection process was quite lengthy and required significant efforts to help ensure data accuracy. This was particularly true for the joint cross-service groups, which were attempting to obtain common data across multiple military components that, because of the diverse nature of their functions and activities, do not always use the same data metrics. In some cases, coordinating data requests, clarifying questions and answers, controlling database entries, and other issues led to delays in the data-driven analysis DOD originally envisioned. As a result, some groups had to develop strategy-based proposals. As time progressed, however, these groups reported that they obtained the needed data, for the most part, to inform and support their scenarios. The DOD Inspector General and the services’ audit agencies played an important role in ensuring that the data used in the BRAC analyses were accurate and certified by cognizant senior officials. As congressionally mandated, each of the military services and the seven joint cross-service groups considered DOD’s 20-year force structure plan in its analyses. DOD based its force structure plan for BRAC purposes on an assessment of probable threats to national security during a 20-year period beginning with fiscal year 2005. DOD provided this plan to Congress in March 2004 and, as authorized by the statute, updated it in March 2005. Based on our analysis, updates to the force structure plan affected some ongoing BRAC analyses. 
For example, the Industrial Joint Cross-Service Group reassessed its data pertaining to overhauling and repairing ships based on the updated force structure outlook and decided that one of its two smaller shipyards—Naval Shipyard Pearl Harbor or Naval Shipyard Portsmouth—could close. Ultimately, the Navy decided to close the Portsmouth shipyard in Maine. In addition, the Navy told us it recalculated its capacity based on updates to the force structure plan and determined that there was no significant change to its original analysis. The other groups, such as those examining headquarters and support activities, education and training, or technical functions, considered updates to the 20-year force structure plan and determined the changes would have no impact on their ongoing analyses or the development of recommendations. DOD gave primary consideration to its military value selection criteria in its process. Specifically, military value refers to the first four selection criteria in figure 2 and includes an installation’s current and future mission capabilities, condition, ability to accommodate future needs, and cost of operations. The manner in which each military service or joint cross-service group approached its analysis of military value varied according to the unique aspects of the individual service or cross-service function. These groups typically assessed military value by identifying multiple attributes or characteristics related to each military value criterion, then identifying qualitative metrics and measures and associated questions to collect data to support the overall military value analysis. For example, figure 8 illustrates how the Technical Joint Cross-Service Group linked several of its military value attributes, metrics, and data questions to the mandated military value criteria. 
Each military service or joint cross-service group developed a quantitative scoring plan that assigned relative weights to each of the military value criteria for use in evaluating and ranking facilities or functions in its respective area. Appendixes III through XII highlight the use and linkages of military value criteria by each service and joint cross-service group. As noted earlier, based on congressional direction, there was enhanced emphasis on two aspects of military value—an installation’s ability to serve as a staging area for homeland defense missions and its ability to meet unanticipated surge requirements. Homeland defense: Each of the three military services considered homeland defense roles in its BRAC analysis and coordinated with the U.S. Northern Command—a unified command responsible for homeland defense and civil support. In October 2004, the U.S. Northern Command contacted the Chairman of the Joint Chiefs of Staff, requesting to play a role in ensuring that homeland defense received appropriate attention in the analytical process. Our analysis shows that all three military departments factored in homeland defense needs, with the Air Force recommendations having the most impact. According to Air Force officials, the U.S. Northern Command identified specific homeland defense missions assigned to the Air Force, which the Air Force incorporated into its decision-making process. Navy officials likewise discussed the impact of potential BRAC scenarios on the Navy’s maritime homeland defense mission with U.S. Northern Command, U.S. Strategic Command, and the U.S. Coast Guard. In this regard, the Navy’s decision to retain Naval Air Station Point Mugu, California, was influenced, in part, by the U.S. Coast Guard’s desire to consolidate its West Coast aviation assets at this installation for homeland defense purposes. According to Army officials, most of the Army’s role in supporting homeland defense is carried out by the Army National Guard. The U.S. 
Northern Command reviewed the recommendations and found no unacceptable risk to the homeland defense mission and support to civil authorities. Surge: DOD left it to each military service and joint cross-service group to determine how surge would be considered in their analyses. Generally, all the groups considered surge by retaining a certain percentage of infrastructure, making more frequent use of existing infrastructure, or retaining difficult-to-reconstitute assets. For example, the Technical Joint Cross-Service Group set aside 10 percent of its facility infrastructure for surge, while the Industrial Joint Cross-Service Group factored in additional work shifts in its analysis. The military services relied primarily on retaining difficult-to-reconstitute assets to satisfy the statutory requirement to consider surge capability. Both the Army and Navy gave strong consideration to infrastructure that would be difficult to reconstitute, such as large tracts of land for maneuver training purposes or berthing space for docking ships. For example, the Navy has a finite number of ships and aircraft and would likely have to increase operating tempo to meet surge needs. The Air Force addressed surge by retaining sufficient capacity to absorb temporary increases in operations, such as responding to emergencies or catastrophic natural events like hurricanes, as well as the capacity to permanently relocate to the United States all of its aircraft stationed overseas, if needed. Congress also mandated four other criteria to be considered in the analytical process: cost and savings of the BRAC recommendations, economic impact on affected communities, impact on communities’ infrastructure, and environmental impact. The extent to which these other mandated considerations influenced recommendations varied. 
For example, high cost was the primary reason the Army decided not to develop a recommendation to restation troops returning from overseas to installations with large tracts of undeveloped land that could potentially accommodate these moves, such as Yuma Proving Ground, Arizona, or Dugway Proving Ground, Utah. Despite these installations having the capacity to provide large training ranges, they do not have existing infrastructure to immediately house the 3,000 to 5,000 troops required for the Army’s new modular combat brigades. Initially, the Army assessed the possibility of building new infrastructure at these locations, but Army BRAC officials told us it would be too costly; the Army’s COBRA analysis showed that at Yuma, for example, it would cost about $2 billion to build the required infrastructure. As a result, the Army decided to place units returning from overseas at installations currently used to base other operational units, notwithstanding limitations in existing training capacities. Although there was heavy reliance on data for completing analyses, military judgment was also a factor throughout the entire process, starting with the analytical framework used to assess the 20-year force structure plan and ending with the final list of 222 recommendations submitted to the BRAC Commission. Military judgment also played a role in decisions on how military value selection criteria would be captured as attributes, with associated values or weights, and in deciding which proposed scenarios or actions should move forward for additional analysis. 
Generally, military judgment was exercised at this stage to delete or modify a potential recommendation for reasons such as strategic importance, as shown in the following examples:
- Naval Shipyard Pearl Harbor, Hawaii, which has a lower military value than other shipyards, was eliminated from closure consideration because the shipyard was considered to have more strategic significance in the Pacific Ocean area compared to other alternatives.
- Tripler Army Medical Center, Hawaii, which has a lower military value than some other bases, was eliminated from closure consideration because it is the only defense medical center of significant size in the Pacific Ocean area.
- Naval Station Everett, Washington, which has a lower military value than some other bases, was eliminated from closure consideration because of strategic reasons regarding the number and the locations of the Navy’s aircraft carriers on the West Coast and in the Pacific.
- Grand Forks Air Force Base, North Dakota, which has a lower military value than some other bases, was eliminated from closure consideration because of the belief that a strategic presence was needed in the north central United States. Even though Grand Forks Air Force Base was retained for strategic reasons, Minot Air Force Base is also located in North Dakota and is not affected by any BRAC recommendations.

The oversight roles of the DOD Inspector General and the military services’ audit agency staff, given their access to relevant information and officials as the process evolved, helped to improve the accuracy of the data used in the BRAC process. The DOD Inspector General and most of the individual service audit agencies’ reports generally concluded that the extensive amount of data used as the basis for BRAC decisions was sufficiently valid and accurate for the purposes intended. In addition, with limited exceptions, these reports did not identify any material issues that would impede a BRAC recommendation. 
The DOD Inspector General and the services’ audit agencies played an important role in ensuring that the data used in the BRAC analyses were accurate and certified by cognizant senior officials. Their frontline roles and the thousands of staff days devoted to reviewing the massive data collection efforts associated with the BRAC process added to the quality and integrity of the data used by the military services and joint cross-service groups. Through extensive audits of the capacity, military value, and scenario data collected from field activities, these audit agencies notified various BRAC teams of data discrepancies for corrective action. The audit activities included validation of data, compliance with data certification requirements employed throughout the chain of command, and examination of the accuracy of the analytical data. While the auditors initially encountered problems with regard to data accuracy and the lack of supporting documentation for certain questions and data elements, most of these concerns were resolved. In addition, the auditors worked to ensure certified information was used for BRAC analysis. These audit agencies also reviewed other facets of the process, including the various internal control plans, the COBRA model, and other modeling and analytical tools that were used in the development of recommendations. Appendix XVI lists these organizations’ audit reports related to BRAC 2005 to the extent they were available at the time this report was completed. Overall, these audit agencies reported the following:
- The Naval Audit Service reported that it visited 214 sites, covering 45 data calls, and audited over 8,300 questions. It concluded that the data appeared reasonably accurate and complete and that the Navy complied with statutory guidance and DOD policies and procedures.
- Air Force Audit Agency officials told us they visited 104 installations; reviewed over 11,110 data call responses at 126 Air Force locations, 8 major commands, the Air National Guard, and Headquarters Air Force; and concluded that the data used for Air Force BRAC analysis were generally reliable.
- The Army Audit Agency reported that it visited 32 installations and 3 leased facilities and reviewed for accuracy over 2,342 responses. It concluded that the data were reasonably accurate and that the Army BRAC office had a sound process in place to collect certified data.
- DOD Inspector General officials told us they visited about 1,550 sites covering 29 defense agencies and organizations and reviewed over 15,770 responses. We were told that these responses were generally supported, complete, and reasonable. The DOD Inspector General also evaluated the validity, integrity, and documentation of data used by the seven joint cross-service groups and found they generally used certified data for the BRAC analysis.

We closely coordinated with the DOD Inspector General and the three service audit agencies to maximize our individual and collective efforts and avoid duplication. As part of this coordination, we observed their audit efforts at selected military installations to verify the scope and quality of coverage they provided throughout the process and to gain insights into potential issues having broader applicability across the entire process. We also observed the work of these audit agencies to better familiarize ourselves with the types of issues being identified and resolved, with a view toward determining their materiality to the overall process. We identified issues regarding DOD’s recommendations and other actions considered during the selection process that may warrant further attention by the BRAC Commission. 
Many of the issues relate to how costs and savings were estimated, while others relate to potential impacts on communities surrounding bases that stand to gain or lose missions and personnel as a result of BRAC actions. Further, we highlight candidate recommendations, presented during the selection process by either the military services or the joint cross-service groups to senior DOD leadership within the IEC, that were projected to generate significant savings but were substantially revised or deleted from further consideration during the last few weeks or days of the selection process. Additional discussion of issues targeted more specifically to the work and recommendations of the military services and joint cross-service groups is included in appendixes III through XII. We identified a number of issues, most of which apply to a broad range of DOD’s recommendations, that may warrant further attention by the BRAC Commission. In addition to the issue previously discussed regarding military personnel eliminations being claimed as savings to the department, other issues include (1) instances of lengthy payback periods (the time required to recoup up-front investment costs), (2) inconsistencies in how DOD estimated costs for BRAC actions involving military construction projects, (3) uncertainties in estimating the total costs to the government to implement DOD’s recommended actions, and (4) potential impacts on communities surrounding bases that are expected to gain large numbers of personnel if DOD’s recommendations are implemented. Many of the 222 recommendations DOD made in the 2005 round are associated with lengthy payback periods, which, in some cases, call into question whether the department would gain sufficient monetary value for the up-front investment cost required to implement its recommendations, given the time required to recover this investment. 
Our analysis indicates that 143, or 64 percent, of DOD’s recommendations are associated with payback periods of 6 years or less, while 79, or 36 percent, are associated with lengthier paybacks that exceed the 6-year mark or never produce savings. DOD officials acknowledge that the additional objectives of fostering jointness and transformation have had some effect on generating recommendations with longer payback periods. Furthermore, our analysis shows that the number of recommendations with lengthy payback periods varied across the military services and the joint cross-service groups, as shown in table 4. As shown in table 4, the Army has five recommendations and the education and training group has one recommendation that never pay back, as described below:
- Army realignment of a special forces unit from Fort Bragg, North Carolina, to Eglin Air Force Base, Florida;
- Army realignment of a heavy brigade from Fort Hood, Texas;
- Army realignment of a heavy brigade to Fort Bliss, Texas, and infantry and aviation units to Fort Riley, Kansas;
- Army reserve component consolidations in Minnesota;
- Army reserve component consolidations in North Dakota; and
- the Education and Training Joint Cross-Service Group’s establishment of Joint Strike Fighter aircraft training at Eglin Air Force Base, Florida.

According to Army officials, their five recommendations have no payback because, in part, the Army must build additional facilities to accommodate the return of about 47,000 forces currently stationed overseas to the United States as part of DOD’s Integrated Global Presence and Basing Strategy initiative (see app. III for further discussion of the restationing initiative). 
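The payback groupings above (6 years or less, more than 6 years, never) follow from a simple relationship between the one-time investment and the annual recurring net savings. The sketch below uses an undiscounted calculation with hypothetical figures; it is an illustration of the concept, not DOD’s actual COBRA computation.

```python
import math

def payback_years(one_time_cost, annual_net_savings):
    """Undiscounted payback: years of annual net savings needed to recoup
    the one-time investment. A recommendation whose recurring cash flow is
    zero or negative never pays back. All figures are hypothetical."""
    if annual_net_savings <= 0:
        return math.inf
    return one_time_cost / annual_net_savings

def classify(one_time_cost, annual_net_savings, threshold=6):
    years = payback_years(one_time_cost, annual_net_savings)
    if math.isinf(years):
        return "never pays back"
    return "6 years or less" if years <= threshold else "more than 6 years"

# Hypothetical examples (figures in millions of dollars):
print(classify(120, 30))   # 4-year payback
print(classify(500, 10))   # 50-year payback
print(classify(200, -5))   # recurring costs exceed savings
```

The "never pays back" branch mirrors the Army reserve component cases described above, where new construction costs are incurred without a positive recurring savings stream to offset them.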
According to the education and training group, its one recommendation with no payback period is due to the high military construction costs associated with the new mission to consolidate initial training for the Joint Strike Fighter aircraft for the Navy, the Marine Corps, and the Air Force. In addition, the Army accounts for nearly 50 percent of the total number of DOD recommendations with payback periods of 10 years or longer. Our analysis of Army data shows that these lengthy paybacks are attributable to many of the recommendations regarding the reserve components. These recommendations typically have a combination of relatively high military construction costs and relatively low annual recurring savings, which tend to lengthen the payback period. We also identified some portions of DOD’s individual recommendations that are associated with lengthy payback periods for certain BRAC actions but are embedded within larger bundled recommendations. The following are a few examples: A proposal initially developed by the Headquarters and Support Activities Joint Cross-Service Group to move the Army Materiel Command from Fort Belvoir, Virginia, to Redstone Arsenal, Alabama, had more than a 100-year payback period with a net cost over a 20-year period. However, the proposal did not include some expected savings that, if included, would have reduced the payback period to 32 years. Concurrently, the group developed a separate proposal to relocate various Army offices from leased and government-owned office space onto Fort Sam Houston, Texas, which would have resulted in a 3-year payback period. The headquarters group decided to combine these two stand-alone proposals into one recommendation, resulting in an expected 20-year net present value savings of about $123 million with a 10-year payback. Many of the individual Air Force proposals involving the Air National Guard and Air Force Reserve had payback periods ranging from 10 to more than 100 years. 
These individual proposals were subsequently revised by combining them with other related proposals to produce recommendations that had significant savings, minimized the longer payback periods, and linked operational realignment actions. We found that this change occurred in the realignment of Lambert-St. Louis International Airport Air Guard Station, Missouri, which originally had a 63-year payback period and resulted in a 20-year net present value cost of about $22 million. However, this realignment is now part of the closure of Otis Air National Guard Base, Massachusetts, and the realignment of Atlantic City Air Guard Station, New Jersey. The combined recommendation results in a 20-year net present value savings of $336 million and a 3-year payback period. While the military services used the COBRA model to estimate the costs for military construction projects needed to implement BRAC recommendations, we found some inconsistencies in how they estimated certain costs associated with these projects. While the impact of these inconsistencies on savings is likely not as great as that of others noted in this report, these inconsistencies nevertheless contribute to the overall imprecision of the cost estimates of DOD’s recommended actions. One area of inconsistent accounting involves the relative amounts of estimated support costs—such as the cost of connecting a new facility to existing water, sewage, and electrical systems—associated with military construction projects across the services. In its estimates, the Army considered these additional support costs as one-time costs, whereas the Navy and the Air Force included them in the cost of each military construction project. By including these support costs in the cost of each project, the Navy and the Air Force generally generated higher relative recurring costs than the Army for the recapitalization of facilities over time. 
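The bundling effect described above for the Air Guard realignments can be illustrated arithmetically: when a slow-payback proposal is combined with one that produces large recurring savings, the bundle’s payback reflects the pooled cash flows. The sketch below uses hypothetical (one-time cost, annual savings) pairs chosen only to mimic the shift from a 63-year to a 3-year payback; it is not the actual COBRA data.

```python
def combined_payback(proposals):
    """Undiscounted payback for a bundle of proposals, each given as a
    (one_time_cost, annual_net_savings) pair in millions of dollars.
    The figures used below are hypothetical illustrations."""
    total_cost = sum(cost for cost, _ in proposals)
    total_savings = sum(savings for _, savings in proposals)
    return total_cost / total_savings if total_savings > 0 else float("inf")

slow = (63, 1)    # alone: 63-year payback (small savings stream)
fast = (30, 30)   # alone: 1-year payback (large savings stream)

# Bundled, the larger savings stream absorbs the slow proposal's cost:
# (63 + 30) / (1 + 30) = 3 years.
print(combined_payback([slow, fast]))
```

This is why a bundled recommendation can show an attractive payback even when one of its embedded actions, viewed alone, would never clear a reasonable investment threshold.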
Specifically, the Army increased its military construction cost estimates by 18.5 percent to account for connecting the projected new facilities to utilities. The Air Force, on the other hand, increased its construction costs for support services from 8 to 40 percent, depending on the type of facility, while the Navy included support costs at only two locations. According to the Special Assistant to the Secretary of the Navy for BRAC, the Navy assigned teams to review all proposed military construction projects by location to determine any support costs necessary for connection of utilities. Our analysis shows that had the Army used the same methodology as the Navy and the Air Force, it would incur about $66 million in additional recapitalization costs for all of its proposed military construction projects. The services were also inconsistent in considering the costs associated with meeting DOD’s antiterrorism force protection standards in their estimated costs for military construction projects. The Air Force increased the expected costs of its military construction projects by 2.3 percent, or about $18 million, to meet DOD’s standards. Air Force officials noted that these funds would provide enhancements such as security barriers and blast-proof windows. The Army and the Navy, on the other hand, did not include additional costs to meet the department’s standards in their proposed military construction projects. Had the Army and the Navy estimated costs similarly to the Air Force, the costs of their proposed military construction projects would have increased by about $146 million and $25 million, respectively. DOD’s cost and savings estimates for implementing its recommendations do not fully reflect all expected costs or savings that may accrue to the federal government. 
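The effect of the differing support-cost conventions can be shown numerically. In the sketch below, the $100 million base project cost is a hypothetical stand-in, while the 18.5 percent, 8 to 40 percent, and 2.3 percent factors are the ones cited above.

```python
def with_markup(base_cost, support_pct):
    """Construction cost after adding a support-cost factor (for example,
    utility connections or force protection features) expressed as a
    percentage of the base cost. Base figures are hypothetical."""
    return base_cost * (1 + support_pct / 100)

# A hypothetical $100 million project under each convention described above:
army_utilities = with_markup(100, 18.5)      # flat 18.5 percent for utilities
air_force_low = with_markup(100, 8)          # low end of the Air Force range
air_force_high = with_markup(100, 40)        # high end of the Air Force range
force_protection = with_markup(100, 2.3)     # Air Force antiterrorism factor
```

Because these factors feed into recurring recapitalization estimates, applying or omitting them changes not just one-time costs but the long-run cost comparison between services, which is the inconsistency noted above.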
The BRAC legislation requires that DOD take into account the effect of a proposed closure or realignment on the costs of any other activity of the department or any other federal agency that may be required to assume responsibility for activities at military installations. While the services and joint cross-service groups were aware of the potential for these costs, estimated costs were not included in the cost and savings analysis because it was unclear what actions an agency might take in response to a BRAC action. One such agency was the U.S. Coast Guard, which currently maintains some of its ships or various units at several installations that are slated to close. Navy BRAC officials briefed the U.S. Coast Guard on the Navy’s recommendations before the list was published, but the Air Force did not meet with the Coast Guard. The U.S. Coast Guard was still in the process of evaluating various responses to the proposed BRAC actions and did not complete its analysis in time for it to be included in this report. Further, as noted earlier, estimated costs for the environmental restoration of bases undergoing closure or realignment are not included in DOD’s cost and savings analyses. Such costs would be difficult to project fully at this point because the planned reuse of the unneeded property is not yet known. Consistent with the prior BRAC rounds, DOD excluded estimates for base environmental restoration actions from its cost and savings analysis and from its determination of payback periods, on the premise that restoration is a liability that the department must address regardless of whether a base is kept open or closed and therefore should not be included in the COBRA analysis. Nevertheless, DOD did give consideration to such costs in addressing selection criterion 8, and included available information on estimated restoration costs as part of the data supporting its BRAC recommendations. 
DOD estimates that the restoration costs to implement its major closures would be about $949 million, as shown in table 5. (See fig. 4 in the Background section for a map of DOD’s major base closures.) Based on the data provided, the Army would incur the largest share of estimated restoration costs due to the closure of several ammunition plants and chemical depots. The largest expected costs for any one location across DOD, about $383 million, would be for restoration at Hawthorne Army Depot, Nevada. While the DOD report does not specifically identify the potential for some additional restoration costs at its installations, available supporting documentation does identify some additional costs. For example, the Army estimated the range restoration at Hawthorne Army Depot could cost from about $27 million to $147 million, which is not included in the estimates in table 5. Further, the Army recognizes that additional restoration costs could be incurred at six other locations that have ranges and chemical munitions, but these costs have not yet been determined. Our prior work has shown that environmental costs can be significant, as evidenced by the nearly $12 billion in total cost DOD expected to incur when all restoration actions associated with the prior BRAC rounds are completed. Service officials told us that the projected cost estimates for environmental restoration are lower, in general, because the environmental condition of today’s bases is much better than the condition of bases closed during the prior BRAC rounds, primarily because of DOD’s ongoing active base environmental restoration program. Nonetheless, our prior work has indicated that as closures are implemented, more intensive environmental investigations occur and additional hazardous conditions may be uncovered that could result in additional, unanticipated restoration and higher costs.
Finally, the services’ preliminary estimates are based on restoration standards that are applicable for the current use of the base property. Because reuse plans developed by communities receiving former base property sometimes reflect different uses for the property, restoration to more stringent, and thus more expensive, standards could be required in many cases. Based on experience from prior BRAC rounds, we believe other costs are also likely to be incurred that, although not required to be included in DOD’s cost and savings analysis, could add to the total cost to the government of implementing the BRAC round. These costs include transition assistance, planning grants, and other assistance made available to affected communities by DOD and other agencies. DOD officials told us that such estimates were not included in the prior rounds’ analyses and that it was too difficult to project these costs, given the unknown factors associated with the number of communities affected and the costs that would be required to assist them. Additionally, as we reported in January 2005, in the prior four BRAC rounds, DOD’s Office of Economic Adjustment, the Department of Labor, the Economic Development Administration within the Department of Commerce, and the Federal Aviation Administration provided nearly $2 billion in assistance through fiscal year 2004 to communities and individuals, and according to DOD officials, these agencies are slated to perform similar roles for the 2005 round. However, while the magnitude of this assistance is unknown at this time, it is important to note that assistance will likely be needed in this round, as contrasted with prior rounds, not only for those communities that surround bases losing missions and personnel but also for communities that face considerable challenges dealing with large influxes of personnel and military missions.
For example, DOD stated in its 2005 BRAC report that over 100 actions significantly affect local communities, triggering federal assistance from DOD and other federal agencies. Also, as discussed more fully later, the number of bases in the 2005 BRAC round that will gain several thousand personnel from the recommended actions could increase pressure for federal assistance to mitigate the impact on community infrastructure, such as schools and roads, with the potential for more costs than in the prior rounds. Finally, the BRAC costs and savings estimates do not include any anticipated revenue from such actions as the sale of unneeded former base property or the transfer of property to communities through economic development conveyances. The potential for significant revenue may exist at certain locations. For example, the Navy sold some unneeded property from prior round actions in California at the former El Toro Marine Corps Air Station for about $650 million and the former Tustin Marine Corps Air Station for $208.5 million. The extent to which sales will play a role in the disposal of unneeded property arising from the 2005 BRAC round remains to be seen. The recommended actions for the 2005 BRAC round will have varying degrees of impact on communities surrounding bases undergoing a closure or realignment. While some will face economic recovery challenges as a result of a closure and associated losses of base personnel, others, which expect large influxes of personnel due to increased base activity, face a different set of challenges involving community infrastructure necessary to accommodate growth. In examining the economic impact of the 222 BRAC recommendations as measured by the percentage of employment, DOD data indicate that most economic areas across the country are expected to be affected very little but a few could face substantial impact. 
Almost 83 percent of the 244 economic areas affected by BRAC recommendations fall between a 1 percent loss in employment and a 1 percent gain in employment. Slightly more than 9 percent of the economic areas had a negative economic impact of greater than 1 percent, and for some of these areas the projected impact is fairly significant, with potential direct and indirect losses of up to nearly 21 percent. Almost 8 percent of the economic areas had a positive economic impact greater than 1 percent. Appendix XIV provides additional detail on our economic analyses. Of those communities facing potential negative economic impact, six communities face the potential for a fairly significant impact. They include communities surrounding Cannon Air Force Base, New Mexico; Hawthorne Army Depot, Nevada; Naval Support Activity Crane, Indiana; Submarine Base New London, Connecticut; Eielson Air Force Base, Alaska; and Ellsworth Air Force Base, South Dakota, where the negative impact on employment as a percent of area employment ranges from 8.5 percent to 20.5 percent. Our prior work has shown that a variety of factors will affect how quickly communities are able to rebound from the negative economic consequences of closures and realignments. They include such factors as the trends associated with the national, regional, and local economies; natural and labor resources; effective planning for reuse of base property; and federal, state, and local government assistance to facilitate transition planning and execution. In a series of reports that have assessed the progress in implementing closures and realignments in prior BRAC rounds, we reported that most communities surrounding closed bases have been faring well in relation to key national economic indicators—unemployment rate and the average annual real per capita income growth rates.
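The employment figures above combine direct job losses with estimated indirect (ripple) effects and express the sum against total area employment. A minimal sketch of that arithmetic follows, using made-up numbers for a small economic area; the function and its inputs are ours for illustration and do not reproduce DOD's economic impact model:

```python
def employment_impact_pct(direct_change, indirect_change, area_employment):
    """Direct plus indirect job change as a percentage of total area
    employment; negative values indicate a loss. Illustrative only."""
    return 100.0 * (direct_change + indirect_change) / area_employment

# Hypothetical small economic area: 3,000 direct and 1,500 indirect jobs
# lost against 22,000 total area jobs.
impact = employment_impact_pct(-3000, -1500, 22000)
print(f"{impact:.1f}%")  # prints "-20.5%"
```

The same percentage loss represents very different absolute job counts in large and small economic areas, which is why small areas dominate the high end of the impact range.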
In our January 2005 report, for example, we further reported that while some communities surrounding closed bases were faring better than others, most have recovered or are continuing to recover from the impact of BRAC, with more mixed results recently, allowing for some negative impact from the nationwide economic downturn in recent years. The 2005 round, however, also has the potential to significantly affect a number of communities surrounding installations that are expected to experience considerable growth in the numbers of military, civilian, and civilian support personnel. These personnel increases are likely to place additional demands on community services, such as providing adequate housing and schools, which the communities may not have adequate resources to address in the short term. The total gains can be much greater than these personnel figures once accompanying family members are considered. Table 6 shows that 20 installations are expected to realize gains of over 2,000 military, civilian, and mission support contractor personnel for an aggregate increase of more than 106,000 personnel. As shown in table 6, most of the gaining installations are Army installations, with the gains attributable to a number of actions, including the return of large numbers of personnel from overseas locations under DOD’s integrated global presence and basing strategy and the consolidation of various activities, such as combat support-related activities at Fort Lee, Virginia. Fort Belvoir, Virginia, has the largest expected growth, due in large measure to some consolidation of various activities from lease space in the Washington, D.C., area. The challenges facing communities surrounding gaining bases can be many, including increased housing demand, increased demands for roads and utilities, and adequate schools.
These challenges can be formidable as communities may be faced with inadequate resources to address concerns in these areas as follows: Housing: If history is any indication, while some of the personnel transferring into a base may live on-base, the majority may not, as the military services are turning more to housing privatization. Installation officials at Fort Riley, Kansas, expressed concerns about the nearby availability of housing (within a 20-mile radius) to support the expected influx of military and civilian personnel and their families transferring to the base. For those installations where adequate housing is not available in the surrounding communities, existing housing privatization projects would need to be revised and expedited to provide for additional units. Fort Bliss, Texas, officials told us that they expect the need to accelerate their existing housing privatization efforts, but would require additional funds to do so. Currently, housing privatization has taken place or is in the process of taking place at several of these installations, and similar efforts may be needed there as well. Schools: Effects on the bases gaining the most personnel under BRAC vary depending on whether dependents attend schools operated on base by DOD (Fort Benning, Fort Bragg, and Marine Corps Base Quantico, as shown in table 6) or schools operated by local educational agencies. We recently reported on challenges likely to be faced by both DOD-operated schools and those operated by local educational agencies in the post-BRAC environment at these and other locations. During our recent visits to selected bases affected by the BRAC recommendations, installation officials told us that while local educational authorities should be able to absorb additional students into their school systems, they are more concerned about a potential shortage of teachers. Another concern is that makeshift trailers or temporary modular facilities might be used.
For example, while Kings Bay, Georgia, officials told us that the local school system should be able to accommodate the increase in students, it may need to resort to the use of portable classrooms. All installations that are expected to gain more than 2,000 personnel have locally administered school systems, with the exceptions of Fort Benning, Fort Bragg, and Marine Corps Base Quantico, which have DOD-administered school systems. If additional capacity is required at these three locations, additional military construction funds would likely be needed. Other infrastructure: Installation officials we spoke to also expressed some concern about the increased demand for various community services, such as health care, transportation, and utilities, to accommodate personnel increases. Fort Carson, Colorado, officials told us that with its expected personnel increases, the local community will need more TRICARE providers to meet the expected demand. In other cases, such as at Fort Belvoir, Virginia, discussion has ensued regarding the need for increased mass transit capability, which may involve requests for millions of dollars in federal grant assistance. As previously noted, it is likely that addressing these concerns will increase federal government expenditures that are not included in the BRAC cost and savings analyses. We also identified several candidate recommendations that were presented by the military services or joint cross-service groups to the IEC—DOD’s senior BRAC leadership group—that were substantially revised or deleted from further consideration during the last few weeks of the BRAC selection process. In aggregate, based on projected savings, these actions reduced the overall potential for estimated net annual recurring savings by nearly $500 million and estimated 20-year net present value savings by over $4.8 billion, as shown in table 7. Each of the cases highlighted in the table is described in additional detail below.
The educational and training group proposed to privatize graduate education, which enabled the Navy to recommend the closure of the Naval Postgraduate School, Monterey, California. The proposed closure supported DOD’s draft transformational option to privatize graduate-level education. Navy officials, however, stated that they believed professional military education was more important than ever given the world climate. During the IEC deliberations, Navy officials expressed concern about the loss of such a unique graduate military education facility and the effect on international students who participate in the school’s programs. Further, in the IEC meeting the Navy stated its belief that all education recommendations should be withdrawn because education is a core competency of the department and relying on the private sector to fulfill that requirement is too risky. The IEC agreed and disapproved the recommendation. The Medical Joint Cross-Service Group recommended that the Uniformed Services University of the Health Sciences associated with the National Naval Medical Center in Bethesda, Maryland, be closed, citing that educating physicians at the site was more costly than alternative scholarship programs (about triple the cost) and that the department could rely on civilian universities to educate military physicians. We also reported previously that the university is a more costly way to educate military physicians. The IEC subsequently disapproved the recommendation, citing that education is a core competency for the department, and therefore it was considered too risky to rely on the private sector to provide this function. Also, a DOD official indicated that, with the recommended action to realign Walter Reed Army Medical Center to Bethesda, Maryland, it would be highly desirable to have a military medical college associated with this medical facility in order for it to be a world-class medical center.
The Technical Joint Cross-Service Group, through the Army, proposed that the Natick Soldier Systems Center, Massachusetts, be closed and technical functions relocated to Aberdeen Proving Ground, Maryland, to create an integrated command, control, communications, and computers, intelligence, surveillance, and reconnaissance center. In its presentation to the IEC, the Army noted that the cost for this recommendation was high, but it would generate greater efficiencies and faster transition from research and development through the acquisition and fielding phases of the technology. Although the ISG initially raised no concerns and approved the recommendation, the IEC disapproved it in the last week of the BRAC selection process, citing the high cost of the recommendation. The closure of the Adelphi Laboratory Center, Maryland, was originally part of the recommendation to close Fort Monmouth, New Jersey, and, along with Natick Soldier Systems Center, was part of the Army’s plan for an integrated command, control, communications, and computers, intelligence, surveillance, and reconnaissance center. An Army official told us that, as with the closure of Natick, no concerns were originally raised and the recommendation was approved by the ISG, but the IEC later removed it from the recommendation that includes the closure of Fort Monmouth because of high cost. The proposed closure of Carlisle Barracks, Pennsylvania—home of the Army War College—was initiated by the Education and Training Joint Cross-Service Group and was aimed at creating synergy between the college and Army’s Command and General Staff College at Fort Leavenworth, Kansas. The IEC approved the proposed recommendation when it was initially briefed, but later rejected it, based on the Army’s argument that among other things, the Army War College’s proximity to Washington, D.C., provides access to key national and international policymakers and senior military and civilian leaders within DOD. 
The Education and Training Joint Cross-Service Group recommended the closure of the Air Force Institute of Technology at Wright-Patterson Air Force Base, Ohio. The group recommended that graduate-level education be provided by the private sector and that all other functions of the institute be relocated to Maxwell Air Force Base, Alabama. However, the IEC disapproved the recommendation based on the risk involved in relying on the private sector for education requirements, given that education is a core competency of the department. The Industrial Joint Cross-Service Group recommended transferring the workload of the Marine Corps’ depot maintenance facility in Barstow, California, which enabled the Department of the Navy to recommend closure of the Marine Corps Logistics Base. The Marine Corps raised concerns over the impact that the closure would have on Marine Corps deployments from the West Coast. The IEC decided to downsize the base and retain the depot, citing the Marine Corps’ concerns. While the Navy recommended closure of the Naval Air Station Brunswick, Maine, the IEC revised this to a realignment. Navy officials stated that the senior Navy leadership had been reluctant to give up the Navy’s remaining air station in the Northeast region, but found the potential savings significant enough to recommend closure. Navy officials stated that the IEC relied on military judgment to retain access to an airfield in the Northeast. Nonetheless, all aircraft and associated personnel, equipment, and support as well as the aviation intermediate maintenance capability will be relocated to another Navy base. The Navy is maintaining its cold weather-oriented Survival, Evasion, Resistance and Escape School, a Navy Reserve Center, and other small units at the air station. While the Air Force had proposed to close Grand Forks Air Force Base, North Dakota, the IEC revised this to a realignment a week before OSD released its recommendations. 
The Air Force reported in its submission to the BRAC Commission that over 80 percent of the base’s personnel are expected to be eliminated or realigned under the revised proposal. The revision to keep the base open was made based on military judgment to keep a strategic presence in the north central United States, with a possible unmanned aerial vehicle mission for the base. Even though Grand Forks Air Force Base was retained for strategic reasons, Minot Air Force Base is also located in North Dakota and is not affected by any BRAC recommendation. The closure of Rome Laboratory, New York, was originally part of a Technical Joint Cross-Service Group recommendation to consolidate the Defense Research Laboratories. No concerns were originally raised about the closure, and it was approved by the IEC. However, the IEC subsequently decided to realign rather than close the laboratory to address strategic presence and cost concerns. The realignment of Rome has a higher 20-year net present value savings than the closure proposal because the closure would have required more military construction and transfers of military and civilian personnel and equipment. While we believe DOD’s overall recommendations, if approved and implemented, would produce savings, there are clear limitations associated with the projected savings, such as the lack of military end-strength reductions and uncertainties associated with other savings estimates. DOD’s recommendations would provide net reductions in space and plant replacement value, which would reduce infrastructure costs once up-front investment costs have been recovered, but the extent to which some projected space reductions will be realized is unclear. Other DOD savings estimates are based on what might be broadly termed business process reengineering efforts and other actions, where savings appear likely, but the magnitude of savings has not been validated and much will depend on how the recommended actions are implemented.
Nevertheless, the savings could prove difficult to track over time. As a result, DOD’s projections may create a false sense of the magnitude of the savings, with fewer resources available for force modernization and other needs than might be anticipated, and there may be the potential for premature budget reductions. Given problems in tracking savings from previous BRAC rounds, and the large volume of BRAC actions this round that are more oriented to realignments and business process reengineering than closures, we believe it is of paramount importance that DOD put in place a process to track and periodically update its savings estimates. Despite a fundamentally sound overall process, we identified numerous issues regarding DOD’s list of recommendations that may warrant further attention by the BRAC Commission, as noted in this report and appendixes III through XII. These include recommendations having lengthy payback periods, some with limited savings relative to investment costs, and potential implementation difficulties. Given the large number of such items for the Commission’s consideration, we are not addressing them as individual recommendations but simply referring our report in its entirety for the Commission’s consideration. We recommend that the Secretary of Defense take appropriate steps to establish mechanisms for tracking and periodically updating savings estimates in implementing individual recommendations, with emphasis on both savings related to the more traditional realignment and closure actions and those related more to business process reengineering. Cognizant officials of the military services and joint cross-service groups reviewed drafts of the report, providing us with informal comments and permitting us to make technical changes, as appropriate, to enhance the accuracy and completeness of the report.
Subsequently, we similarly provided complete drafts of the report to cognizant OSD officials, obtaining and incorporating their comments as appropriate. In providing oral comments on a draft of this report, the Deputy Under Secretary of Defense for Installations and Environment concurred with our recommendation. We are sending copies of this report to Members of Congress; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and the Chairman, Defense Base Closure and Realignment Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-5581 or holmanb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XVII. Prior to the release of the Department of Defense’s (DOD) base realignment and closure (BRAC) recommendations on May 13, 2005, we monitored the BRAC process in a real-time environment beginning in October 2003. We sought to assure ourselves that DOD followed an objective and consistently applied process in which we could observe logical decision making leading to defensible and well-documented proposed closure and realignment recommendations. During this period, we abided by an agreement with DOD to not disclose details of the process due to the sensitivity of the information. Following the release of the recommendations, we continued our analyses of the process and recommendations. 
With the unprecedented large number of recommendations and the finalization of many of these occurring in the final weeks of the process, along with the limited time available for us to report our results following DOD’s May 13, 2005, release of the recommendations, we were not able to review all recommendations in detail. We focused more of our attention on cross-cutting issues than on implementation issues of individual recommendations, but did review individual recommendations as time permitted. Further, because of time constraints, we had only limited opportunities to gain further insight into some of the recommendations from officials at bases affected by the recommendations. We performed our work primarily at the Office of the Secretary of Defense (OSD), the military services’ base closure offices, and the offices of seven joint cross-service groups that were established by OSD to develop cross-service recommendations. While we did not attend deliberative meetings, we had access to minutes of meetings and relevant documentation and met periodically with key staff and senior leadership to gain an understanding of each phase of the process and to provide them with the opportunity to address our concerns as the process was unfolding. We also visited selected bases following the public disclosure of the Secretary’s recommendations to gain further insights into potential issues regarding specific recommendations. Those bases included the Anniston Army Depot, Alabama; Fort Bliss, Texas; Fort Carson, Colorado; Fort Sam Houston, Texas; Fort Lewis, Washington; Fort Riley, Kansas; Lackland Air Force Base, Texas; McChord Air Force Base, Washington; Marine Corps Air Station Cherry Point, North Carolina; Naval Shipyard Portsmouth, Maine; Naval Submarine Base Kings Bay, Georgia; Naval Submarine Base New London, Connecticut; and Red River Army Depot, Texas. We also met with officials of the U.S. 
Coast Guard to discuss the impact of BRAC actions on their operations since they are tenants on several bases recommended for closure or realignment. We relied on DOD’s Office of the Inspector General, Army Audit Agency, Naval Audit Service, and Air Force Audit Agency to validate the data used by the military services and joint cross-service groups in their decision-making processes. We met with staff of these audit agencies periodically to discuss the results of their work as well as to observe their data validation efforts at selected locations across the country. The DOD Inspector General and service audit agencies issued reports that generally concluded that the extensive amount of data used as the basis for BRAC decisions was sufficiently valid and accurate for the purposes intended. In addition, with limited exceptions, these reports did not identify any material issues that would impede a BRAC recommendation. Where questions existed, we made further assessments and were able to satisfy ourselves that issues raised would have limited, if any, impact on the department’s recommendations. Based on the audit agencies’ extensive validation efforts and our observation of their work, we believe the data are sufficiently reliable for the purposes of this report. To determine the extent to which DOD achieved its BRAC goals, we interviewed key officials and collected and analyzed relevant documentation generated by OSD, the military departments, and the joint cross-service groups. We reviewed the Secretary of Defense’s November 2002 memorandum that initiated the 2005 BRAC process and highlighted DOD’s goals and obtained DOD officials’ views on the degree to which the goals were accomplished. With respect to DOD’s goal of reducing excess capacity, we initially reviewed the capacity analysis reports of the services and joint cross-service groups to gain insight into the relative amounts of excess capacity within the department. 
We subsequently reviewed major recommendations to determine the extent to which these recommended actions would reduce infrastructure and excess capacity. In this regard, we also assessed the changes in the overall defense infrastructure’s plant replacement value—a measure used by the department to determine the cost to replace an existing facility with a facility of the same size at the same location, using today’s standards—by reviewing supporting documentation for the recommendations. We also analyzed the aggregated estimated costs and savings associated with reducing DOD’s unnecessary infrastructure, as depicted in the Cost of Base Realignment Actions (COBRA) analyses for the 222 recommendations proposed by the department, and compared these estimates with similar data from the prior BRAC rounds to determine similarities and differences in sources of costs and savings and thereby identify potential areas for further review. With respect to DOD’s costs and savings estimates, we examined selected supporting documentation to determine the basis for the estimates and identified the key elements that made up those estimates, such as base operating support, personnel compensation, and recapitalization of facilities. We also performed a qualitative analysis of DOD’s performance in addressing its other BRAC goals—transforming the infrastructure and fostering jointness—by examining DOD’s proposed recommendations and seeking views from key officials on the relative success of achieving these initiatives. We also compared the justification narratives supporting individual recommendations for closures and realignments against draft transformation options developed by the department, although not formally adopted, that were nonetheless used by the individual military services and joint cross-service groups.
Our efforts in addressing this and other objectives were facilitated by remote access to selected automated databases and tracking systems, which gave us near real-time access to relevant briefings and other documents, permitting us to broadly track the evolution of the BRAC process and identify issues for further consideration. To address whether DOD’s selection process for developing recommendations was logical and reasoned, we focused on key aspects of the BRAC process, including capacity and military value analyses. In doing so, we sought to determine whether DOD’s selection process was objective and in compliance with key considerations of BRAC legislation. Our monitoring of the process from the start permitted us to assess the extent to which the process was logical, sequential, reasoned, and well documented, and whether a logical and sequential flow existed among all phases of DOD’s selection process, from the point at which data were collected and analyzed through the compilation of the final recommendations. We reviewed the services’ determinations of which installations to consider in the BRAC process and analyzed the services’ and joint cross-service groups’ excess capacity analyses and military value evaluation plans and analyses to determine if they were developed in a reasoned fashion and supported by appropriate documentation. In reviewing military value analyses, we reviewed specific attributes established by the services and joint cross-service groups and examined the linkage between the groups’ methodologies and the military value selection criteria (i.e., criteria 1 through 4) to determine if these mandated selection criteria were addressed. Regarding the development of recommendations, our focus was to determine whether the recommendations were developed in a logical and reasoned manner.
We reviewed, among other things, the extent to which the services and joint cross-service groups (1) considered various alternative proposals for closure or realignment, (2) assessed proposed recommendations using military value as the predominant decision-making factor, and (3) considered the remaining four selection criteria as mandated by law. To address issues regarding DOD’s recommendations, we focused more of our attention on cross-cutting issues than on implementation issues of individual recommendations, but did review individual recommendations as time permitted. We reviewed recommendation justification packages that included particulars on the benefits of implementing the recommendations from an operational perspective, the estimated costs and savings associated with implementing the recommendations, and their degree of conformity to the mandated selection criteria. We discussed perceived benefits with key officials and reviewed appropriate supporting documentation. We also examined financial aspects of the recommended actions, including expected up-front investment costs to implement the actions, length of payback periods, net present value savings or costs over a 20-year period, and annual recurring savings or costs. In examining the expected costs and savings as generated by DOD’s COBRA model, we further examined assumptions and specific calculations regarding specific recommendations to determine the relative reasonableness of the estimates, given the data available to the services and the joint cross-service groups using the COBRA model. Further, we examined and discussed with DOD officials the economic and community impact of selected closure and realignment actions, including both the adverse impacts associated with closing bases and the challenges facing bases and surrounding communities that stand to receive large influxes of military personnel, civilian personnel, or both.
Additionally, we reviewed potential recommendations that were approved by either the services or joint cross-service groups but ultimately rejected by senior leadership, the Infrastructure Executive Council, during the last few weeks of the BRAC process. We examined the merits of these proposals as presented by the services or joint cross-service groups in terms of addressing DOD’s BRAC goals. We further reviewed the rationale offered by senior leadership in its decisions to reject or substantially revise the offered proposals. Because of time limitations and the complexities introduced by DOD in weaving together the unprecedented 837 closure and realignment actions across the country into 222 recommendations, we focused more on evaluating major issues affecting more than one recommendation than on implementation issues of individual recommendations. However, as time permitted, we did visit several selected installations, as noted above, to better gauge the operational and economic impact of the proposed recommendations. Installations visited were selected on a judgment basis because of our desire to have additional information on issues of concern, such as those related to costs and savings, potential operational implications, and potential economic impact. They included a number of bases with industrial-type activities because of concerns in prior rounds about how well the BRAC process and the COBRA model deal with such issues, and because other aspects of those facilities permitted us to address other issues of concern. We conducted our work from October 2003, as DOD’s process was beginning, through June 2005, shortly after the Secretary of Defense announced his proposed base closures and realignments, in accordance with generally accepted government auditing standards. The following terms were used by DOD during the 2005 BRAC process.
Annual recurring savings: Savings that are expected to occur annually after the costs of implementing a BRAC action have been offset by savings.

Candidate recommendation: A scenario that a joint cross-service group or military department has formally analyzed against all eight selection criteria and which it recommends to the Infrastructure Steering Group and Infrastructure Executive Council, respectively, for approval by the Secretary of Defense. A joint cross-service group candidate recommendation must be approved by the Infrastructure Steering Group, the Infrastructure Executive Council, and the Secretary of Defense before it becomes a DOD recommendation. A military department candidate recommendation must be approved by the Infrastructure Executive Council and the Secretary of Defense before it becomes a DOD recommendation.

Certified data: P.L. 101-510, section 2903(c)(5) requires specified DOD personnel to certify to the best of their knowledge and belief that information provided to the Secretary of Defense or the 2005 Defense Base Closure and Realignment Commission concerning the realignment or closure of a military installation is accurate and complete.

Closure: All missions of the installation have ceased or have been relocated. All personnel positions (military, civilian, and contractor) have either been eliminated or relocated, except for personnel required for caretaking, conducting any ongoing environmental restoration, and disposing of base property.

COBRA: An analytical tool used to calculate the costs, savings, and return on investment of proposed realignment and closure actions.

Force structure plan: Numbers, size, and composition of the units that comprise U.S. defense forces, for example, divisions, air wings, aircraft, tanks, and so forth.

Infrastructure Executive Council (IEC): One of two senior groups established by the Secretary of Defense to oversee and operate the BRAC 2005 process.
The IEC, chaired by the Deputy Secretary of Defense and composed of the Secretaries of the military departments and their service chiefs, the Chairman of the Joint Chiefs of Staff, and the Under Secretary of Defense (Acquisition, Technology, and Logistics), was the policy-making and oversight body for the entire BRAC 2005 process.

Infrastructure Steering Group (ISG): The subordinate of the two senior groups established by the Secretary of Defense to oversee the BRAC 2005 process. The ISG, chaired by the Under Secretary of Defense (Acquisition, Technology, and Logistics) and composed of the Vice Chairman of the Joint Chiefs of Staff, the service Vice Chiefs, the Deputy Under Secretary of Defense (Installations and Environment), and the military department Assistant Secretaries (Installations and Environment), provided oversight to joint cross-service group analyses of common business and support functions and ensured the integration of that process with the military departments’ and defense agencies’ specific analyses of all other functions.

Losing installation: An installation from which missions, units, or activities would cease or be relocated pursuant to a closure or realignment recommendation. An installation can be a losing installation for one recommendation and a receiving installation for a different recommendation.

Military installation: A base, camp, post, station, yard, center, homeport facility for any ship, or other activity under the jurisdiction of the Department of Defense, including any leased facility. The term does not include any facility used primarily for civil works, river and harbor projects, flood control, or other projects not under the primary jurisdiction or control of the Department of Defense.
Military value: Refers to one or more of the first four BRAC selection criteria, which are collectively referred to as the military value criteria and are expected to receive priority consideration in the analytical process that results in recommendations for the closure or realignment of military installations within the United States.

Net present value: In the context of BRAC, net present value takes into account the time value of money in calculating the value of future costs and savings.

Payback period: The time required for cumulative estimated savings to exceed the cumulative estimated costs incurred, in net present value terms, as a result of implementing BRAC actions.

Realignment: Includes any action that both reduces and relocates functions and civilian personnel positions, but does not include a reduction in force resulting from workload adjustments, reduced personnel or funding levels, or skill imbalances.

Receiving installation: An installation to which missions, units, or activities would be relocated pursuant to a closure or realignment recommendation. An installation can be a receiving installation for one recommendation and a losing installation for a different recommendation.

Scenario: A proposal that has been declared for formal analysis by a military department or joint cross-service group deliberative body. The content of a scenario is the same as the content of a proposal; the only difference is that it has been declared for analysis by a deliberative body. Once declared, a scenario was registered with the ISG by inputting it into the ISG BRAC Scenario Tracking Tool.

Surge: A term incorporated in one of the military value selection criteria for the 2005 BRAC round: “the ability to accommodate contingency, mobilization, surge, and future total force requirements.” The term is not otherwise defined, and its application can vary by specific operational or support categories.
Transformation: According to the department’s April 2003 Transformation Planning Guidance document, transformation is “a process that shapes the changing nature of military competition and cooperation through new combinations of concepts, capabilities, people, and organizations that exploit our nation’s advantages and protect against our asymmetric vulnerabilities to sustain our strategic position, which helps underpin peace and stability in the world.”

The Army generally followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing its active component installations and followed a separate, parallel process for its reserve components installations. Compared to prior rounds, the Army’s process produced a record 56 recommendations, 44 of them directed to its reserve components and 12 to the active component, recognizing that many of the individual recommendations contain multiple closure and realignment actions. The 44 reserve components recommendations involved realignment or closure actions that could have been approved outside of the BRAC process, but the Army and DOD decided to include them as part of DOD’s efforts to aid transformation through the base realignment and closure process. Unlike the other military services and joint cross-service groups, the Army’s recommendations, while producing estimated net annual recurring savings of nearly $500 million after 2011, are not expected to achieve overall net savings over the 20-year period typically used to measure net savings from BRAC actions. Over this 20-year period, the Army expects to incur a net present value cost of over $3 billion, due primarily to the very large up-front costs in a few recommendations that are necessary to return forces to the United States under DOD’s Integrated Global Presence and Basing Strategy.
However, the financial outlook for the Army improves if joint cross-service recommendations involving Army bases are considered—these separately reported actions are expected to produce $10.7 billion in net present value savings over a 20-year period. Payback periods—the time required for savings to offset closure costs—are projected to average 2.5 years for the active component recommendations, ranging from immediate payback to no payback at all, and to average 12.3 years for the reserve components, ranging from immediate payback to more than 100 years. We believe some of the Army’s recommendations may warrant additional attention from the BRAC Commission due to the likelihood of overstated savings projections associated with military personnel eliminations, uncertainties regarding overseas restationing of forces to the United States and other ongoing force structure changes, challenges facing communities surrounding bases that are gaining large numbers of personnel, the bundling of various recommendations, various unknowns associated with implementing the reserve components’ recommendations, and issues regarding the proposed closure of the Red River Army Depot in Texas. The Army Audit Agency, which performed audits of the data used in the process, concluded that the data were sufficiently reliable for use in BRAC. The Army established a Senior Review Group, headed by the Vice Chief of Staff of the Army and the Under Secretary of the Army and comprising senior Army military and civilian personnel, that was responsible for assessing potential recommendations for consideration by the Secretary of the Army, who in turn was to forward recommended actions to the Infrastructure Executive Council (IEC) for approval. This group was supported by The Army Basing Study Group, headed by the Deputy Assistant Secretary of the Army for Infrastructure Analysis, which was responsible for collecting and analyzing data and developing recommendations.
In addition, subject matter experts and representatives from the Army’s major commands provided expertise and input throughout the BRAC process. The Army’s broadly stated goals for BRAC 2005 were to enhance the capabilities of a transforming Army while aligning its infrastructure to meet its post-Cold War force structure and eliminating excess physical capacity to provide ready combat power to Combatant Commanders. Some key planning and strategy documents provided guidance in the pursuit of these goals. The Army Stationing Strategy, for example, provided an overall vision, principles, and goals relative to future basing decisions, while DOD’s Strategic Planning Guidance helped to define objectives regarding soldiers’ well-being. In further defining its goals, the Army identified the capabilities and missions that its installations require to support its forces in the future. With these needs in mind, the Army set out numerous objectives, such as: locate Army forces and materiel at critical installations; relocate forces in accordance with the Integrated Global Presence and Basing Strategy; reshape installations to support home station mobilization and demobilization; reshape reserve components infrastructure to improve the efficiency of mobilization and demobilization; and provide sufficient area and facilities (with varied terrain, climate, and airspace) to support institutional training, combat development, and doctrine development. The Army’s BRAC analysis included a review of 87 active component installations and 10 leased facilities. A separate effort was undertaken to review over 4,000 Army National Guard and Army Reserve facilities to explore infrastructure consolidation opportunities that would afford the reserve components better facilities and enhance, among other things, training and operations.
Army officials indicated that differences in the objectives and the nature of the facilities associated with the active and reserve components infrastructure made it impractical to use identical review and decision-making processes. As with previous BRAC rounds, capacity and military value analyses provided the starting point for the Army’s decision-making process. A key focus in the Army’s efforts was to preserve large maneuver areas to ensure that future training requirements could be met and to relocate missions and personnel from small, single-function installations to larger, multi-function installations. The Army Audit Agency played an important role in helping to ensure data accuracy through extensive audits of data gathered at various locations. The Army’s BRAC process was made more challenging by two ongoing force structure and basing initiatives—the rebasing of thousands of Army forces and their families to the United States as a result of the Integrated Global Presence and Basing Strategy and the restructuring of the Army’s forces under its modularity program—that were to be integrated into the BRAC process. The Army initiated its capacity analysis by collecting capacity-related data for its active duty installations (e.g., buildings, land) based on 28 capacity metrics, such as buildable acres, maneuver areas, and instructional facilities. In calculating capacity excesses or shortages through a comparison of the physical capacity data with requirements, the Army considered a surge capability to ensure that sufficient capacity existed to meet unforeseen military contingencies, future threats, and future needs as outlined in DOD’s 20-year force structure plan. The Army’s surge analysis also reinforced the importance of preserving assets such as maneuver land that would be difficult to reconstitute if eliminated. Table 8 shows selected Army capacity results for 7 of 12 mission areas, as presented in the Army’s BRAC 2005 report.
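The excess-or-shortage comparison described above reduces to simple arithmetic. A minimal sketch, using hypothetical metric values and a simple surge multiplier (the Army’s actual 28 metrics and surge analysis were more detailed):

```python
# Sketch of an excess-capacity calculation for a single capacity metric.
# All values are hypothetical; a surge_factor > 1.0 inflates the
# requirement to reserve capacity for contingencies.

def excess_capacity_pct(capacity, requirement, surge_factor=1.0):
    """Return percentage excess (positive) or shortage (negative)
    relative to the surge-adjusted requirement."""
    adjusted = requirement * surge_factor
    return (capacity - adjusted) / adjusted * 100.0

# e.g., 1.2 million sq ft of administrative space measured against a
# 1.0 million sq ft requirement with a 10 percent surge allowance:
pct = excess_capacity_pct(1_200_000, 1_000_000, surge_factor=1.10)
```

A positive result flags potential excess in a mission area; a shortage, as some areas in table 8 showed, comes out negative.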
As shown in table 8, some areas, such as armaments production and ammunition storage, had excess capacity ranging from about 5 percent to about 220 percent, while other areas had shortages. Further, the Army reported that it had a service-wide excess of over 1.5 million square feet of general administrative space even though 35 installations reported shortages. While the Army’s BRAC report did not indicate the overall impact the Army’s proposed closure and realignment recommendations would have on reducing excess capacity, Army officials projected that the proposed actions would reduce excess general administrative space by over 1 million square feet while realigning Army units to better match the remaining capacity. While the overall capacity excesses and shortages, identified by installation, provided insights for potential closures or realignments, the Army subsequently conducted more detailed capacity analyses that identified the types of facilities and training lands that were required to support various units (e.g., light and heavy maneuver brigades, small and large training schools). In this manner, the Army had the ability to determine which installations could handle additional missions and units and what infrastructure improvements and additional military construction might be required to support those units. The Army did not perform a similar capacity assessment of the reserve components’ facilities because of the nature of their facilities and differing objectives, but it did collect and assess related data, for example, on the condition and location of facilities, as well as on expected costs, such as construction and force protection upgrades, that might be necessary to provide for viable reserve consolidation opportunities. Prior to collecting these data, the Army sought interest from the states’ National Guard Adjutants General in having units in each state participate in such efforts on a voluntary basis.
The Army’s military value analysis focused on a set of 40 attributes, such as maneuver land and housing availability for its soldiers and dependents, that are characteristics the Army considered desirable for its installations to meet Army needs. Attributes with less flexibility for change, such as the availability of maneuver land or direct fire ranges, were among those most highly valued in developing a scoring plan for evaluating the military value of each of the Army’s installations. According to Army officials, this reflected their view of the criticality of possessing adequate acreage to conduct unit training, particularly in view of the expected increase in the number of brigades and the return of various forces from overseas locations. The Army’s military value attributes also reflected consideration of its role in supporting the global war on terrorism, homeland defense, and transformation. Through a process of weighting each of the Army’s attributes, the Army derived relative weights for the four legislatively mandated military value selection criteria. As shown in table 9, three of the four criteria had relatively higher weights than the remaining criterion dealing with cost and manpower implications. Embedded within these criteria was a key focus on the need for availability of existing land and facilities for expansion purposes to address the needs cited in those specific criteria. In this regard, the Army placed high value on these criteria as a hedge against uncertain future requirements and to ensure that it did not dispose of assets, such as large tracts of land, that would be difficult to reacquire. In performing its military value assessment, the Army assessed each active duty installation and ranked each of them across the four military value selection criteria to more fully evaluate the potential for realignment and closure actions.
This contrasted with the approach the Army used in the 1995 BRAC round, when it developed a military value ranking for individual installations under one of 13 mission categories, which made it more difficult to assess an installation for use in a different mission area. For this round, the Army assessed the military value of each of its installations based on a common framework that linked attributes, metrics, and data call questions to military value, as shown in figure 9. During its assessment, the Army stressed multi-function capabilities for installations. To account for the unique capabilities that some Army single-function installations provided, the Army applied military judgment to modify the initial ranking of its installations to better identify installations that the Army believed were best suited to meet its current and future capabilities. For example, the Tripler Army Medical Center in Hawaii, which initially ranked low in military value, is DOD’s only medical center of significant size in the Pacific and therefore was retained for strategic reasons. Ultimately, the Army moved nine installations higher in the list based on their unique capabilities. Consequently, those installations with a lower military value ranking became more vulnerable to closure or realignment actions. With respect to its reserve components, the Army did not perform a military value rank-ordering of these various installations across the country, but instead assessed the relative military value that could be obtained by consolidating various facilities into a joint facility in specific geographical locales to support, among other things, the reserve components’ training, recruiting, and retention efforts.
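The weighted-attribute scoring described above can be illustrated with a minimal sketch. The attribute names, weights, and normalized scores below are hypothetical, not the Army’s actual 40 attributes or weighting plan:

```python
# Illustrative weighted military value score. Each attribute is assumed
# to be normalized to [0, 1]; the weights are hypothetical and sum to 1.

def military_value_score(attributes, weights):
    """Weighted sum of normalized attribute scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * attributes[name] for name in weights)

# Heavily weighting attributes that are hard to change (e.g., maneuver land):
weights = {"maneuver_land": 0.40, "ranges": 0.35, "housing": 0.25}
base_a = {"maneuver_land": 0.9, "ranges": 0.7, "housing": 0.5}
base_b = {"maneuver_land": 0.4, "ranges": 0.8, "housing": 0.9}

score_a = military_value_score(base_a, weights)
score_b = military_value_score(base_b, weights)
# base_a outranks base_b despite weaker housing, because maneuver land
# and ranges carry more weight.
```

Rankings produced this way were then adjusted by military judgment, as with the nine installations the Army moved higher in the list.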
Throughout the BRAC process, the Army Audit Agency advised the Army on the development and implementation of its internal control procedures and performed audits of the Army’s conduct of the process, including validation of the data and of the various models used to assist in decision making. During the capacity and military value data calls, the Army Audit Agency performed on-site audits of data collection efforts at various installations on a sample basis to validate the data being gathered. Instances of inaccurate data or inadequate source documentation identified during these audits were generally corrected by the Army. As a result, the auditors generally found the data to be sufficiently reliable for use in the BRAC process. The Army used the results of its capacity and military value analyses, along with the 20-year force structure plan, as the foundation for the development of hundreds of potential closure and realignment scenarios. Scenarios under consideration were refined using various models—primarily an optimization model and the Cost of Base Realignment Actions (COBRA) model—along with military judgment. The optimization model, using capacity data, military value scores, and other data, provided the Army with various competing, plausible alternatives associated with the restationing of various missions and forces within the infrastructure. The model provided for alternative scenarios and showed their impact on overall military value as functions were moved to higher ranked installations. The COBRA model, which was used by all military services and joint cross-service groups to address the fifth selection criterion regarding costs and savings, provided the Army with the relative cost and savings estimates of these various alternatives.
The Army further assessed the various scenarios in terms of the remaining selection criteria 6 through 8, regarding, respectively, the economic impact on communities affected by BRAC, the ability of the infrastructure within communities to support military missions, and the environmental impact of the BRAC actions. The Army used input from various DOD-generated models in assessing its scenarios against these criteria, which, while important and mandated by the BRAC legislation, played less of a role than military value. However, the Army considered these criteria in order to ensure that there were no insurmountable challenges that would derail the implementation of any particular scenario. In addition, they were used to differentiate between competing scenarios. For example, the Army determined the final stationing of its modular brigades based in part on its assessment of the environmental impact these brigades would have on the receiving installations. The Army also integrated into the overall process those scenarios that had been generated for the reserve components in the parallel process referred to previously. Those scenarios were developed through a series of meetings with state officials across the country. As with the active component, the reserve component scenarios were assessed using the COBRA model and other models. The Army also worked closely with the joint cross-service groups as they developed recommendations that affected Army installations. In some cases, the Army developed scenarios that were provided to the joint cross-service groups for further consideration. For example, the Army developed initial scenarios proposing to close three chemical demilitarization facilities, which were subsequently provided to the Industrial Joint Cross-Service Group, which ultimately developed and processed recommendations for these closures.
Alternatively, some scenarios that ultimately became Army recommendations were developed in conjunction with the joint cross-service groups. For example, the Industrial Joint Cross-Service Group’s scenario regarding the realignment of the depot maintenance workload out of the Red River Army Depot in Texas was instrumental in leading to an ultimate Army recommendation to close the depot. Similarly, the Education and Training Joint Cross-Service Group developed a scenario to realign the Army’s Armor Center and School from Fort Knox, Kentucky, to Fort Benning, Georgia, an action that was later folded into the Army’s broader realignment of Fort Knox. As the Army and cross-service group recommendations were being finalized, the Army held a series of meetings with the joint cross-service groups to ensure that all recommended actions involving Army installations were properly integrated and corresponding impacts were considered in their entirety. The Army produced 56 recommendations that were approved by DOD—6 closures of active component installations, 6 realignments of active component installations, and 44 recommendations consisting of multiple reserve components closure and realignment actions grouped by state or region. These recommendations, along with other Army-related recommendations produced by the joint cross-service groups, align, for the most part, with the Army’s objectives of reducing the number of primarily single-function, smaller installations and transforming the infrastructure to better meet current and expected future Army needs. Table 10 provides the financial implications of the Army’s recommendations.
As shown in table 10, the Army’s recommendations are expected to produce nearly $500 million in estimated net annual recurring savings beginning in 2012, but they carry a large 20-year net present value cost of about $3 billion, rather than the savings typically expected in that time frame; this is due primarily to very large up-front costs, nearly $10 billion in expected one-time costs, required to implement the recommendations. A few of the recommendations, particularly the one involving the redeployment of Army forces to the United States under DOD’s Integrated Global Presence and Basing Strategy, are responsible for the high costs and negative returns. The recommended closures of 6 active duty installations, which are largely installations of lower military value within the Army, have the greatest potential for savings, with a combined estimated net present value savings over the next 20 years of about $3.8 billion and payback periods of 6 years or less. Most of the expected savings from these recommendations are due to reductions in personnel costs and overhead (e.g., base operations support). Expected personnel savings from these 6 recommendations are driven by the elimination of nearly 3,500 personnel, of which nearly 25 percent, or over 800, are military. While 3 of the remaining 6 active duty base realignment recommendations shown in table 10 also produce savings, the other 3 recommendations account for more than $9.4 billion in 20-year net present value costs and will never pay back. The largest of these three recommendations involves the rebasing of Army forces to the United States from overseas locations. The Army projected that this realignment alone has a one-time cost of about $4 billion and annual recurring costs of almost $300 million and will never produce savings.
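The payback and net present value arithmetic behind figures like these can be sketched as follows; the cash flows and discount rate are hypothetical, not actual COBRA inputs or outputs:

```python
# Minimal net-present-value and payback sketch for a BRAC-style action:
# a large up-front cost followed by steady annual recurring savings.
# Figures are hypothetical ($ millions); COBRA's cost model is far more
# detailed.

def net_present_value(cash_flows, rate):
    """Discount annual net cash flows (savings positive, costs
    negative) back to year 0 and sum them."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_year(cash_flows, rate):
    """First year in which cumulative discounted savings offset
    cumulative discounted costs, or None if never within the horizon."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf / (1 + rate) ** t
        if cumulative > 0:
            return t
    return None

# Hypothetical action: $900M one-time cost, then $150M in annual
# recurring savings over a 20-year horizon, discounted at 3 percent.
flows = [-900.0] + [150.0] * 20
npv = net_present_value(flows, 0.03)   # positive: a net 20-year saving
year = payback_year(flows, 0.03)       # year in which savings catch up
```

An action whose recurring line is a cost rather than a saving, like the overseas rebasing realignment described above, never reaches a payback year.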
Army officials note that a contributory factor to these high costs is the fact that the Army could not claim the estimated savings that would accrue from the expected closure of the overseas installations and the departure of Army forces from those locations. The Army estimates that had these savings been accounted for in BRAC, the recommended actions would have produced substantial net savings rather than the costs indicated. We did not validate the Army’s savings estimates for the overseas closures, and it is not clear to us that sufficient information is available at this time to fully assess the total changes in overseas basing costs, since much of the detail regarding these plans has not been finalized. Further, we agree with DOD that it would not be appropriate for the Army to include these particular savings in BRAC, as BRAC provisions in existing legislation do not contemplate consideration of savings from closures or realignments that take place outside of the United States. With regard to the reserve components, the Army adopted 44 recommendations, which, taken as a whole, would provide a net present value savings of over $1.5 billion over the next 20 years but have an average payback period of over 12 years. Five of the recommendations involve the realignment of the Army Reserve’s command and control structure within five regional areas. The remaining recommendations realign reserve components facilities in 38 states and Puerto Rico by constructing 125 new armed forces reserve centers while closing 176 Army Reserve centers, with the understanding that various states would close 211 National Guard armories and centers. These closures represent about 10 percent of the over 4,000 existing Army reserve components’ facilities across the country.
Most of the Army’s projected savings associated with the reserve components’ recommendations result from reductions in personnel costs achieved by eliminating over 4,000 positions; notably, about 80 percent of these eliminations are military personnel. Time did not permit us to assess the operational impact of each recommendation, particularly recommendations that included multiple closure and realignment actions across multiple locations. However, we offer a number of broad-based observations about the proposed recommendations. Some recommendations may warrant additional attention from the BRAC Commission based primarily on issues associated with the projected savings from military personnel reductions, uncertainties regarding the rebasing of overseas forces and modularity, the potential impact of an expected increase in the use of training ranges, the impact on gaining communities, uncertainties regarding the reserve components recommendations, the bundling of various recommendations, and concerns over the transfer of workload from Red River Army Depot, Texas. Our analysis showed that about $450 million of the Army’s projected annual recurring savings from its recommended closure and realignment actions are based on claimed savings from eliminating military personnel. Army officials acknowledged that a large portion of their annual recurring savings were derived from military personnel eliminations but noted that the Army’s financial outlook improved if joint cross-service group recommendations involving Army bases are considered. Nevertheless, the Army does not plan to reduce its active or reserve component end strength in implementing these recommendations. According to Army officials, these personnel are being redistributed within the Army.
While we believe these personnel have the potential to provide a benefit to the Army in their new positions, this benefit represents savings only in the sense of potentially avoiding costs that might otherwise be incurred in increasing authorized end strength levels. It does not represent dollar savings that might be shifted to other appropriations to meet other priority needs, such as equipment modernization or improving remaining facilities, areas typically cited as likely beneficiaries of BRAC savings. Further, because DOD envisions BRAC savings in general to be used to partially fund up-front investment costs associated with implementing BRAC actions, the Army may be forced to find other sources of funding because military personnel savings are unlikely to be available for this purpose. The Commission may wish to consider this issue in evaluating the BRAC recommendations. Uncertainties over plans to realign thousands of soldiers and their families to the United States as a result of the Integrated Global Presence and Basing Strategy, as well as the Army’s modularity efforts to create new modular brigades, have the potential to change the expected costs and savings associated with the Army’s BRAC recommendations. The Army’s BRAC recommendations incorporate about 15,000 of the 47,000 Army personnel currently expected to return as a result of the global basing study. The Army also incorporated the stationing of five of the ten brigades being created under the Army’s modular restructuring effort. Estimated BRAC costs and savings are typically calculated on the basis of assumptions about specific units or missions that are expected to realign to specific installations in specific years. Changes to these assumptions can alter the costs and savings associated with the actions being undertaken. 
Existing Army plans for the return of overseas forces and modularity were the basis for the assumptions used to calculate estimated costs and savings and to determine potential impacts on the environment and communities surrounding the affected installations. However, our analysis identified several areas of uncertainty that could affect the assumptions contained in those recommendations. Army officials told us that DOD has been modifying, and is continuing to modify, its overseas restationing plans, even as the Army BRAC recommendations were being finalized. Because of BRAC reporting requirements, the Army had to finalize its recommendations before the overseas rebasing plans were finalized. Army officials indicated that the major overseas restationing actions included in the BRAC recommendations are expected to occur as currently envisioned. However, as plans continue to evolve, the specific details regarding the rebasing could be adjusted, with corresponding adjustments in costs and savings being required. In a May 2005 report, the Commission on Review of the Overseas Military Facility Structure of the United States recommended slowing down the Army’s entire overseas restationing process. If DOD heeds this recommendation, the timing of some planned restationing actions could be affected, with the attendant risk that BRAC closure or realignment actions would not be completed within the 6-year implementation period, ending in 2011, established by the BRAC legislation. Further, over half of the Army’s forces returning from overseas are expected to be folded into the new modular brigades being formed in the United States. Uncertainties over the timing of their return could also affect the costs and savings associated with those brigades. In March 2005 congressional testimony, we reported that the design configuration of the Army’s modular brigades had not been finalized at that time. 
In this regard, the Army is considering adding an additional combat battalion to each of its modular brigades and has not finalized the design of higher echelon and support units. Any such changes to the design used in deriving the cost and savings estimates, and in assessing the recommended actions’ potential impacts on the environment and communities, are likely to affect the estimates and may alter the potential impacts as well. The Commission may wish to ensure that it has the Army’s latest plans regarding the overseas rebasing and modularity efforts in reviewing the Army’s recommendations. The Army’s BRAC recommendations provide for the stationing of returning overseas forces and new modular brigades on existing Army installations. Our review of Army documentation shows that these installations are already facing environmental and encroachment issues that constrain their ability to meet unit training requirements. These issues raise concerns that currently constrained installations may face additional challenges and unexpected costs in meeting the training requirements of the additional forces the Army plans to station at them. As we reported in June 2005, several of the Army’s training ranges already face challenges resulting from inadequate maintenance and modernization and may require substantial additional investment in modernization to support the training requirements of the new brigades. Army officials stated that they reviewed their BRAC recommendations to ensure that there were no insurmountable environmental or encroachment obstacles. They also noted that their recommendations included costs for training range upgrades. However, we have not validated whether these costs will adequately address training range limitations. Further, we have concerns as to whether the Army will need to acquire additional training range land at existing bases that are already experiencing range limitations—a potential cost not identified in the current BRAC recommendations. 
Concerns over the ability of existing training ranges to meet training requirements are exacerbated by uncertainties over the final number and composition of the modular brigades as well as the potential for additional forces returning from overseas. Because of existing constraints on training ranges, the Army developed scenarios to examine the possibility of stationing operational Army units on other installations, including installations belonging to other military services and Army installations with considerable acreage, such as the Yuma Proving Ground in Arizona. The Army deemed none of these scenarios feasible, for various reasons; for example, the configuration of other service installations and their associated training ranges did not meet Army training requirements. For other scenarios, such as use of the Yuma Proving Ground, the lack of adequate infrastructure and the high military construction costs that would be required made them essentially infeasible. However, Army officials told us that should the Army decide to create an additional five modular brigades or bring additional forces back from overseas, it may become necessary to station these units at installations such as the Yuma Proving Ground, which has large tracts of land, because existing Army installations might not be able to support these additional units. The Commission may wish to review the Army’s plan for addressing training range issues and the potential need to acquire additional land to mitigate the challenges the Army likely faces from the probable increased use of its training ranges. Several of the Army’s recommendations involve relocating significant numbers of forces and their families to various installations, which raises concerns about the ability of local communities to adapt to these changes and absorb these personnel increases. For example, Fort Bliss, Texas, is expected to receive a net gain of over 11,000 military and civilian personnel. 
The full impact of such increases on surrounding communities, particularly on schools, housing, and other community infrastructure, is unclear at this time. According to Army officials, the Army’s analysis for the selection criterion regarding community impact (criterion seven) provided an overall assessment of the ability of local communities affected by a potential BRAC action to handle additional personnel and their families, including the identification of potential obstacles that could prevent a recommendation from being implemented. For example, in assessing the impact of the return of forces from overseas, the Army’s review of community infrastructure for Fort Bliss and Fort Riley indicated the importance of working with these communities to assess and implement housing and schooling requirements. However, the Army concluded that these issues did not represent impediments to implementing recommendations involving these bases. Addressing the challenges that these communities face may require significant investments, particularly with regard to available housing and schools, which would increase pressures for federal assistance from various agencies to help meet these needs. While such costs might be borne outside the defense budget to some extent, they would nevertheless represent additional costs to the federal government. These potential costs, although not required to be captured in DOD’s cost and savings analyses for the various recommended actions, could be substantial, given the number of Army installations with expected personnel gains. Army officials stated that they expect to resolve these issues during implementation and that, by staggering the movement of units to these installations, they will be able to reduce adverse impacts and enable communities to better prepare for the units’ arrival. Nevertheless, some communities may lack the infrastructure to easily absorb these forces. 
This could affect the timing of the movement of forces to these communities, which in turn could alter current BRAC cost and savings estimates from a governmentwide perspective. The Commission may want to review the Army’s plans for addressing these issues. We identified a number of uncertainties associated with the Army’s reserve component recommendations. Most of these recommendations, as detailed in the Army’s 2005 BRAC report, are contingent upon certain actions that have yet to take place or to be decided. For example, the Army expects to build 125 Armed Forces Reserve Centers, which are currently expected to be able to accommodate National Guard units as well as Army Reserve units and some reserve units from the other military services. However, the decision to relocate these National Guard units lies with state authorities. While the states with Guard units affected by BRAC recommendations have agreed, on a voluntary basis, to be included in the process, they can opt out at any time, creating uncertainty over future state actions and over the precision of current cost and savings estimates for these recommendations. Should state authorities decline to relocate some or all of these units, the costs and savings associated with these armed forces reserve centers could change. Some of the reserve component recommendations have other contingencies as well. For example, the recommendation for the Texas reserve components calls, in part, for an Armed Forces Reserve Center to be located in Amarillo, Texas, if the Army is able to acquire land suitable for the construction of facilities there. Many other recommendations contain similar contingencies. Should the land not be available, these recommendations, as well as the related cost and savings estimates, will need to be adjusted. 
While the Army’s reserve component recommendations as a whole are projected to generate more than $1.5 billion in net savings over a 20-year period if implemented, the uncertainties regarding some of the actions on which these recommendations rely could increase or decrease this estimate. The Commission may wish to seek clarification as to the status of these state-based actions and the potential consequences if some of those actions are not executed as currently planned. Most of the Army’s recommendations involve the bundling of multiple closure and realignment actions under one recommendation, which reduces the visibility of the estimated costs and savings, as well as the payback periods, of the individual actions embedded within the recommendation. While the Army produced only six recommendations for the realignment of its active component installations, most of these recommendations have several components. For example, one Army recommendation involves the realignment of the Armor Center and School from Fort Knox, Kentucky, to Fort Benning, Georgia; the activation of a new modular brigade at Fort Knox; the relocation of various combat service support and other units from Europe and Korea to the United States; and the relocation of a reserve training center from Fort McCoy, Wisconsin, to Fort Knox. Similarly, the Army packaged all of its proposed reserve component realignments and closures within a state into a single recommendation for that state. As a result, there may be components within a recommendation that have relatively high costs or long payback periods (or never produce savings) even though the recommendation taken as a whole appears to have relatively high savings or a short payback period. The Commission may therefore wish to request and examine information on the costs and savings associated with these individual actions. 
The following examples highlight these potential issues: The Army’s maneuver training recommendation would realign Fort Knox by incorporating several elements of scenarios the Army and the Education and Training Joint Cross-Service Group developed over time. The DOD-approved recommendation includes the stationing of a new modular brigade at Fort Knox. However, the Army’s original scenario for realigning Fort Knox, which did not include stationing the modular brigade or realigning the Armor Center and School, would have generated a 20-year net savings of almost $225 million. The Education and Training Joint Cross-Service Group’s related scenario involving the relocation of the Armor Center and School from Fort Knox to Fort Benning would have generated a 20-year net savings of over $1.3 billion. The Army’s approved recommendation combined most of the elements of these two scenarios but generated 20-year savings of about $950 million, or about $500 million less than one might have expected. The difference may be largely attributed to the inclusion of the new modular brigade in the Army’s final recommendation. The Army’s reserve components’ transformation recommendation in Arizona is expected to have a payback period of 5 years and generate a net savings of almost $52 million over a 20-year period. However, one action contained within this recommendation involves the creation of an Armed Forces Reserve Center at the Buckeye Training Site, Arizona. A previous scenario, which focused solely on this action, indicated that the Army would incur a net cost of almost $9 million over the 20-year period and that it would take more than 100 years to produce savings. By bundling this action with others, the net costs of this action are obscured by the net savings of the recommendation’s other actions. 
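The bundling concern in these examples is essentially arithmetic: a net-cost action can be hidden inside a net-savings bundle. The sketch below uses the rounded 20-year figures cited above; the simple summation is our illustration, not the Army’s costing methodology.

```python
# Rough illustration of how bundling obscures individual actions.
# Figures are rounded 20-year net savings from the report text ($M);
# the simple netting below is illustrative, not the official COBRA costing.

fort_knox_realignment = 225       # original Army scenario for Fort Knox
armor_school_to_benning = 1_300   # joint cross-service group scenario
expected_combined = fort_knox_realignment + armor_school_to_benning
approved_recommendation = 950     # bundled recommendation as approved

shortfall = expected_combined - approved_recommendation
print(f"Expected if simply combined: ${expected_combined}M")
print(f"Approved bundled savings:    ${approved_recommendation}M")
print(f"Difference (largely the new modular brigade): ${shortfall}M")

# Arizona example: a net-cost action hidden inside a net-savings bundle.
arizona_bundle_savings = 52   # net savings of the whole bundle over 20 years
buckeye_site_cost = -9        # net cost of the Buckeye Training Site action
other_actions = arizona_bundle_savings - buckeye_site_cost
print(f"Other Arizona actions must net about ${other_actions}M to offset Buckeye")
```

As the netting shows, a reviewer looking only at bundle-level totals would never see the $9 million net-cost component, which is why unbundled cost and savings data may be worth requesting.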
We are raising several issues with the recommended closure of the Red River depot and the transfer of its functions to other locations that may warrant further review by the Commission. The issues relate to the transfer of the Red River combat vehicle workload to the Anniston Army Depot, Alabama; the transfer of certain munitions to the McAlester Army Ammunition Plant, Oklahoma; and the replication of Red River’s capability to remove and replace rubber pads for vehicle track and road wheels. As discussed in appendix VIII, the Industrial Joint Cross-Service Group, when developing its maintenance proposals, completed its depot workloading analysis on the basis of one and a half shifts per workday (a 60-hour workweek) rather than the one shift per day (a 40-hour workweek) under the current system, thus increasing available capacity and allowing it to consider depot closures. Industrial group officials told us that use of more than one shift, a common better business practice in private industry, would enhance transformational opportunities in that it would provide for more efficient use of facilities and equipment. Industrial group officials stated that the expanded shift concept, although transformational, was only a “sizing or planning tool” used to examine ways to increase depot capacity and that it would be left up to each depot to decide whether or not to employ the concept. In other words, it was a way to see whether a depot could accommodate the incoming transfer of additional workload. We were also told that no policy changes were envisioned to actually implement the expanded shift concept. Available information indicates that the closure recommendation may not be implemented on the basis of a one and a half shift operation at the Anniston Army Depot, which is to receive the combat vehicle workload from Red River. 
In our visit to Anniston Army Depot, officials told us that, with additional construction to increase capacity as provided for in the supporting documentation for the recommendation, they would be able to accommodate this additional workload without much difficulty and without working under the expanded shift concept. Industrial group officials acknowledged that, while some one and a half shift operations may be implemented at other activities, only a one shift operation was envisioned at Anniston, given the uncertainty associated with future requirements and the need to minimize risk by providing for additional capacity should a contingency arise. As such, it appears that essentially no substantive transformational change would occur with the closure of the Red River Army Depot. The BRAC recommendation to close the Red River depot also dictates the transfer of its munitions storage mission to another Army facility, the McAlester Army Ammunition Plant, Oklahoma. However, officials at Red River told us they were concerned about whether storage capacity at McAlester was sufficient to handle all of Red River’s munitions. Specifically, Red River officials told us during a recent visit that available excess storage capacity at McAlester has decreased since BRAC data were gathered, raising concerns about whether all of Red River’s munitions can be stored there. Further, Red River officials asserted that McAlester would not have sufficient storage capacity for special types of munitions without constructing new storage facilities. According to Red River officials, certain munitions (categories I and II) require different types of storage, and McAlester currently does not have enough storage capacity for all of Red River’s category I munitions. However, our analysis showed that the supporting documentation for the closure recommendation does not include any provision for military construction funds. 
Industrial group officials told us, however, that they expect the McAlester plant to demilitarize much of its ammunition and thus free up space for the munitions stored at Red River. However, given that some demilitarization funds have been diverted to other purposes in recent years, questions arise as to the extent of the demilitarization that will actually occur. Nonetheless, in their opinion, this potential issue is not a concern. Time did not permit us to fully resolve the conflicting information regarding the extent to which the munitions may be transferred and McAlester’s ability to accommodate the storage of any transferred munitions. Red River officials also raised concerns about the complexities of replicating at Anniston Army Depot, Alabama, Red River’s rubber production capability, which consists of removing and replacing rubber pads for vehicle track and road wheels; they noted that Red River is currently the only source of road wheels for the Abrams M1 tank. Specifically, Red River officials told us that this capability is not easy to reproduce: the required certification associated with the rubber production capability must be obtained, and the processes must be qualified through rigorous testing. These concerns about replicating the rubber production capability were echoed by officials at Anniston Army Depot, the installation expected to absorb most of Red River’s combat vehicle workload. Officials at Anniston told us they expect a long certification process before they can perform the required rubber repair work and that this represents the most serious challenge in the transfer of Red River’s workload. As to the Abrams M1 tank’s road wheels, Red River officials told us that if the capability to produce road wheels is interrupted, the ability to sustain the warfighter is diminished and overall readiness could be degraded. 
To mitigate this risk, officials at Red River told us that it is imperative that the Army construct a new rubber production facility at Anniston, establish its processes, and qualify its product before ceasing rubber production at Red River. Industrial group officials told us that, should a problem arise in this area, commercial sources are available from which to purchase, rather than repair, these parts. We did not independently verify this assertion. The Commission may want to review the extent to which these concerns associated with Red River are valid and whether they were adequately considered by DOD. The Navy followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing its functions and facilities. The Navy’s process produced 21 base closure and realignment recommendations, which cover 63 active and reserve installations. The Navy projects that its recommendations would realize about $7.7 billion in net present value savings over a 20-year period. Payback periods—the time required for savings to offset closure costs—range from immediate to 15 years and average 3.5 years. At the same time, there are limitations associated with the projected savings, given the lack of planned reductions in military personnel end strength associated with those savings. Some of the Navy’s recommendations may warrant additional attention from the BRAC Commission based on projected force structure changes, decisions to realign rather than close some bases, and extended payback periods. The Naval Audit Service, which performed audits of the data, concluded that the data were sufficiently reliable for use during the BRAC process. The Navy established an organization to conduct the closure and realignment analysis similar to the one it used in the 1995 round. 
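The payback-period concept used here, the time for recurring savings to offset one-time closure costs, can be sketched simply; the figures below are hypothetical illustrations, not drawn from the Navy’s estimates.

```python
# Minimal sketch of the payback-period idea used in BRAC analyses: the
# number of years needed for recurring savings to offset one-time
# implementation costs. Figures below are hypothetical illustrations.

def payback_years(one_time_cost, annual_recurring_savings):
    """Years until cumulative savings offset the up-front cost."""
    if annual_recurring_savings <= 0:
        return float("inf")  # the action never pays back
    years = 0
    cumulative = 0.0
    while cumulative < one_time_cost:
        years += 1
        cumulative += annual_recurring_savings
    return years

# A closure costing $120M up front that saves $40M per year pays back in 3 years.
print(payback_years(120, 40))   # -> 3
# An action with no recurring savings never pays back.
print(payback_years(50, 0))     # -> inf
```

An “immediate” payback in this framing simply means recurring savings exceed one-time costs from the first year of implementation.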
The Secretary of the Navy established a group of senior military officers and civilian executives, the Infrastructure Evaluation Group (IEG), chaired by the Assistant Secretary of the Navy (Installations and Environment), to conduct the process, and a related team, the Infrastructure Analysis Team, to support the IEG. The Secretary subsequently established a second senior-level group, the Department of the Navy Analysis Group, chaired by the Special Assistant to the Secretary of the Navy for BRAC, that was subordinate to the IEG, and he directed it to conduct the Navy’s analysis for Navy-unique functions. Another associated group, the Functional Advisory Board, consisted of the Navy and Marine Corps principal members of the seven joint cross-service groups and was responsible for ensuring that the Navy leadership was informed of matters relevant to those groups and for articulating the Navy’s position on common business-oriented support functions for Navy leaders. The Navy established numerous goals for BRAC, organized around such considerations as (1) facilitating recruitment and training, (2) providing quality of life, (3) matching force structure to national defense strategy, (4) adequately equipping the force, (5) ensuring access to an optimally integrated logistical and industrial infrastructure, and (6) maintaining secure and optimally located installations for mission accomplishment (including homeland defense). With these and other considerations in mind, the Navy established numerous objectives corresponding to DOD’s BRAC principles. Examples include the following:
- Optimize access to critical maritime training facilities.
- Accommodate the 20-year force structure plan.
- Facilitate active/reserve integration and synchronization.
- Leverage opportunities for joint basing and training.
- Enable further installation management regional alignment.
- Optimize regional management structure for recruiting districts and reserve readiness command.
- Minimize use of long-term leased administrative space.
- Provide flexible research, development, test, and evaluation infrastructure to adapt to Navy transformational mission changes and joint operations.
- Consolidate aircraft basing to minimize sites while maintaining the ability to meet operational requirements.
- Rely on private-sector support services where cost-effective and feasible.
- Retain sufficient organic capability to effectively support maritime-unique operation concepts.
- Align Navy infrastructure to efficiently and effectively support the Fleet Response Plan and Sea-basing concepts.
- Realign assets to maximize use of capacity in fleet concentration areas while maintaining fleet dispersal and viable antiterrorism/force protection capability.
In executing its BRAC process, the Navy sought to eliminate excess capacity and reconfigure its current infrastructure so that operational capacity maximized warfighting capability and efficiency. The IEG approved four major areas for analysis: operations, education and training, headquarters and support activities, and other activities. These major areas were then further divided into functions to ensure that installations performing comparable functions were compared with one another and to allow identification of total capacity and military value for an entire category of installations. The Navy’s BRAC process included a review of 889 reporting activities—765 Navy and 124 Marine Corps—of which 673 were active component and 216 reserve component activities (reserve centers, reserve forces headquarters, reserve recruiting areas, and reserve personnel centers). As with previous BRAC rounds, capacity and military value analysis provided the starting point for the Navy’s BRAC process. The Naval Audit Service served an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations. 
For its capacity analysis, the Navy universe was defined at the activity or function level, and a capacity data call was distributed to the 889 reporting activities. Capacity analysis for each activity consisted of comparing the current Department of the Navy base structure to future force structure requirements to determine whether excess base structure capacity existed within the Department of the Navy. Current force requirements were based on the existing force structure, and future force requirements were derived from the 20-year force structure plan. All Navy and Marine Corps bases were placed into one of four categories for capacity analysis: operations, headquarters and support activities, education and training, and other activities. Each category used a different metric to analyze capacity. Almost all of the Navy’s bases were contained in the operations category. In evaluating air operations activities, the Navy used hangar modules, while in evaluating surface/subsurface operations activities it used a cruiser-equivalent concept, the same measures that were used in BRAC 1995. In evaluating ground operations activities, the Navy used a battalion-equivalent concept that considered the amount of administrative space, covered storage space, and maintenance space required to support a generic Marine Corps battalion. In evaluating munitions storage and distribution, the Navy used throughput (loading and unloading) and short-term storage functions to conduct its analysis. The Navy identified excess capacity in all four categories, as shown in table 11. In completing its capacity analysis, the Navy assumed that it would be necessary to home base all aircraft and ships at the same time. The Navy did not include additional infrastructure requirements to accommodate surge capability. 
According to Navy BRAC officials, the force structure—the number of ships and aircraft—is finite, and additional ships or aircraft could not be quickly produced in the event of a contingency. The officials stated that their analysis also ensured that sufficient flexibility was retained to handle surge represented by operational tempo changes or unanticipated operational requirements. For example, for surface/subsurface operations, the Navy concluded that there was sufficient berthing space available at nonoperational bases (shipyards and weapon stations) to meet surge or other unanticipated operational requirements. Navy officials projected that their closure recommendations, if approved, would reduce excess capacity in aviation operations from 19 percent to 16 percent, in surface/subsurface operations from 25 percent to 17 percent, and in munitions storage and distribution operations from 24 percent to 16 percent, but would not reduce excess ground operations capacity. The Navy did not recommend closing any ground operations facilities, citing cost considerations and noting that planned force structure changes would further increase its requirements. In completing its military value analysis, the Navy targeted military value questions to specific activities in order to rank installations in the four operational subgroups from highest to lowest in military value. Each of the four operational subgroups had overarching concepts from which military value scoring plans were developed to measure and rank each installation. Military values were assigned to 35 Navy and Marine Corps installations under air operations, 29 surface/subsurface installations, and 11 ground operations installations. Table 12 shows how the Navy weighted military value criteria in its analyses of operational functions. 
Key factors considered in evaluating the military value of aviation operations activities included the size and versatility of the facilities, proximity to training opportunities, and the strategic location of airfields. In considering surface/subsurface activities, key factors were the size and versatility of ship berthing, maintenance and support capabilities, and proximity to naval shipyards. Additional value was given for strategic nuclear submarine homeport capability and Nimitz-class nuclear-powered carrier berthing capability. Also considered were proximity to training facilities, ranges, and operations areas, as well as strategic location. Likewise, in considering ground operations activities, key factors were facilities and services, operational staff buildings, ordnance storage depots, and organic maintenance shops. Additional value was given for the capability to receive and stage forces for onward movement and integration. Also considered were proximity to ranges, maneuver areas, and training areas, as well as proximity to aerial ports and seaports of debarkation. Key factors in the munitions storage and distribution operations activities were storage capability, throughput capability, strategic factors, environment and encroachment, and personnel support. Figure 10 illustrates how the Navy linked its analysis to the military value criteria for the naval aviation function. The same process was used to analyze military value in the other operational and functional areas. The Naval Audit Service played an important role in ensuring that the data used in the Navy’s analyses were certified. Through extensive audits of the capacity, military value, and scenario data collected from field activities, the audit service notified the Navy of any data discrepancies for follow-on corrective action. While the process of validating data was lengthy and challenging, the Naval Audit Service determined that the Navy data were sufficiently reliable for use in the BRAC process. 
The Navy used results from the capacity and military value analyses as the inputs to its optimization model to help identify initial scenarios for realignment and closure. In some circumstances, such as closure of naval reserve centers, military judgment and transformation provided the basis for scenarios and later decisions. For example, Navy officials said it was necessary to retain naval reserve centers for naval air reservists near major airline hubs and activities in order to retain the demographic profile necessary to recruit and retain personnel for these units. The Navy identified 187 scenarios for consideration; 82 involved Navy and Marine Corps reserve centers. The scenarios were then further assessed through more detailed scenario analyses, cost and savings considerations, risk assessments, and the Navy’s IEG deliberations, which resulted in 53 candidate recommendations being forwarded to DOD’s IEC. After some consolidation and bundling, DOD approved 21 Department of the Navy recommendations and forwarded them to the BRAC Commission. The Navy eliminated scenarios for strategic reasons, to maintain operational flexibility, and for cost considerations. For example, various scenarios proposing to close Submarine Base San Diego, California, were dropped because a closure would have eliminated the sole capability for berthing attack submarines on the West Coast. Likewise, scenarios proposing to close Naval Station Everett, Washington, were dropped because of the strategic importance of this seaport. Various proposals to close active naval air stations were dropped because of operational concerns. For example, the Navy analyzed the potential to close Marine Corps Air Station Beaufort, South Carolina, and relocate its squadrons to Marine Corps Air Station Cherry Point, North Carolina. 
However, the Navy leadership concluded that Marine Corps Air Station Beaufort should be retained for future tactical aviation basing flexibility, especially in light of concerns about the continued viability of basing aviation units at Naval Air Station Oceana, Virginia. Due to increasing environmental and encroachment issues surrounding Naval Air Station Oceana, the Navy also analyzed various scenarios to close it. However, the analyses indicated a long payback period for achieving return on investment, high one-time costs, and operational issues at receiving sites. Therefore, the Navy determined that the closure of Naval Air Station Oceana was not feasible. Another complicating factor for basing of East Coast tactical aircraft is the Navy’s attempt to purchase approximately 33,000 acres in eastern North Carolina to build a new outlying landing field to provide simulated aircraft carrier landings for aircraft stationed at Naval Air Station Oceana and Marine Corps Air Station Cherry Point. The purchase is currently being challenged in federal court over environmental concerns. The Navy also did not pursue some scenarios because of cost considerations and extended payback periods. For example, Navy data showed a one-time cost of $838 million to close Construction Battalion Center Gulfport, Mississippi, and relocate it to Camp Lejeune, North Carolina, and a one-time cost of $643 million to close Marine Corps Recruit Depot San Diego, California, and relocate all recruit training to Parris Island, South Carolina. The Navy leadership determined that these costs did not justify closing either the Construction Battalion Center Gulfport or the Marine Corps Recruit Depot San Diego. The Navy also considered alternatives to homeport an additional carrier strike group forward in the Pacific theater through the BRAC process to accommodate Integrated Global Presence and Basing Strategy decisions. 
The Navy analyzed moving a carrier to Pearl Harbor, Hawaii, and Guam, and found that, other than cost, there was no clear BRAC preference for either the losing or the gaining base. The Navy leadership postponed any decision until the ongoing Quadrennial Defense Review is completed. The Navy worked closely with the joint cross-service groups as they developed recommendations that affected Navy installations. In some cases, a joint cross-service group recommendation or series of recommendations relocated a majority of the functions, workload, equipment, or personnel from a Department of the Navy installation, thereby enabling closure of the entire installation. Where the DAG determined that the aggregate of joint cross-service group actions was of such magnitude that it affected the “critical mass” of the installation, e.g., impact on the major mission, a substantial number of personnel, and/or a substantial amount of acreage, a Navy closure scenario was developed. The closure of Portsmouth Naval Shipyard, Maine, is an example of such a closure. The ISG and IEC approved an industrial joint cross-service group recommendation to relocate the ship overhaul and repair function at Portsmouth Naval Shipyard to Norfolk Naval Shipyard, Puget Sound Naval Shipyard, and Pearl Harbor Naval Shipyard, and to relocate the Submarine Maintenance Engineering, Planning and Procurement Activity at Portsmouth Naval Shipyard to the Norfolk Naval Shipyard. This recommendation eliminated Portsmouth Naval Shipyard’s primary mission and moved or eliminated approximately 90 percent of its workforce. After conducting criteria 5-8 analyses, the Navy recommended closing Portsmouth Naval Shipyard in its entirety. The Navy projects that its 21 recommendations will produce about $754 million in net annual recurring savings and, after savings have offset implementation costs, a 20-year net present value savings of $7.7 billion.
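The pairing of a net annual recurring savings figure with a 20-year net present value figure reflects a standard discounting calculation. A minimal sketch follows; the 2.8 percent discount rate, the up-front cost, and the flat savings stream are illustrative assumptions, not the Navy's actual COBRA inputs.

```python
# Illustrative sketch of how annual recurring savings roll up into a
# 20-year net present value (NPV). The discount rate and the cost/savings
# profile are hypothetical, not taken from the Navy's COBRA analysis.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

one_time_cost = -500.0   # $ millions, incurred up front (hypothetical)
annual_savings = 754.0   # $ millions per year, the projected recurring figure
flows = [one_time_cost] + [annual_savings] * 20

print(round(npv(flows, 0.028), 1))  # 20-year NPV in $ millions
```

Under these assumptions the recurring savings dominate the one-time cost by an order of magnitude, which is why recommendations with small recurring savings (discussed later) show such long payback periods.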
Table 13 provides a summary of the financial aspects of the Navy’s recommendations. The Navy’s recommendations include 16 closures and 5 realignment actions, affecting 63 installations. Much of the projected annual recurring savings is based on military and civilian personnel reductions. The Navy has two recommendations with payback periods greater than 10 years—the realignment of Naval Station Newport, Rhode Island, and the closure of the Naval Support Activity Corona, California. Time did not permit us to assess the operational impact of each recommendation, particularly individual recommendations that include multiple closure and realignment actions at multiple locations outside of a single geographic area. Nonetheless, we offer a number of broad-based observations about the proposed recommendations. These recommendations may warrant additional attention from the BRAC Commission based on issues associated with projected savings from military personnel reductions, force structure changes, decisions to realign rather than close some bases, extended payback periods, and potential impact on the U.S. Coast Guard. There remains uncertainty as to what the Navy’s future force structure will actually look like, particularly with battle force ships. While the Navy’s force structure plan that accompanies its BRAC report gives a range of 341 to 370 ships in the fleet in 2024, the Navy’s 30-year shipbuilding plan identifies a possible lower limit of 314 ships in 2024 (including all types of surface ships and submarines). Additionally, the shipbuilding plan provides a fleet profile in the decade afterward (to the year 2035) with as few as 260 to 325 ships. This includes a decrease in aircraft carriers from the current 12 to 10 in 2035, as projected in the Navy’s shipbuilding plan.
Our analysis showed that about $386 million, or about 51 percent, of the projected $753.5 million in net annual recurring savings is based on savings from eliminating almost 4,000 active duty military personnel positions. A Navy official indicated that these reductions will help the Navy achieve the projected 21,000 active military personnel reductions already programmed between fiscal years 2006 and 2011. However, the Navy has already reduced the military personnel account to reflect the savings associated with the projected 21,000 end-strength reduction. While the almost 4,000 projected reductions associated with BRAC actions might help the Navy achieve its overall programmed end-strength reductions, they will not generate any additional dollar savings that could be reallocated for other higher priority needs. While the recommendations to close Submarine Base New London, Connecticut, and Portsmouth Naval Shipyard, Maine, project significant savings, both are based on projected decreases in the number of submarines in the future force structure. However, as mentioned earlier, there is uncertainty over the number of submarines and surface ships required for the future force. The proposed closure of Submarine Base New London is based on reducing existing excess capacity in the surface/subsurface category and planned reductions in the submarine force. Both the 25 percent excess capacity identified in the surface/subsurface infrastructure and the projected 21 percent reduction in the submarine force led the Navy to analyze various proposals to close submarine bases. As previously noted, the Navy’s BRAC scenario analysis focused on East Coast submarine bases because attack submarines are single-sited on the West Coast.
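The "about 51 percent" share cited above is a straightforward division of the two dollar figures in the text; the sketch below simply reproduces that arithmetic.

```python
# Reproduce the share of net annual recurring savings attributed to
# eliminating military personnel positions (dollar figures in millions,
# as cited in the text).
personnel_savings = 386.0
total_savings = 753.5

share = personnel_savings / total_savings
print(f"{share:.1%}")  # prints 51.2%
```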
The Navy considered three alternatives: (1) moving all submarines at Naval Station Norfolk, Virginia, to New London, Connecticut; (2) moving all submarines at Submarine Base New London and the Submarine School New London to Naval Station Norfolk; and (3) moving submarines at Submarine Base New London to both Naval Station Norfolk and Submarine Base Kings Bay, Georgia, and moving the submarine school to Kings Bay or Naval Station Newport, Rhode Island. The Navy analysis showed that only the option to relocate submarines from New London to Norfolk and Kings Bay achieved a reduction in capacity and savings resulting from a base closure. Navy officials noted that Submarine Base New London had a lower military value than both Norfolk and Kings Bay. As we also discuss in appendix XIV, this recommendation has the largest economic impact on any community in terms of the number of job losses (8,457 direct jobs and 7,351 indirect jobs). These direct and indirect job losses would result in a negative change of 9.4 percent in employment for the economic area around Submarine Base New London. The majority of the projected savings would result from the elimination of about 80 percent of the civilian personnel positions at New London. Officials at New London we met with concurred with the projected number of civilian positions that could be eliminated, based on coordination with both receiving locations—Kings Bay, Georgia, and Norfolk, Virginia—and on the number of personnel that would be needed to support the missions being relocated. However, a separate issue of concern relates to the proposed move of the Navy’s submarine school from New London to Kings Bay.
In our discussions with officials at New London, we found that while the Navy’s BRAC cost and savings analysis includes one-time costs to move the specialized equipment associated with the submarine school, the Navy analysis does not appear to have included an assessment of the time it would take to pack, move, and unpack the equipment, and the potential impact on the training pipeline and the certification of crews for submarines. In subsequent discussions with Navy headquarters officials, we were told that the submarine school would be the last activity to move from New London to ensure that facilities at Kings Bay are ready to start training. Furthermore, they noted that the implementation plan will ensure that the Navy will be able to perform crew certification and maintain the training pipeline. The BRAC Commission may want to assure itself that the Navy has developed a transition plan to satisfy the training and certification requirements until the receiving sites are able to perform this training, without unduly interrupting the training pipeline. The proposed closure of the Portsmouth Naval Shipyard assumes that the remaining three shipyards could perform all of the projected depot-level maintenance workload based on planned reductions in the number of attack submarines and the Navy’s proposal to decommission an aircraft carrier. The Navy, with agreement from the Industrial Joint Cross-Service Group, which initially had assessed depot functions, selected the Portsmouth Naval Shipyard for closure, despite Pearl Harbor Shipyard’s having a slightly lower military value score, because it determined that Portsmouth was the only closure that would both eliminate excess capacity and satisfy the Combatant Commander’s and Navy’s strategic objective to place ship maintenance capabilities close to the fleet.
The Navy BRAC and Industrial Joint Cross-Service Groups analyzed scenarios for closing each of the four shipyards and determined that only the potential closure of Portsmouth or Pearl Harbor was feasible, due to cost and capacity considerations. Initially, based on capacity data and the 20-year force structure plan submitted in March 2004, the Industrial Joint Cross-Service Group determined that there was sufficient excess capacity in the aggregate across the four shipyards to close either Pearl Harbor or Portsmouth. However, the group determined that there was insufficient excess capacity in certain commodities in the remaining three shipyards to accept all the workload from the closing shipyard. As such, the group initially determined that no shipyard should be closed. However, based on changes in the 20-year force structure plan that DOD submitted to Congress in March 2005—reductions in the number of submarines and the decommissioning of an aircraft carrier—the industrial group’s analysis indicated that workload for all commodities at Portsmouth or Pearl Harbor could be accommodated by the remaining three shipyards. A Naval Sea Systems Command analysis of dry dock availability indicates that the three remaining Navy shipyards could handle the projected ship repairs and overhauls in the future. However, the analysis indicates that within the next three years there would not be much, if any, room for unanticipated ship repairs. According to Navy officials, any unanticipated requirements would be addressed by a combination of delaying and re-prioritizing scheduled overhaul work and authorizing additional overtime, which they noted is no different from how they manage these requirements in the current operating environment.
In selecting Portsmouth over Pearl Harbor for closure, the Navy noted that Pearl Harbor is in a fleet concentration area in the Pacific theater and is the homeport for many ships, while Portsmouth is not in a fleet concentration area or a homeport for any ships. In addition, closing Pearl Harbor would require the ships that are homeported there to transit back to the East Coast, in some cases, for maintenance, which the Navy would essentially view as a deployment and, for quality of life reasons, would want to avoid if possible. Another strategic objective was to maintain dry docks for aircraft carriers on both coasts and in the central Pacific. Pearl Harbor has aircraft carrier dry-docking capability, but Portsmouth does not. In our meeting with employees at the Portsmouth Naval Shipyard in June 2005, they raised questions about several aspects of the cost and savings analysis developed to support the proposed action. First, they objected to the industrial group and the Navy disallowing about $281 million in costs ($205 million one-time and $76 million recurring) that they believed would be incurred if the shipyard were to close. About $52 million of the recurring costs are associated with sustainment of facilities and the power plant from fiscal year 2008, when the base is projected to close, until 2011. While some of these costs are likely valid, overall they appear high in relation to the Navy’s projected savings of about $120 million over the same period from reduced base operating support and sustainment of facilities. The majority of the one-time costs are associated with closure of the buildings, historical preservation of buildings, and write-off of undepreciated assets of the working capital fund. While it is questionable whether all of these costs should be included, our analysis shows that if they are all included, the projected 20-year savings would decrease by $192 million, or 15 percent.
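As a consistency check on the figures above, the $192 million decrease and the 15 percent it represents together imply a projected 20-year savings baseline of roughly $1.3 billion:

```python
# Back out the implied baseline 20-year savings from the reported
# $192 million decrease, said to equal 15 percent of that baseline
# (dollar figures in millions).
decrease = 192.0
fraction = 0.15

implied_baseline = decrease / fraction
print(round(implied_baseline))  # prints 1280
```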
Portsmouth employees were also concerned that the cost and savings analysis did not adequately capture the widely recognized efficiencies of their shipyard, the loss of which could translate into additional costs that the Navy would incur by shifting its workload to the remaining three Navy shipyards. The employees estimated that they perform submarine overhaul and depot maintenance work at about $54 million per year less than the average of the other three shipyards, an efficiency that was not included in the Navy’s analysis. Department of the Navy officials recognized that the Portsmouth Naval Shipyard is presently more efficient than the Puget Sound and Pearl Harbor shipyards but noted that it is very difficult to quantify the impact of this efficiency. Navy officials noted that the scope of work performed is not always the same, depending on the condition of each submarine, and that wages, especially in Pearl Harbor, are higher than in Portsmouth. Navy officials told us they were reviewing the efficiency analysis developed by the Portsmouth Naval Shipyard; however, their analysis was not completed in time to be included in this report. The Commission may wish to consider the views of the shipyard employees and the results of the Navy’s review in its analysis of this recommendation. The Navy initially recommended the closure of Naval Air Station Brunswick, Maine, and Marine Corps Logistics Base Barstow, California. However, based on direction from the IEC, these closure recommendations were changed to realignments. As a result, the 20-year savings decreased by almost $2 billion, as shown in table 14. According to Navy BRAC officials, the senior Navy leadership was reluctant to give up the Navy’s remaining air station in the Northeast but found the potential savings significant enough to recommend closure of Brunswick. However, applying its judgment, the IEC changed the closure to a realignment to retain access to the strategic airfield in the Northeast.
As a result, the base will become a naval air facility with an operational runway, but all aircraft and associated personnel, equipment, and support will be relocated to Naval Air Station Jacksonville, Florida, and the Aviation Intermediate Maintenance function will be consolidated with Fleet Readiness Center Southeast Jacksonville, Florida. The Navy is maintaining its cold weather–oriented Survival, Evasion, Resistance, and Escape School, a Navy Reserve Center, and other small units at Brunswick. Navy officials also stated that Brunswick would provide a base from which to carry out potential homeland defense missions should those missions not be able to be carried out from other military or civilian airfields in the Northeast. The Industrial Joint Cross-Service Group had proposed to close the depot maintenance functions at Barstow because of its low military value and to increase opportunities for joint maintenance at Army depots doing similar work. However, the Marine Corps objected to the closure because it would eliminate the Marine Corps’ only West Coast ground vehicle depot maintenance presence and would increase repair cycle times for the Marine Corps’ West Coast equipment by increasing rail transit and customer turnaround time by 10 to 30 days. In response to the Marine Corps’ concerns, the IEC directed the Industrial Joint Cross-Service Group to develop several alternative recommendations that would have closed Barstow but still realigned its workload to other West Coast activities. The Industrial Joint Cross-Service Group estimated that all of these options would result in higher net annual recurring savings and 20-year net present value savings than would the realignment option. The Commission may want to assess DOD’s rationale for changing the recommendation from a closure to a realignment in light of the projected reductions in savings.
The Navy has two recommendations for which the payback period is greater than 10 years, much longer than typically associated with recommendations in the 1995 BRAC round, and for which the one-time costs are significantly greater than the projected 20-year savings by which BRAC rounds are typically measured. The Navy’s proposal to realign Naval Station Newport by relocating the Navy Warfare Development Command to Naval Station Norfolk has a 13-year payback period and a projected one-time cost of about $12 million, primarily to rehabilitate existing structures and move 111 personnel. According to Navy officials, this recommendation places the Navy Warfare Development Command closer to Fleet Forces Command and the Second Fleet Battle Lab it supports. Likewise, the Navy recommendation to close Naval Support Activity Corona has a payback period of 15 years, a one-time cost of about $80 million, and 20-year savings of about $400,000. Navy data show that the one-time cost is primarily to rehabilitate existing facilities and relocate personnel from Corona to Naval Air Station Point Mugu, California. Navy officials stated the closure had merit because the Corona facility was a single-function facility whose mission could be performed at other multifunction bases. Several Navy recommendations to close bases could affect the U.S. Coast Guard. However, the Navy’s cost and savings analysis did not consider any costs that could be incurred by the Coast Guard if the bases are closed. Navy officials recognized that the Coast Guard would be affected by several of its recommendations and considered the impact in its deliberations. However, they determined that it was unreasonable to include any cost estimates for the Coast Guard because the Navy could not assume the final disposition of the facility and how much, if any, of the facility the Coast Guard would opt to retain.
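A payback period, as used here, is essentially the number of years of recurring savings needed to recoup one-time implementation costs. The sketch below is a simplification (the actual COBRA model phases costs across years and discounts cash flows), and the $5.5 million annual savings figure is hypothetical; the $80 million cost matches the Corona figure cited above.

```python
import math

def payback_years(one_time_cost, annual_savings):
    """Whole years of recurring savings needed to offset a one-time cost."""
    return math.ceil(one_time_cost / annual_savings)

# An $80 million one-time cost (the Corona figure cited in the text),
# recouped at a hypothetical $5.5 million per year, pays back in 15 years.
print(payback_years(80.0, 5.5))  # prints 15
```

The same arithmetic explains why recommendations with large one-time costs and modest recurring savings show the extended payback periods flagged in this section.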
Coast Guard officials stated that the Navy briefed them on its potential recommendations several months prior to the public announcement of the recommendations. The Coast Guard is in the process of developing potential basing alternatives, including cost impacts, for each affected location. However, the Coast Guard had not completed these estimates in time for us to include them in our report. The Air Force followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing its functions and facilities. The Air Force’s process produced 42 recommendations. Most of the recommendations are devoted to reserve component bases, including several realignment actions reallocating aviation assets to multiple locations. In comparison with the other services, its recommendations contain the smallest number of closures (three) of active component bases. It had two major realignments, however, that left the bases in a reduced active duty status, and another where the base was transferred to the Army, with the Air Force retaining a limited presence as a tenant. The Air Force recommendations project the greatest savings of any of the services—$14.6 billion in 20-year net present value savings. Payback periods—the time required for savings to offset closure and realignment costs—for active component bases range from immediate to 14 years and average 3 years; for reserve component bases they range from immediate to 18 years and average 6 years. However, our analysis indicates that these projected savings in each of these categories could have some limitations, primarily due to the lack of personnel end-strength reductions associated with claimed savings.
In addition, some Air Force recommendations may warrant additional attention by the BRAC Commission because of uncertainty regarding future mission requirements for adversely affected reserve component personnel, and because some recommendations with lengthy payback periods were merged with other recommendations having shorter payback periods, making the former appear more acceptable. The Air Force Audit Agency, which performed audits of the data, concluded that the data were sufficiently reliable for use during the BRAC process. The Secretary of the Air Force established a group of senior Air Force military and civilian personnel to form an executive deliberative body responsible for conducting the Air Force base closure and realignment analyses. The Base Closure Executive Group was led by a Deputy Assistant Secretary and a General Officer from Plans and Programs, who served as co-chairs. This group’s working-level staff made up the Base Closure Working Group, which provided direct support for data collection, validation, and analysis in the development of base closure and realignment recommendations. The Air Force’s 2005 BRAC goals were to transform by maximizing the warfighting capability of each squadron and realigning infrastructure with future defense strategy, to maximize operational capability by eliminating excess physical capacity, and to capitalize on opportunities for joint activity. To guide the BRAC process, the Air Force developed the following principles, to be applied to both active and reserve components: Maintain squadrons within operationally efficient proximity to DOD-controlled airspace, ranges, military operations areas, and low-level routes. Optimize the size of Air Force squadrons in terms of aircraft models, aircraft assigned, and crew ratios applied. Retain enough domestic capacity to base the Air Force entirely within the United States and its territories.
Retain aerial refueling bases in optimal proximity to their missions. Better meet the needs of the Air Force by maintaining or placing Air Reserve Component (Air National Guard or Air Force Reserve Command) units in locations that best meet the demographic and mission requirements unique to the Air Reserve Component. Ensure joint basing realignment actions (in comparison with the status quo) either increased the military value of a function or decreased the cost for the same military value of that function. Ensure that long-range strike bases provide flexible strategic response and strategic force protection. Support the Air Expeditionary Forces framework by keeping two geographically separate munitions sites. Retain enough surge capacity to support deployments, evacuations, and base repairs. Consolidate or co-locate legacy fleets (such as A-10, B-1, B-52, F-15, and F-16 aircraft). Ensure global mobility by retaining two air mobility bases and one additional wide-body-capable base on each coast. Several of the above principles were included in an Expeditionary Air Force Principles White Paper, which outlined principles to shape future force development and basing. This document discussed the increased effectiveness and efficiency of consolidating smaller squadrons into larger units. The significant reduction in aircraft under the future force structure plan through 2025 will require the Air Force to reduce its infrastructure, including that of the Air Force Reserve and the Air National Guard, by selecting the best combination of bases while accommodating the use of reserve components for emerging missions, such as homeland defense and unmanned aerial systems. The Air Force BRAC process included a review of 154 installations—70 active and 84 reserve. As with previous BRAC rounds, capacity and military value analyses provided the starting point for analysis.
However, in this BRAC round the Air Force concentrated its analysis on operational aircraft and space missions, since joint cross-service groups developed capacity and military value analyses and recommendations for various commonly held business-oriented categories, such as education and training, headquarters, and technical functions. The Air Force Audit Agency performed an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations. The Air Force collected information on key capacity areas, such as physical capacity (buildings and utilities), environmental issues (air emissions and water resources), encroachment (constraints and noise safety), airfields, airspace and ranges (operational capacity of runways, ramp space, and fuel storage), communications (telecommunications), and personnel. The capacity data call was designed to provide information to assess bases for current and future missions in the following mission areas: (1) airlift; (2) space operations; (3) bombers; (4) tankers; (5) command and control and intelligence, surveillance, and reconnaissance; (6) unmanned aerial vehicles; (7) fighter aircraft; and (8) Special Operations Forces and Combat Search and Rescue. The Air Force also considered surge requirements in its capacity analysis. According to Air Force officials, surge was defined as the ability to domestically “bed down” all aircraft, including those currently stationed overseas, as well as the ability to respond to natural disasters, emergencies, and runway repairs. Following the collection of the capacity data call, the Air Force requested that its eight major commands and the Air National Guard estimate each installation’s capacity to acquire additional squadrons, taking into consideration existing conditions, facilities, additional construction requirements, and operational and environmental infrastructure.
The capacity analysis incorporated information from the 20-year force structure plan to serve as a baseline and to further define requirements in the future. Although this analysis indicated the ability of bases to bed down additional aircraft, according to Air Force officials, it did not provide a specific excess capacity percentage by installation or major command. Accordingly, an overall capacity analysis report was not made available to us, comparable to that provided by the other military departments. However, Air Force officials said they considered capacity information in their assessment of installations. Air Force officials did provide limited capacity information in their final BRAC report. Table 15 provides excess capacity percentages that were calculated for two areas. According to Air Force officials, their recommendations, if implemented, are projected to reduce excess capacity by 37 percent in flight line and ramp space and by 75 percent in buildings and facilities. In completing its military value data calls, the Air Force evaluated each of its bases in each of the eight mission categories, regardless of the base’s current use. Military value data analysis was directly linked to the four DOD military value selection criteria required by the BRAC process and legislation. As shown in table 16, the Air Force developed a weighting system for the military value criteria, with the first two criteria having larger weights, or importance, than the remaining two criteria. The Air Force used various military value attributes (characteristics, factors, etc.), metrics (measures), and questions related to each of the four military value criteria. Key military value attributes included operating environment, geographic-location factors, key mission infrastructure, operating areas, mobility/surge, growth potential, and cost.
Other installation-specific attributes included such factors as electromagnetic spectrum and bandwidth, munitions storage and handling, runway dimensions, ramp area, space launch, proximity to (and quality of) airspace and ranges, and geographical factors. Figure 11 shows how the attributes, metrics, and military value data questions were linked to the military value criteria for the fighter aircraft mission category. The Air Force followed a similar process for all eight mission categories. Likewise, each base was evaluated against metrics associated with each of the eight mission categories, which resulted in multiple military values for each base. Air Force officials stated that the resulting military value scores enabled them to determine which bases were best to retain and which were less desirable. This enabled them to produce mission compatibility indexes for their bases related to each of the four military value criteria. However, the Air Force did not develop one composite score for each base across all eight mission areas, which might have allowed for a clearer distinction between lower and higher military value rankings. Instead of developing one composite score, the Air Force established an overall mission compatibility index score within each of the eight mission areas, which provided each installation with eight entirely different scores for the various mission areas. According to Air Force officials, this approach was used to apply military judgment in selecting the best combination of bases to retain. During both the capacity and the military value data collection and analysis processes, the Air Force Audit Agency provided the Air Force with real-time evaluations of BRAC 2005 policies, procedural controls, systems, and data to ensure accurate data and analyses support for BRAC recommendations. One of its primary efforts involved three audits to verify the Air Force data call responses submitted during the BRAC process.
Although the auditors found errors or inadequate source documentation, they reported that most discrepancies were subsequently corrected. In addition to these nationwide audits, the Air Force Audit Agency produced audit reports on other facets of the BRAC process, including the Air Force Internal Control Plan, COBRA data, and various modeling and analysis tools that were used in development of recommendations. The final Air Force Audit Agency reports on BRAC data concluded that, overall, the Air Force data were reliable for the purpose of developing recommendations. The Air Force identified over 100 scenarios, which were later reduced to 42 recommendations. The Air Force scenario teams identified potential scenario groups of like weapons systems, and then the Base Closure Executive Group selected scenarios for analysis. While the Air Force relied on certified data to identify proposed closure and realignment recommendations, other factors were instrumental in guiding decisions for closures and realignments, including changes in unit sizing, a decreased force structure, the active and reserve mix, and future total force initiatives such as those discussed in the Expeditionary Air Force White Paper. Toward the end of the BRAC process, the Air Force eliminated and scaled back several recommendations because they did not actually result in net savings. In addition, the Air Force combined several interrelated recommendations (some that provide savings and some that do not) to present a consolidated recommendation with savings and a shorter payback period than would otherwise have appeared had the recommendations been presented individually. The military value data were analyzed by a computer-generated optimization model called the Air Force cueing tool. This model used the military value data and the 20-year force structure plan to create a starting point for Base Closure Executive Group deliberations by allocating aircraft to the fewest bases while conserving the greatest military value.
This model also included Air Force imperatives. For example, to ensure unimpeded access to polar and equatorial earth orbits for U.S. satellites, the Air Force decided that Vandenberg Air Force Base, California, and Patrick Air Force Base, Florida, must be retained. Likewise, the Air Force retained Andrews Air Force Base, Maryland, to provide support to the President of the United States. According to Air Force officials, the cueing tool results were the starting point for analysis in allocating its inventory of aircraft. The model had various limitations, such as its inability to factor in the active/reserve force mix for specific types of aircraft or the different types of aircraft at an installation. Furthermore, it assumed that all aircraft were bedded down at bases ranked highest in military value, which generally were active bases. To address these limitations, the Base Closure Executive Group relied on military judgment in some cases to overrule the results of the model to preserve the existing active/reserve force mix, a ratio expected to be maintained through 2011. In reviewing alternatives for BRAC recommendations, the Air Force went through various iterations of the BRAC recommendations (called second look, third look, and so forth) in order to provide force structure alignments that conformed to the Air Force principles and improved military capability and efficiency, consistent with sound military judgment. Air Force scenario teams analyzed the results of the analytical tools, including information to be considered with each recommendation—for example, force structure reductions from the future year force structure plan, new missions, military construction requirements, homeland defense missions, and other areas. Furthermore, the scenario teams were responsible for identifying any “showstoppers,” in terms of capacity or environmental characteristics, that would make a recommendation difficult to implement.
Scenario analyses consisted of running a potential recommendation through the COBRA model and developing the information for selection criteria 6 (economic impact), 7 (community infrastructure), and 8 (environmental impact) to help identify or evaluate possible closure and realignment actions. The majority of the candidate recommendations had various components derived from using the optimization model; however, a few of the recommendations did not. For example, a few of the candidate recommendations involved realigning aircraft from an active base to an Air National Guard station with a lower military value score in order to achieve the appropriate mix between active and reserve forces and to increase the standard squadron size. Further, in some recommendations Air National Guard aircraft were realigned to other Air National Guard stations with a lower military value to align common versions of weapon system types and for strategic interests. Four other recommendations were not derived from the optimization model because the model primarily focused on the bedding down of aircraft rather than specific functional areas, such as repair facilities. These recommendations involved logistics support centers, standard air munitions packages (munitions storage), and avionics intermediate repair and maintenance facilities. Air Force officials told us they had requested that the Industrial Joint Cross-Service Group consider these candidate recommendations in its process, but the group declined and deferred to the Air Force because it was considering scenarios at a joint operational level rather than at the installation level. As a result, Air Force officials told us that they either applied a Mission Compatibility Index approach to these scenarios in deliberative sessions to assess installations for future missions or recommended that certain functions follow the placement of aircraft in other Air Force recommendations.
The Air Force recommended closing 10 installations (3 active, 3 Air Reserve, and 4 Air National Guard bases) and realigning 62 other installations. In total, the Air Force projected its BRAC recommendations to result in 20-year net present value savings of over $14 billion—the largest projected savings of any service or Joint Cross-Service Group—and net annual recurring savings of $1.2 billion. Table 17 shows the financial aspect of the Air Force recommendations. Over 80 percent of the projected 20-year savings are based on the first 5 recommendations shown in table 17, which involve closing two and realigning three active bases and have payback periods of 1 year or less. Conversely, the one-time costs of over $1.8 billion to implement all recommendations consist primarily of new military construction. Most of the Air Force's recommendations involve realignment of Air Guard facilities with limited savings. For example, the Air Force is proposing to realign five Air National Guard stations with payback periods greater than 10 years, $12 million in 20-year savings, and one-time costs of about $71 million. According to Air Force officials, these proposals were necessary because the Air Force recommendations are interwoven, depending on realignment actions from other recommendations. For example, 72 realignment and closure recommendations involving active and reserve installations were combined to create 42 candidate recommendations. At least one segment of all but 3 of the 42 Air Force recommendations that were combined affects the Air Force Reserve Command or Air National Guard. Based on our analysis, we noted that the majority of the net annual recurring savings (60 percent) are cost avoidances from military personnel eliminations. However, these eliminations are not expected to result in reductions to active duty, Air Reserve, and Air National Guard end strengths, limiting savings available for other purposes.
None of the recommendations included in the Air Force's report involve consolidation or integration of activities or functions with those of another military service. However, the Air Force believes that its recommendations to realign Pope Air Force Base, North Carolina, and Eielson Air Force Base, Alaska, and to move A-10 aircraft to Moody Air Force Base, Georgia, will provide an opportunity for joint close air support training with Army units stationed at Forts Benning and Stewart, Georgia. Furthermore, the Air Force's recommendations support transformation efforts by optimizing (increasing) squadron size for most fighter and mobility aircraft. According to the Air Force BRAC report, the recommendations maximize warfighting capability by fundamentally reshaping the service, effectively consolidating older weapons systems into fewer but larger squadrons, thus reducing excess infrastructure and improving the operational effectiveness of major weapons systems. We have previously reported that the Air Force could not only reduce infrastructure by increasing the number of aircraft per fighter squadron but also save millions of dollars annually. Time did not permit us to assess the operational impact of each recommendation, particularly where recommendations involve multiple locations. Nonetheless, we offer a number of broad-based observations about the proposed recommendations and selected observations on some individual recommendations. Our analysis of the Air Force recommendations identified some issues that the BRAC Commission may wish to consider, such as the projected savings from military personnel reductions; the impact on the Air National Guard; the impact on other federal agencies; and other issues related to the realignments of Pope Air Force Base, North Carolina; Eielson Air Force Base, Alaska; and Grand Forks Air Force Base, North Dakota, and the closure of Ellsworth Air Force Base, South Dakota.
Our analysis showed that about $732 million, or about 60 percent, of the projected $1.2 billion net annual recurring savings are based on savings from eliminating military personnel positions. Initially, the Air Force counted only military personnel savings that resulted in a decrease in end strength. However, at the direction of OSD, the Air Force included savings for all military personnel positions that were made available through realignment or closure recommendations. The Air Force was unable to provide us with documentation showing to what extent each of these positions will be required to support future missions. According to Air Force officials, they envision that most active slots will be needed for formal training, and all the Air Reserve and Air National Guard personnel will be assigned to stressed career fields and emerging missions. Furthermore, Air Force officials said that positions will also be reviewed during the Quadrennial Defense Review, which could decrease end strength. Either way, claiming such personnel as BRAC savings without reducing end strength does not provide dollar savings that can be reapplied outside personnel accounts and could result in the Air Force having to find other sources of funding for up-front investment costs needed to implement its BRAC recommendations. At least one segment of all but 3 of the 42 Air Force recommendations that were combined affects the Air Force Reserve Command or Air National Guard. The Air Force BRAC report lists 7 closures and 35 Air Reserve and Air National Guard realignments. Overall, 68 Reserve Command (12) and Air National Guard (56) installations were affected by a closure or realignment, or they received aircraft or missions from these actions. According to Air Force officials, the BRAC recommendations have resulted in a reduction of 29 installations with flying missions. Of these installations, over 75 percent, or 22, are from the Air National Guard.
If implemented, the BRAC recommendations will affect over 30 percent of the 70 Air National Guard and 13 Air Reserve installations with flying units. Table 18 shows the reduction of flying units in the BRAC process by active force, Air Force Reserve Command, and the Air National Guard. Based on our analysis of COBRA data, we estimate that more than 1,419 positions in the Air Reserve and 5,700 positions in the Air National Guard will be affected by the proposed recommendations, in terms of military personnel and civilians eliminated and realigned. In recommendations affecting active installations, over 26,000 positions are affected (eliminated and realigned); however, since the Air Force has combined active and reserve component actions in some recommendations, those positions also include additional Air National Guard and Air Reserve personnel. The Air Force also recognizes that in moving Air National Guard and Air Reserve units, part-time military (commonly referred to as drill) personnel will be affected, since they will not be moved. A significant portion of the personnel associated with these units must be replaced at the gaining installation and will require training. Over 30 percent of Air National Guard installations with flying units have been recommended for realignment or retirement, and many of the personnel positions associated with those units do not have missions. Air Force officials said they plan to use these positions for emerging missions in such areas as homeland security, unmanned aerial vehicles, and intelligence, which they expect to further refine as part of the ongoing Quadrennial Defense Review. Initially, many of the Air Force proposals involving the Air National Guard and Air Force Reserve with payback periods ranging from 10 to more than 100 years were stand-alone recommendations.
Those recommendations linked by related operational realignment actions were grouped together to produce recommendations that had significant savings and minimized the longer payback periods. We found that this occurred in the realignment of Lambert-St. Louis International Airport Air Guard Station, Missouri, which originally had a 63-year payback period and resulted in a 20-year net present value cost of $22 million. However, this realignment is now part of the closure of Otis Air National Guard Base, Massachusetts, and the realignment of Atlantic City Air Guard Station, New Jersey, because of related operational realignment actions. The current combined recommendation results in a 20-year net present value savings of $336 million and a 3-year payback period. Figure 12 shows the various BRAC actions in this recommendation. For example, 18 F-15 fighter aircraft are realigned from Otis Air National Guard Base and Lambert-St. Louis Air Guard Station to Atlantic City Air Guard Station. Furthermore, all three Air Guard stations also realign other aircraft to three separate installations: Nellis Air Force Base, Nevada; Burlington Air Guard Station, Vermont; and Jacksonville Air Guard Station, Florida. Finally, various state officials have raised questions about whether the Secretary of Defense is authorized to close or realign Air National Guard bases without the consent of the state governor. DOD's Office of General Counsel has not issued a legal opinion on this issue. According to an Air Force official, as of the date of this report, there have been no legal challenges brought against DOD regarding this issue. The Air Force recommendation to close Otis Air National Guard Base could also affect the U.S. Coast Guard. While Air Force officials recognized that the Coast Guard could be affected if the base was closed, their cost and savings analysis did not consider any costs that could be incurred by the Coast Guard.
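The payback arithmetic behind such combinations is simple: payback is roughly the one-time implementation cost divided by the net annual recurring savings, so folding a long-payback action into a package with large recurring savings shortens the combined payback. A minimal sketch (the dollar figures below are hypothetical illustrations, not the certified COBRA numbers):

```python
import math

def payback_years(one_time_cost, net_annual_recurring_savings):
    """Simple payback period: years for recurring savings to offset
    one-time costs. Returns infinity when there are no net recurring
    savings (a payback of "never," in BRAC terms)."""
    if net_annual_recurring_savings <= 0:
        return math.inf
    return one_time_cost / net_annual_recurring_savings

# Hypothetical stand-alone action: high one-time cost, small savings.
standalone = payback_years(63.0, 1.0)               # 63-year payback
# The same action hypothetically bundled with a high-savings closure.
combined = payback_years(63.0 + 30.0, 1.0 + 30.0)   # 3-year payback
```

This is why presenting interrelated actions as one consolidated recommendation can show a much shorter payback period than the weakest component would show on its own.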
Air Force officials stated that they did not have access to credible cost data during the BRAC process, since cost estimates would have been speculative; the Air Force could not assume the final disposition of the facility or how much, if any, of the facility the Coast Guard would opt to retain. The Coast Guard is in the process of developing potential basing alternatives, including cost impacts, for each affected location. Subsequent to the recommendations being made public, the Coast Guard estimated that it would incur about $17 million in additional annual operating costs to remain at Otis Air National Guard Base. The realignment of Pope Air Force Base involves the transfer of 100 percent of the acres and facilities to the Army to become part of Fort Bragg, with a C-130 active/reserve associate unit remaining to support the Army. Our analysis indicates that there is a significant difference between the savings claimed by the Air Force and the costs projected by the Army regarding base operations support, recapitalization, and sustainment for facilities on Pope Air Force Base. For example, the Air Force claimed total net annual recurring savings of about $36 million for not providing base operations support and recapitalization and sustainment of facilities on Pope Air Force Base. However, the Army estimated total annual recurring costs for these areas to be about $19.5 million. This estimated cost comprises over $13 million for the Army as well as over $5.5 million for the Air Force to remain as a tenant at Fort Bragg. According to Army officials, their estimated costs included taking ownership of all facilities on Pope Air Force Base. The Air Force is also proposing to realign Eielson Air Force Base by moving all active duty units, leaving the Air National Guard units, and hiring contractors to provide base operating support and maintenance and repair of the facilities.
The Air Force projects this action would produce a 20-year net present value savings of $2.8 billion, the most of any Air Force recommendation. Air Force officials said the decision to realign Eielson was made because of the high cost of operating the base and its value as a major training site. The officials noted that the realignment will enable the Air Force to expand an annual training exercise as well as provide opportunities for increased use of the training area by other Air Force units. However, we have some questions about which facilities need to be retained to support the training mission and Air National Guard units. While the Air Force plans to give up the base family housing, it appears that all other base facilities would be retained. For example, Air Force COBRA data indicate that there will be no reduction in the square feet of facilities. The data also indicate that 64 percent of the facilities will be sustained at current funding. The Air Force proposed to close Grand Forks Air Force Base, but this was changed to a realignment by the Infrastructure Executive Council a week before the recommendations were finalized within the department. As a result, the projected savings were significantly reduced, as shown in table 19. The decision to realign rather than close the base did not affect the need to move current aircraft and associated personnel to other bases to achieve the active and reserve mix. According to the Air Force BRAC report, this change to a realignment was based on military judgment to keep a strategic presence in the north central United States and on the fact that Grand Forks Air Force Base ranked high for acquiring a possible unmanned aerial vehicle mission. Even though Grand Forks Air Force Base was retained for strategic reasons, Minot Air Force Base is also located in North Dakota and is not affected by any BRAC recommendation.
Furthermore, Minot Air Force Base scored only 3.4 points less than Grand Forks Air Force Base in the unmanned aerial vehicle mission area. The Air Force is proposing to close Ellsworth Air Force Base, South Dakota, and move its 24 B-1 bomber aircraft to Dyess Air Force Base, Texas, to achieve operational efficiencies at one location. Ellsworth Air Force Base ranked lower in military value than Dyess Air Force Base. In the 1995 BRAC round, the Air Force considered but chose not to close Ellsworth Air Force Base out of concern over placing all B-1 aircraft at a single location. In contrast, one of the Air Force principles that guided the BRAC 2005 process emphasized consolidating or co-locating legacy fleets such as the B-1 aircraft. Air Force officials stated that they no longer had concerns about consolidating the B-1 fleet in one location because it does not have the same operational mission requirements it had 10 years ago. The Education and Training Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing its functions and facilities. The group produced a relatively small number of recommendations (nine) compared with the amount of excess capacity it identified. The group reported that the Infrastructure Steering Group (ISG) and the Infrastructure Executive Council (IEC) had each disapproved two recommendations for various reasons, and four recommendations were rolled into military department recommendations and are discussed in the appendixes related to those departments. The group's recommendations are projected to produce $1.3 billion in net present value savings over a 20-year period. For these recommendations, the length of time required for the savings to offset closure costs varied widely, with two recommendations expected to take just 1 year, two others requiring 13 and 16 years, respectively, and one never having any payback.
We identified issues regarding the projected savings and extended payback periods of some recommendations that may warrant further attention by the BRAC Commission. The DOD Inspector General and service audit agencies, which performed audits of the data used in the process, concluded that the data were sufficiently reliable for use during the BRAC process. The overarching goal of the Education and Training Joint Cross-Service Group was to pursue those educational and training economies and efficiencies that enhance readiness and promote academic synergies for more joint or interservice education. The group was chaired by the Principal Deputy Under Secretary of Defense (Personnel and Readiness), with senior-level members from Air Force Manpower and Reserve Affairs, Marine Corps Training and Education Command, Army and Naval Personnel, and the Joint Staff. This cross-service group was organized into four subgroups, focusing on (1) flight training, (2) specialized skill training, (3) professional development education, and (4) ranges. The group identified five principles that were used to provide focus to its work:

- Advance jointness: declare jointness paramount for specific functions; establish a Joint National Training Capability.
- Achieve synergy: jointly construct, co-locate, or put in close proximity multiple functions that are mutually supportive; increase cross-functional use of training and testing ranges.
- Capitalize on technology: leverage distance learning capability to significantly reduce residential requirements.
- Exploit best practices: establish centers of excellence; outsource to alternative providers.
- Minimize redundancy: identify common functional areas and eliminate duplication, reduce or avoid costs, standardize instruction, and increase efficiency.

The organizational structure and the above guiding principles provided a framework to evaluate the potential of a broad series of transformational options to improve DOD education and training.
Capacity and military value analysis became the starting point for the group's analyses. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of data used in these analyses through selective audits of data gathered at various locations. To form the basis for its analyses, the group developed metrics in each of the functional areas to measure capacity and subsequently collected certified data linked to these metrics from various defense activities whose missions resided within these categories. Each subgroup developed metrics to analyze capacity and to compare the various functions. The major standards used by each subgroup are described below:

- For undergraduate fixed and rotary wing flight training, runway and airspace capacity were the primary metrics used to analyze capacity. Runway capacity for fixed wing aircraft was calculated using Federal Aviation Administration standards to define the number of runway operations that could be conducted during daylight hours for 244 training days, at 12 hours per day. This approach accounted for weather conditions, the number and configuration of runways, the mix of aircraft, and the percentage of touchdown/takeoff operations. Other metrics included the amount of ramp (apron) space and ground-training facilities, such as classrooms and simulators.
- For professional development education, capacity was based on classroom equivalent hours available on a 6-hour training day basis for 244 days a year. Classroom equivalent hours represent the number of 1-hour classes (15 students per class) that can be held in designated facilities, and they are based on available classroom space and instructor office space.
- For specialized skill training, capacity was measured by the student population that can be sustained by the number of available dormitory rooms, dining facilities, and classrooms. This figure was based on an 8-hour training day for 244 days per year.
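The classroom-equivalent-hours metric is straightforward arithmetic. A minimal sketch under the stated parameters (6-hour training day, 244 training days, 15 students per class); the function names and any example inputs are illustrative, not from the report:

```python
def classroom_equivalent_hours(classrooms, hours_per_day=6, training_days=244):
    """Annual number of 1-hour class slots the available classrooms support."""
    return classrooms * hours_per_day * training_days

def annual_student_hours(classrooms, students_per_class=15,
                         hours_per_day=6, training_days=244):
    """Student-hours of instruction those same classrooms can deliver."""
    return classroom_equivalent_hours(
        classrooms, hours_per_day, training_days) * students_per_class

# One classroom yields 6 * 244 = 1,464 classroom-equivalent hours a year.
```

The runway-operations and specialized-skill metrics follow the same pattern of multiplying available facilities by a standard training day and a 244-day training year, with additional adjustments (weather, runway configuration, dormitory and dining capacity) that are not modeled here.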
For ranges, capacity was based on the volume and time available for training and open air testing at ground, air, and sea levels. Each subgroup focused its capacity analysis on the existing capability to perform specific functions. Surge requirements, where applicable, were determined by military judgment. Excess capacity was defined as current capacity less current usage and surge requirements. As seen in table 20, significant excess capacity was identified across all education and training functions except for the ranges subgroup. The percentage of excess capacity includes consideration of surge requirements for all functions except professional development education. According to service officials, in the event of a mobilization, postgraduate educational institutions and facilities would cease to operate and the students would revert to their warfighting duties. The surge requirements for the remaining functions were based on military judgment. For example, the flight and specialized skill training subgroups used a 20 percent surge factor based on a review of current planning documents and military judgment. Likewise, a 25 percent surge factor was used for training ranges and a 10 percent factor for test and evaluation ranges, based on military judgment. According to service officials, a higher surge factor was used for training ranges to meet anticipated training needs for contingencies and mobilization, while test and evaluation activities are more measured and predictable and less likely to generate large surge loads. The group did not analyze the extent to which its proposed recommendations would reduce excess capacity across all education and training functions. Nonetheless, the Air Force estimated that the recommendation to consolidate undergraduate pilot training would reduce excess capacity by 2 percent.
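The excess capacity definition above can be sketched as a one-line calculation. One caveat: the report does not specify the base against which the surge factor was applied, so modeling surge as a fraction of current usage is an assumption of this sketch:

```python
def excess_capacity_pct(capacity, usage, surge_factor=0.0):
    """Excess capacity as a share of total capacity, after reserving a
    surge allowance. ASSUMPTION: surge is modeled as a fraction of
    current usage; the report does not define the surge base.
    capacity must be positive."""
    required = usage * (1 + surge_factor)
    excess = max(capacity - required, 0)
    return 100 * excess / capacity

# E.g., with the 20 percent surge factor used for flight and specialized
# skill training, a function running at half capacity shows roughly
# 40 percent excess rather than 50 percent.
```

Setting `surge_factor=0.0` reproduces the professional development education case, where no surge allowance was reserved because those institutions would close on mobilization.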
At the same time, the excess capacity identified will remain in undergraduate rotary wing training because the Navy could not agree on a scenario to consolidate training. Since there were no recommendations involving training ranges, there was no reduction in excess capacity in the sea and open air testing areas. Each subgroup developed military value scoring plans to analyze and rank each training facility using DOD's four military value selection criteria. The subgroups assigned weighted values to each of the four criteria based on relative importance in assessing the military value of a site under each subgroup and related functions. Table 21 shows the weights for each subgroup. Some key assumptions used by the subgroups in developing scoring plans for military value include the following:

- Installations with larger capacities are of comparatively greater military value for flight training and specialized skill training.
- Managed training areas (particularly airspace) would be extremely hard to reconstitute if lost through the BRAC process.
- Existing service qualitative training requirements must be maintained.
- Unique, one-of-a-kind assets or capabilities should be retained.

Attributes varied by subgroup. For example, the flight training subgroup identified six attributes that included airfield capacity, weather, environmental constraints (air quality, noise abatement, and encroachment), quality of life, managed training areas, and ground training facilities. The professional development education subgroup's attributes included location (access to senior political and military decision makers), educational output, facilities, educational staff, and quality of life. The specialized skill training subgroup's attributes included location, quality of life issues, training facilities/resources (number of classrooms and available housing), support for other missions, training mission/throughput, and environmental constraints/expansion potential.
Finally, the attributes for the ranges subgroup included personnel (experience and education), workload, physical plant (available space and range features), synergy with other ranges, and encroachment. Figure 13 gives an example of how the flight training subgroup's attributes, metrics, and data call questions were linked to the military value criteria. The specialized skill training, professional development education, and ranges subgroups used similar combinations of attributes, metrics, and data call questions to link their analyses back to the military value criteria. The DOD Inspector General and service audit agencies reviewed the data and processes used by each subgroup to develop its recommendations. The overall objective was to evaluate the validity, integrity, and documentation of the data used by the subgroups. The DOD Inspector General and service audit agencies used real-time audit coverage of data collection and analysis processes to ensure that the data used in the Education and Training Joint Cross-Service Group capacity analysis and military value analysis were reliable and certified. Through extensive audits of the data collected by each subgroup from field activities during the process, the Inspector General and service audit agencies notified the group about identified data discrepancies for the purpose of follow-on corrective action. While the process for validating data was quite lengthy and challenging, the Inspector General and the service audit agencies ultimately determined the education and training–related data to be sufficiently reliable for use in the BRAC process once the subgroups corrected the discrepancies. Although corrections were later made, the group did not have accurate and complete capacity and military value data when it started developing potential closure and realignment scenarios; therefore, it had to rely on incomplete data, military judgment, and transformation options in developing initial scenarios for consideration.
However, certified capacity and military value data and the results of COBRA analyses were subsequently used to support the group's final candidate recommendations. The group initially identified 64 scenarios and selected 17 candidate recommendations that were forwarded to the ISG. Four of the recommendations were rejected by the ISG and IEC, and four were integrated into military service recommendations. Ultimately, nine recommendations were approved by the IEC. Generally, scenarios were eliminated because they were alternatives to a recommendation that was selected or because the services objected to the scenario and the group leadership decided to delete it. For example, the professional development education subgroup developed three scenarios to streamline graduate education courses—two to consolidate these functions at existing military facilities and another to obtain graduate-level education at civilian colleges and universities. The group selected the privatization option because of the significant savings; however, it was rejected by the IEC, as discussed later. The professional development education subgroup also developed nine scenarios to realign the senior-level education courses provided by the service war colleges. The group elected to relocate the service war colleges under the National Defense University as the “best choice” option because it would establish a joint strategic center of excellence in the National Capital Region. However, the IEC rejected this option, as discussed later. Finally, the flight training subgroup developed eight alternatives to consolidate undergraduate pilot training. However, the Navy and the Air Force objected to these scenarios because they believed the scenarios would result in too much disruption to the pilot production pipeline. The flight training subgroup was the only subgroup that used an optimization model in its scenario analysis.
The subgroup used it to identify potential locations to consolidate undergraduate fixed wing pilot training functions among 11 installations. According to flight training subgroup officials, the model was not used for rotary wing pilot training because there are only two locations where this training is conducted. Likewise, they noted that it was not used to select sites for the Joint Strike Fighter and Unmanned Aerial Vehicle training because there were limited sites selected for this training. Officials from the other three subgroups stated they did not use the model because of the limited number of facilities or functions reviewed. For example, the professional development education subgroup compared two to six locations within each scenario, so the team manually developed scenarios by maximizing military value and capitalizing on excess capacity. The group estimated that its recommendations will produce $1.3 billion in 20-year savings and $236 million in net annual recurring savings. Table 22 provides a summary of the financial aspects of the group’s recommendations, all of which are realignment actions. Our analysis indicates that $1.3 billion, or over 95 percent, of the group’s projected 20-year savings results from two recommendations that involve only the Army—the combat service support center and the air defense artillery center. The greater part of the projected savings from these two recommendations is based on military personnel reductions. While five of the nine recommendations would foster jointness, they have limited projected savings. For example, the three recommendations that would establish joint centers of excellence for training (culinary, transportation management, and religious studies) are projected to produce only $45.6 million, or less than 4 percent, of the projected 20-year savings.
Furthermore, the recommendation to consolidate the Joint Strike Fighter training has a payback period of never and a 20-year net present value cost of $226 million. Time did not permit us to assess the operational impact of each of the Education and Training Joint Cross-Service Group’s recommendations, particularly where operations proposed for consolidation extend across multiple locations outside of a single geographic area. While available data supporting the recommendations suggest that their implementation should provide for more efficient operations within DOD, the BRAC Commission may wish to consider the basis for the group’s assumptions about military personnel reductions, because these have a significant impact on the recommendations’ annual recurring savings and the potential benefits in relation to the investment costs for recommendations with longer payback periods. Significant portions of the savings in three recommendations—combat service support, air defense, and aviation logistics—are related to military personnel reductions. These recommendations represent $217 million, or 92 percent, of the Education and Training Joint Cross-Service Group’s projected net annual recurring savings. Our analysis indicates that about $174 million of the net annual recurring savings is based on eliminating over 2,000 military positions within the Army. However, the Army does not plan to reduce its end strength by 2,000 in implementing these actions. Because these military personnel would be reassigned rather than removed from the force structure, the projected savings do not represent dollars that can be readily reallocated to other accounts and applied to other priorities such as modernization, an area typically cited as a potential beneficiary of BRAC savings. Our analysis shows that without the savings from the military personnel reductions, the payback for the combat service support recommendation increases to 35 years, and for both the air defense and aviation logistics recommendations there would be no payback.
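The payback arithmetic behind this sensitivity analysis can be sketched in a few lines. The dollar figures below are hypothetical illustrations, not the actual COBRA model inputs, and the ceiling-division approximation ignores net present value discounting.

```python
def payback_years(one_time_cost, annual_recurring_savings):
    """Whole years of recurring savings needed to offset the one-time cost.

    Returns None when annual savings are zero or negative -- the
    recommendation then has "a payback period of never."
    """
    if annual_recurring_savings <= 0:
        return None
    # Ceiling division: payback occurs in the first year cumulative
    # savings meet or exceed the one-time investment.
    return -(-one_time_cost // annual_recurring_savings)

# Hypothetical recommendation (in $ millions): $120M up-front cost and
# $40M/year in savings, $35M of which comes from military positions the
# service does not actually plan to cut from end strength.
with_personnel_savings = payback_years(120, 40)          # 3 years
without_personnel_savings = payback_years(120, 40 - 35)  # 24 years
no_payback = payback_years(120, 40 - 45)                 # None
```

Removing the military personnel component lengthens the payback dramatically, mirroring the report’s finding that the combat service support payback grows to 35 years and that two other recommendations would never pay back.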
The group has proposed one recommendation that has no expected payback period and two others with payback periods that exceed 10 years, far longer than the average payback typically associated with recommendations in the 1995 BRAC round. The recommendation to establish an integrated training center for the Joint Strike Fighter at Eglin Air Force Base, Florida, has no expected payback period, a one-time cost of $199 million ($168 million of which is for military construction), and an annual recurring cost of $3.3 million. This recommendation calls for the realignment of nearly 800 military positions—675 maintenance and 115 pilot—from five military installations to Eglin Air Force Base to train entry-level aviators and maintenance technicians from the Navy, Marine Corps, and Air Force in how to operate and maintain the new Joint Strike Fighter aircraft when it is produced and deployed. According to the chairman of the flight training subgroup, the recommendation does not provide the opportunity to generate savings through the consolidation and alignment of similar personnel because it is a new mission. However, this recommendation would establish a baseline program in a consolidated/joint school with a curriculum that brings a joint perspective to the learning process. The two recommendations with payback periods greater than 10 years affect the Army. For example, the recommendation to relocate the Army Prime Power School from Fort Belvoir, Virginia, to Fort Leonard Wood, Missouri, has a 16-year payback period, a one-time cost of $6 million, and a 20-year net present value savings of less than $1 million. According to the DOD BRAC report, implementation of this recommendation consolidates engineer courses at Fort Leonard Wood, since the common-core phase of engineer courses is already taught there.
Likewise, the recommendation to realign Fort Eustis, Virginia, by relocating the Aviation Logistics School and consolidating it with the Aviation Center and School at Fort Rucker, Alabama, has a 13-year payback period, a one-time cost of $492.3 million, and a 20-year net present value savings of only $77.4 million. According to the DOD BRAC report, consolidating aviation logistics training with the Aviation Center and School fosters consistency, standardization, and training proficiency. The proposed recommendations do little to reduce the significant excess capacity (see table 20) that was identified in undergraduate pilot training for both fixed and rotary wing aircraft. The Education and Training Joint Cross-Service Group identified several scenarios to consolidate undergraduate pilot training that could have enabled some base closures, but the group was unable to get the military services to agree to a joint solution. As a result, the Air Force made a proposal, which DOD adopted, to realign its undergraduate pilot training and consolidate its navigator training with the Navy. However, the approved recommendation did not include rotary wing flight training. According to the chairman of the flight training subgroup, the capacity and military value analysis clearly showed that sufficient space is available at Fort Rucker for the Navy undergraduate rotary wing program to relocate from Naval Air Station Whiting Field, Florida, to Fort Rucker with limited renovation or military construction. However, the chairman noted that his group could not get the Navy to agree to the consolidation because of the Navy’s concerns over how such actions would affect other training schedules, so it was not pursued. The Education and Training Joint Cross-Service Group also developed a proposal to privatize graduate education conducted at the Naval Postgraduate School at Monterey, California, and the Air Force Institute of Technology at Wright-Patterson Air Force Base, Ohio.
The group estimated that the proposal would produce $14 million in 20-year savings, with payback in 13 years, and enable the closure of the Monterey location. However, the IEC removed this recommendation late in the process because it believed that relying on the private sector to fulfill this requirement is too risky. According to the Navy’s Special Assistant for BRAC, the Chief of Naval Operations did not want to lose the synergy and interaction between U.S. and foreign students who attended the postgraduate school, and there were questions over whether all graduate-level courses would be available at civilian institutions. The group also developed a recommendation to consolidate all the military services’ senior war colleges at Fort McNair, Washington, D.C., making them one college of the National Defense University. The group estimated that the proposal would produce $213 million in 20-year savings, with payback in 2 years. All of the military services voiced concerns about this recommendation. The Air Force believed that this recommendation would significantly degrade its Center of Excellence for Professional Military Education, which includes an extensive curriculum for air-centric studies, located at Maxwell Air Force Base, Alabama. The Navy believed that the existing system already has joint educational forums to address executive-level interchange and that it was unclear what would be gained by creating a single senior war college. Finally, the Army opposed the recommendation because it would move senior leaders and their families to the National Capital Region for 10 months. Based on the services’ concerns, the IEC rejected the proposal. However, the group, with the Army’s concurrence, developed a recommendation to move the Army War College from Carlisle Barracks, Pennsylvania, to Fort Leavenworth, Kansas, and consolidate it with the Army Command and General Staff College at a single location.
This proposal would have enabled the closure of Carlisle Barracks in Pennsylvania, with projected 20-year savings of $555 million and a 2-year payback period. However, the IEC rejected this proposal because it wanted to maintain the war college’s proximity to Washington, D.C., which provides access to key national and international policy makers as well as senior military and civilian leaders. Finally, the group developed eight scenarios to promote joint management of the military services’ training ranges. These options included utilizing a joint national urban operations training center and establishing three joint regional range coordination centers. The group ultimately proposed one recommendation to establish three regional joint range coordination centers, which it projected would have a 20-year cost of $138 million and no payback. The ISG rejected this recommendation because it deals with a program action as opposed to a BRAC-related issue. The Headquarters and Support Activities Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing its functions and facilities. The group produced 21 recommendations, each of which would result in multiple closures or realignments of activities, mostly moving activities from leased space onto military bases, intended to consolidate commands, reduce costs, and enhance force protection. Nine other recommendations were referred to other joint cross-service groups or military services for inclusion in their reports. The group’s 21 recommendations are projected to realize $9.5 billion in net present value savings over 20 years. The payback period, or length of time required for the savings to offset closure costs, for the recommendations discussed here varied widely, from immediate to up to 16 years.
We have identified some issues that suggest uncertainty about the level of savings likely to be realized, which the BRAC Commission may want to consider in its analysis of the proposed recommendations. The DOD Inspector General and service audit agencies, which performed audits of the data, concluded that the data were sufficiently reliable for use during the BRAC process, but they did raise issues of concern affecting some recommendations. The Headquarters and Support Activities Joint Cross-Service Group comprised six senior-level principal members, representing each service, the Office of the Secretary of Defense, and the Joint Chiefs of Staff. The group was chaired by the Army Assistant Deputy Chief of Staff for Programs, and principal members included the Commandant, Naval District Washington; the Marine Corps Assistant Deputy Commandant for Manpower and Reserve Affairs; the Administrative Assistant to the Secretary of the Air Force; the Office of the Secretary of Defense Deputy Director for Administration and Management; and the Chief of the Forces Division, Joint Staff. The group analyzed common headquarters-, administration-, and business-related functions across DOD, covering the military services and the defense agencies and activities. The group’s objectives were to eliminate redundancy, duplication, and excess capacity; utilize best business practices; increase effectiveness, efficiency, and interoperability; and reduce costs. The group organized itself into three subgroups: (1) major administrative and headquarters activities, (2) geographic clusters and functional, and (3) mobilization. The major administrative and headquarters activities subgroup’s focus included headquarters activities in leased and DOD-owned space within and outside a 100-mile radius of the Pentagon; combatant, service component, and supporting commands; and reserve and recruiting headquarters.
The geographic clusters and functional subgroup examined installation management within geographic clusters, Defense Finance and Accounting Service headquarters and field offices, correctional facilities, and civilian and military personnel centers. The mobilization subgroup looked at the potential for joint mobilization sites. Capacity analysis identified the current inventory of administrative space, while the military value analysis became the starting point for developing recommendations as it applied the four military value selection criteria. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations. To form the basis for its analyses, the group developed metrics in each of the functional areas to measure capacity and subsequently collected certified data from the military services and defense agencies and activities. In most cases, the group used a single metric, a standard factor of 200 gross square feet per person, in analyzing existing administrative space requirements. The group also used fiscal year 2003 inmate population and current and maximum operational capacities for correctional facilities, and it used fiscal year 2004 personnel processing numbers and peak processing capacities at military installations serving as reserve component mobilization sites to estimate mobilization excess capacity figures. The capacity analysis identified excess capacity across all functions analyzed—even when surge requirements were considered. As shown in table 23, excess capacity ranged from 14 percent to 87 percent across various capacity metrics in functional categories after applying a surge factor to figures for major administrative and headquarters installations and facilities and correctional facilities.
The table provides the amount of the aggregate excess capacity for each of the functional categories; however, the amount of excess capacity varies by individual installation and activity. In calculating excess capacity estimates for each of the eight categories, the group analyzed the data call responses pertaining to current capacity, maximum potential capacity, current usage, and space required for surge, using a standard factor of 200 gross square feet per employee. Subtracting current usage and surge space requirements from maximum potential capacity resulted in the excess capacity estimates. The group used a variety of approaches to consider surge requirements. For example, the major administrative and headquarters activities subgroup determined surge requirements through specific data call questions and then used these requirements in the capacity analysis in terms of requirement and space evaluations. The correctional facilities function within the geographic clusters and functional subgroup considered surge as a function of demand against maximum potential capacity. At the same time, the geographic clusters and functional subgroup determined that military personnel centers had been operating in a surge mode for the past several years and did not require additional surge capacity to be retained. The group did not determine the aggregate impact its recommendations had on reducing excess capacity. The group’s military value analysis was directly linked to the four military value selection criteria, as required by the BRAC legislation. The group assigned military values to 25 civilian personnel offices, 10 military personnel centers, 17 correctional facilities, 26 Defense Finance and Accounting Service sites, 65 installation management sites, 334 major administrative and headquarters installations and activities, and 66 mobilization sites. 
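The excess capacity arithmetic described above can be sketched as follows. The 200 gross square feet per person factor is the one the group used; the installation headcounts are hypothetical illustrations.

```python
GSF_PER_PERSON = 200  # standard factor: 200 gross square feet per person

def required_gsf(headcount):
    """Administrative space requirement implied by a headcount."""
    return headcount * GSF_PER_PERSON

def excess_capacity(max_potential, current_usage, surge):
    """Excess = maximum potential capacity minus current usage and surge space."""
    return max_potential - current_usage - surge

# Hypothetical activity: room for 5,000 people, 3,200 currently assigned,
# and a surge requirement of 300 additional people.
max_gsf = required_gsf(5_000)    # 1,000,000 gsf maximum potential capacity
used_gsf = required_gsf(3_200)   #   640,000 gsf current usage
surge_gsf = required_gsf(300)    #    60,000 gsf surge space requirement

excess_gsf = excess_capacity(max_gsf, used_gsf, surge_gsf)  # 300,000 gsf
excess_share = 100 * excess_gsf / max_gsf                   # 30.0 percent
```

Applied per installation and then aggregated by functional category, this is the calculation behind the 14 to 87 percent excess capacity figures shown in table 23.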
Each functional group developed weighted values for the selection criteria by first asking each group member to assign weights across the military value selection criteria, ranking them from highest to lowest in importance to military value. Once the rankings were determined, the weights generated by each group member were compared and, if they were close, the weights were adopted. If not, the group discussed the differences and reached agreement. Table 24 shows the various weights assigned to each of the four military value selection criteria. The group’s assessment of military value included development of attributes (characteristics, facts, etc.), metrics or measures, and data call questions for each of the three subgroups. Figure 14 demonstrates an example of how attributes, metrics, and data call questions were linked back to the BRAC military value selection criteria for the major administrative and headquarters activities subgroup. The geographic clusters and functional subgroup and the mobilization subgroup used similar approaches of attributes, metrics, and data call questions to link the analysis back to the military value selection criteria. For example, the geographic clusters and functional subgroup and the major administrative and headquarters activities subgroup developed metrics and data call questions addressing force protection issues. Using mostly certified data, the headquarters group examined the capabilities of each function from questions developed to rank activities from most valued to least valued. Exceptions occurred where military value responses were slow in arriving, contained obvious errors, or were incomplete; in these cases, judgment-based data were used. For example, in about 30 cases, activities in leased space did not respond to particular data call questions addressed to the leased space building manager, nor did they identify what entity managed the building.
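The weighted military value scoring described above reduces to a weighted sum of per-criterion scores. The weights and scores below are hypothetical stand-ins; the group's actual weights appear in table 24, and the criterion names here are illustrative labels, not the statutory wording.

```python
# Hypothetical weights for the four military value selection criteria
# (illustrative only; see table 24 for the actual weights).
WEIGHTS = {
    "current_and_future_mission": 0.40,
    "land_facilities_airspace": 0.30,
    "contingency_and_surge": 0.20,
    "cost_and_manpower": 0.10,
}

def military_value(scores):
    """Weighted sum of per-criterion scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Hypothetical activities scored against the criteria.
activities = {
    "Activity A": {"current_and_future_mission": 90, "land_facilities_airspace": 60,
                   "contingency_and_surge": 70, "cost_and_manpower": 50},
    "Activity B": {"current_and_future_mission": 70, "land_facilities_airspace": 85,
                   "contingency_and_surge": 60, "cost_and_manpower": 80},
}

# Rank activities from most valued to least valued, as each subgroup did.
ranking = sorted(activities, key=lambda name: military_value(activities[name]),
                 reverse=True)
```

Note how the weighting matters: Activity A scores highest on the heaviest criterion, yet Activity B's broader strength across the remaining criteria gives it the higher composite value.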
After numerous follow-ups with the activities and meetings with representatives of the Washington Headquarters Service and Army Corps of Engineers—property agents for DOD—the group decided to use judgment-based data derived from functional subject matter experts, in consultation with the military departments and defense agencies. In an October 2004 memorandum to the Infrastructure Steering Group describing military value scoring plan changes, the Headquarters and Support Activities Joint Cross-Service Group concluded that based on an analysis of the effect of the missing, wrong, and incomplete data on proposals, there were some data issues that could affect the generation and comparison of proposals by the group members. However, improvements to the data occurred over time, and as of May 2005, when the military value analysis was completed, the group reported that a vast majority of its data were certified. We were told by a group operations research analyst that 99 percent of the analysis was determined by certified data and less than 1 percent by judgment-based data. The DOD Inspector General and service audit agencies reviewed the data and processes used by each subgroup to develop their recommendations; the military service audit agencies reviewed data inputs from the services, and the Inspector General reviewed data inputs from defense agencies and activities. Their objectives were to validate the data and the adequacy of the supporting documentation. The process for detecting and correcting data errors was quite lengthy and challenging. Through their audits of the data collected from field activities during the process, audit agencies notified the group as data discrepancies were discovered so that follow-on corrective actions could be initiated. The military service audit agencies concluded that the information was sufficiently reliable for its intended purpose. Assessments by the DOD Inspector General’s office of the data it reviewed were more mixed.
In its June 10, 2005, draft report on the Headquarters and Support Activities Joint Cross-Service Group’s data integrity and internal control process for BRAC, the DOD Inspector General’s office concluded that after corrections were made, the group generally used certified data and created an adequate audit trail for its capacity, military value, and cost of base realignment actions. However, the Inspector General’s office raised issues involving estimated one-time savings associated with vacating leased space and consistency in rounding to estimate personnel savings. According to group officials, the Inspector General’s issues were discussed with group leadership, which decided in deliberative session that the approaches taken by the group were the most fair and accurate available and should be retained. Our analysis indicates that the two issues identified by the Inspector General would reduce projected savings. Our analysis shows that if the one-time savings associated with antiterrorism and force protection are excluded, the 20-year net present value savings would be reduced by $268.4 million, and the payback periods would be extended by 1 year for 7 of the 15 affected recommendations and by 3 years for one recommendation. Also, for the two recommendations identified by the Inspector General as using abnormal rounding techniques to estimate personnel reductions, the projected 20-year net present value savings in one case would be reduced from $13.5 million to a $749,000 cost, and for the other recommendation, the 20-year net present value savings drops from approximately $4.9 million to $2.6 million. The Headquarters and Support Activities Joint Cross-Service Group developed proposals without receiving all the data it had requested from numerous activities. As such, the group relied on transformational goals and military judgment to develop its initial proposals.
The group also used certified data, which the DOD Inspector General audited for accuracy, to support or reject its proposals. The group used the optimization model on a limited basis for a few functional areas because the potential for realignment in those functional areas was generally slight. The following transformation options helped guide the group in developing initial proposals:
- Consolidate management at installations with shared boundaries and in geographic clusters.
- Consolidate or co-locate civilian and military personnel offices.
- Consolidate Defense Finance and Accounting Service central and field offices.
- Establish and consolidate mobilization sites and establish joint deployment processing sites.
- Justify locations for headquarters, commands, and activities within 100 miles of the Pentagon.
- Eliminate leased space.
- Consolidate multi-location headquarters at single locations, and eliminate stand-alone headquarters.
- Consolidate corrections facilities.
- Co-locate reserve and active component recruiting headquarters, and eliminate reserve force management organizations.
- Regionalize common headquarters, administrative, and business-related support activities.
The group initially developed 117 proposals, based on these transformational options and military judgment, including alternative proposals requested by the Infrastructure Steering Group (ISG). The group settled on 50 recommendations that were initially forwarded to the ISG. Seventeen of them were subsequently consolidated with other recommendations; two were rejected by the ISG and one by the Infrastructure Executive Council. Also, nine recommendations were transferred to other cross-service groups or military departments for inclusion in their reports. That left the 21 recommendations that the group addressed in its report and that accordingly are addressed in this appendix.
The Headquarters and Support Activities Joint Cross-Service Group projects that its 21 recommendations will produce a 20-year net present value savings of $9.5 billion; net annual recurring savings of about $914 million; and payback, or the length of time required for the savings to offset closure costs, that varies widely from immediate to up to 16 years. Table 25 provides a summary of the financial aspects of the group’s recommendations. In total, the group estimates that its recommendations will require a total investment of $2.5 billion, primarily for new military construction and moving personnel from leased space onto military bases, and will ultimately result in net annual recurring savings of $914.2 million. Our analysis indicates that about 92 percent of the annual recurring savings results from reductions in military and civilian employment levels (about $270 million and $267 million, respectively) and the elimination of future lease payments for administrative office space ($300 million). Eighteen of the group’s recommendations are expected to realize savings within 10 years of completing the BRAC realignment and closure actions, while three have a payback period greater than 10 years. Time did not permit us to assess the operational impact of each recommendation, particularly where operations are proposed for consolidation across multiple locations outside a single geographic area. However, we offer a number of broad-based observations about the proposed recommendations.
While available data supporting the recommendations suggest that their implementation should provide for more efficient operations within DOD, the BRAC Commission may wish to consider:
- the basis of the group’s assumptions for personnel reductions, because they have a significant impact on the recommendations’ savings;
- the assumptions regarding vacating leased facilities, because including antiterrorism and force protection savings also affects the recommendations’ savings;
- challenges to implementing joint basing;
- cases where realignment actions with long payback periods were combined with actions with shorter payback periods;
- stand-alone actions where the payback period exceeds 10 years; and
- proposals eliminated prior to release of the final recommendations.
Approximately $537 million, or about 59 percent, of the group’s projected net annual recurring savings is based on eliminating military and civilian personnel positions as a result of the BRAC actions. The process used raises questions about the projected savings. The group initially used generic savings factors to estimate the number of personnel positions that could be eliminated when organizations were co-located or consolidated. These factors were developed on the basis of comments from subject matter experts and research of various databases available through the Pentagon library or the Internet. The group found that consolidating organizations produced personnel reductions of 14 percent to 30 percent, and co-locating them produced reductions of 7 percent to 15 percent. The group adopted these generic savings factors because the information it collected on the number of personnel performing common support functions within the affected organizations could not be used and because it believed it did not have sufficient time to perform more precise manpower studies.
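The generic savings factors described above can be expressed as a small sketch. The factor ranges are the ones cited in the report; the 1,000-position headquarters is a hypothetical example, not an actual BRAC organization.

```python
# Generic personnel savings factor ranges the group used as starting
# points for negotiation (low, high), expressed as fractions.
CONSOLIDATION_FACTORS = (0.14, 0.30)  # organizations consolidated at one location
COLOCATION_FACTORS = (0.07, 0.15)     # organizations co-located

def positions_eliminated(positions, factor):
    """Number of positions a given savings factor would eliminate."""
    return round(positions * factor)

# Hypothetical consolidation of a 1,000-position headquarters: the range
# of position eliminations implied by the generic factors.
consolidation_low = positions_eliminated(1_000, CONSOLIDATION_FACTORS[0])   # 140
consolidation_high = positions_eliminated(1_000, CONSOLIDATION_FACTORS[1])  # 300

# Co-location of the same headquarters implies a smaller range.
colocation_low = positions_eliminated(1_000, COLOCATION_FACTORS[0])    # 70
colocation_high = positions_eliminated(1_000, COLOCATION_FACTORS[1])   # 150
```

In practice these figures were only opening positions: the final percentages were negotiated with the military departments and, per the report, ranged from zero to about 42 percent across the recommendations.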
The group used these savings factors consistently as starting points in negotiating the number of personnel reductions with the military departments and defense agencies and activities. In most cases the negotiated estimates were accepted, but in some cases the group imposed a personnel reduction percentage when negotiations failed. For example, in analyzing the costs and savings associated with relocating the Army Materiel Command from temporary leased space on Fort Belvoir, Virginia, to Redstone Arsenal, Alabama, the group leadership decided to impose a 7 percent personnel elimination based on expected economies of scale from co-locating the command with one of its major subordinate activities. Our analysis showed that the percentage factors used to estimate personnel reductions across all recommendations ranged from zero to about 42 percent. A separate area of concern involves savings reported for military personnel. Our analysis indicates that the group’s recommendations propose to eliminate 2,479 military positions, which would result in about $270 million in net annual recurring savings. However, service officials indicate that they do not plan to reduce their end strength based on these proposed eliminations but rather to reallocate these positions elsewhere within the force structure. Since these military personnel will be assigned elsewhere rather than removed from the force structure, the projected savings do not represent dollars that can be readily allocated outside the personnel accounts to other purposes. Fifteen of the group’s recommendations include one-time savings of over $300 million from moving activities from leased space onto military installations. Taken together, these recommendations, if approved, would reduce total DOD leased space within the National Capital Region from 8.3 million square feet to about 1.7 million square feet, or by 80 percent.
While our prior work generally supports the premise that leased property is more expensive than government-owned property, the recommendations related to vacating leased space also raise questions about limitations in the projected savings and the impact on local communities. The one-time savings represent costs expected to be avoided in the future by moving from leased facilities onto government-owned and protected installations rather than upgrading existing leased space to meet DOD’s antiterrorism and force protection standards. According to a DOD official, after the June 1996 Khobar Towers bombing in Dhahran, Saudi Arabia, the department put together a task force, composed mostly of engineers, to develop minimum force protection standards for all DOD-occupied locations. The official also stated that application of the standards in BRAC was not the result of a threat or vulnerability assessment of the affected facilities. The Pentagon Force Protection Agency will shortly begin a 10-month antiterrorism and force protection vulnerability assessment of about 60 DOD-occupied leased buildings in the National Capital Region. This assessment will provide DOD with information to estimate the costs and feasibility of upgrading leased facilities to the antiterrorism and force protection standards. The force protection standards for leased buildings apply only where DOD personnel occupy at least 25 percent of the net interior usable area; only to the portion of the building occupied by DOD personnel; and to all new leases executed on or after October 1, 2005, and to leases renewed or extended on or after October 1, 2009. Initially, the group prepared military value data call questions that could determine whether a leased location met the force protection requirements. However, group officials stated that most of these questions were discarded because of inconsistencies in how they were answered, except for the question on the percentage of DOD personnel occupying buildings.
The group applied the cost avoidance factor consistently to all leased locations but did not collect data that would indicate whether existing leases met the standards, which could possibly result in application of the factor at locations meeting the force protection requirements. For example, the group applied over $2 million in one-time force protection cost avoidance to relocate a Navy human resources service center from the Stennis Space Center, Mississippi, to the Naval Support Activity, Pennsylvania, even though the Stennis Space Center may be as secure as any military installation. If these one-time savings, as shown in the final recommendations forwarded to the BRAC Commission, are not considered in the cost and savings analysis, our analysis shows that the projected 20-year net present value savings decrease by 3 percent ($268.4 million), and the payback period increases by 1 year for 7 of 15 recommendations and by 3 years for one recommendation, as shown in table 26. After the final recommendations were released to the BRAC Commission, the group found errors in some recommendations, affecting one-time estimated savings and other costs and savings, which were still in the process of being corrected at the time of this report. Furthermore, four of the Headquarters and Support Activities Joint Cross-Service Group’s recommendations involve moving personnel from leased space to Fort Belvoir, Virginia, mostly at the engineering proving ground, increasing Fort Belvoir’s population by about 10,700. The recommendations include military construction projects to build facilities for these personnel on Fort Belvoir. In addition, the recommendations include a $55 million Army estimate to improve roads and other infrastructure in the area surrounding the fort.
However, it is uncertain at this time whether this amount will be sufficient to fully address the impact on the surrounding community's infrastructure, or whether local governments will seek federal assistance to help communities reduce the impact. Either outcome would increase one-time costs and offset short-term savings from the recommendations.

Implementation Challenges

While the proposal to create joint bases by consolidating common installation management functions is projected to create greater efficiencies, our prior work suggests that implementing these actions may prove challenging. The joint-basing recommendation makes one service responsible for various installation management support functions at bases that share a common boundary or are in proximity to one another. For example, the Army would be the executive agent for Fort Lewis, Washington, and McChord Air Force Base, Washington, combined as Joint Base Lewis-McChord. However, as evident from our recent visit to both installations and discussions with base officials, concerns over obstacles, such as seeking efficiencies at the expense of the mission, could jeopardize a smooth and successful implementation of the recommendation. Further, Air Force officials stated that most military personnel at McChord are mission critical and deployable, making it more difficult to identify possible Air Force military personnel reductions. The group projects 20-year net present value savings of about $2.3 billion, with net annual recurring savings of about $184 million. More than 90 percent of the reported recurring savings represent military (54 percent) and civilian (37 percent) personnel reductions. The group applied personnel reductions ranging from 1 to 10 percent at each of the 12 locations included in the joint basing recommendation.
The actual percentage used for each location was negotiated between the group and the military departments based on the size of base populations and the kind of services provided. In our June 2005 report, we noted that DOD's and the military services' ability to forecast base operations support requirements and funding needs has been hindered by the lack of a common terminology for defining base support functions, as well as by the lack of a mature analytic process for developing base support requirements. We also reported challenges in maintaining adequate funding to meet base operating support requirements and facility upkeep. We concluded that until such problems are resolved, DOD will not have in place the management and oversight framework needed for identifying total base support requirements and ensuring adequate delivery of services, particularly in a joint environment. In its comments on a draft of our June report, DOD indicated that it expects to release a new facilities operation model by December 1, 2005, and use it to develop the fiscal year 2008 program and budget. DOD stated that it is also conducting a cross-department initiative to develop definitions for the common delivery of installation services and expects to complete this effort by December 2005. However, regarding modeling efforts, a Senior Joint Basing Group official expressed doubt during our review about whether there would be a single funding model, because base operating support, as it currently exists, has too many diverse activities to model. He indicated that a suite of tools will more likely evolve over time. The headquarters group combined some recommendations that had payback periods exceeding 10 years, far longer than typical payback periods in the 1995 BRAC round, with other proposals having shorter returns on investment.
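The masking effect of packaging proposals with very different payback periods can be illustrated with a minimal sketch. The dollar figures below are hypothetical and the arithmetic is simplified: COBRA's actual calculation phases costs and savings over a multiyear implementation period, but the blending effect is the same in spirit.

```python
# Hypothetical sketch of how packaging two proposals can mask a long payback.
# Dollar figures are illustrative, not COBRA outputs.

def payback_years(one_time_cost, annual_savings):
    """Years of recurring savings needed to recoup the up-front cost."""
    if annual_savings <= 0:
        return float("inf")
    return one_time_cost / annual_savings

# Stand-alone proposal A: costly move, very long payback ($M).
cost_a, savings_a = 200.0, 2.0
# Stand-alone proposal B: cheap move, quick payback ($M).
cost_b, savings_b = 30.0, 10.0

# The combined package blends the two into one apparent payback period.
combined = payback_years(cost_a + cost_b, savings_a + savings_b)

print(round(payback_years(cost_a, savings_a)))  # 100 years alone
print(round(payback_years(cost_b, savings_b)))  # 3 years alone
print(round(combined))                          # 19 years packaged together
```

A similar blending underlies the masking we observed in the consolidated packages: a very long payback on one component disappears into a far shorter blended figure.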
In total, 8 of the 21 final recommendations were actually packages that consolidated two or more recommendations approved by the joint cross-service group as stand-alone candidate recommendations. We found that in 7 instances, the more-than-10-year payback periods of initially stand-alone proposals tended to be masked after they were combined in such packages. For example, the group developed a proposal to move the Army Materiel Command from Fort Belvoir, Virginia, to Redstone Arsenal, Alabama, which showed a 20-year net present value cost and a 100-year payback period. Had the approximately $71 million in avoided costs from not having to construct a permanent facility for the headquarters at Fort Belvoir been included in the recommendation, the payback period would have been 32 years. Concurrently, the group developed a separate proposal to relocate various Army offices from leased and government-owned office space mostly onto Fort Sam Houston, Texas, which would result in a 20-year net present value savings of about $277.4 million and a 3-year payback period. The group decided to combine these two stand-alone proposals so that all Army headquarters-related activities were addressed in one recommendation, with an estimated 20-year net present value savings of about $123 million and a 10-year payback. The group proposed three recommendations that have an estimated payback period exceeding 10 years and one-time implementation costs that greatly exceed the expected 20-year net present value savings. The costs, savings, and expected benefits of these recommendations are described below: The recommendation to co-locate military department and DOD security clearance adjudication and appeals activities at Fort Meade, Maryland, has an estimated payback of 13 years, one-time costs exceeding $67 million, and 20-year net present value savings of only $11.3 million.
According to the DOD final BRAC report, implementation of this recommendation would co-locate adjudication activities, reduce lease costs, and enhance security. The recommendation to consolidate the Defense Commissary Agency Eastern and Midwestern regions and a leased site in Hopewell, Virginia, at Fort Lee, Virginia, has an estimated 14-year payback period, one-time costs exceeding $47 million, and 20-year net present value savings of only $4.9 million. According to the DOD BRAC report, implementation of this recommendation would consolidate headquarters operations at single locations, enhance security, and reduce lease costs. The recommendation to establish joint regional correctional facilities has an estimated 16-year payback period, one-time costs of almost $179 million, and 20-year net present value savings of only $2.3 million. For example, the recommendation would establish the Midwest Joint Regional Correctional Facility by relocating correctional functions currently located at Lackland Air Force Base, Texas; Fort Knox, Kentucky; and Fort Sill, Oklahoma, to Fort Leavenworth, Kansas. According to the DOD BRAC report, implementation of this recommendation would improve jointness, centralize corrections training, and eliminate or significantly reduce old, inefficient facilities. Three recommendations were initially approved by the group; two were later rejected by the ISG and one by the IEC. The ISG rejected the recommendation to relocate U.S. Southern Command, Miami, Florida, from its leased space to state-owned leased space also in Miami; although no explanation was provided, group officials stated the ISG rejected it because the costs associated with the relocation were too high. The ISG also rejected the relocation of U.S. Army Pacific Headquarters from Fort Shafter, Hawaii, to Pearl Harbor, Hawaii, because of concerns raised by the Pacific Command Combatant Commander and the Army regarding future requirements of U.S. Army Pacific Headquarters.
The IEC rejected the recommendation to co-locate military department and DOD medical activities at the National Medical Center, Bethesda, Maryland, because of cost and long payback concerns. In other cases, Headquarters and Support Activities Joint Cross-Service Group members considered proposals that could have fostered jointly operated support activities, but these were later dropped on the basis of cost considerations and perceived operational risks. For example, the group considered co-locating all military personnel offices at one location. In analyzing this proposal, however, the group determined that implementing it would be very costly, and it also cited concerns about the uncertain availability of skilled employees at a single location to operate the joint facility. The group therefore concluded that it was better to co-locate or consolidate personnel centers within the individual military departments. Similarly, for civilian personnel centers, the group developed a proposal to consolidate 25 offices currently operated by the military departments and defense agencies into 10 DOD “joint” offices. However, the proposal was dropped after one military department raised concerns that the risks of implementing joint personnel offices concurrently with processing paperwork supporting other BRAC-related personnel moves and implementing a new standardized personnel data system were too high. Consequently, the IEC directed the group to revise its proposal. The group revised its proposal to consolidate the 25 current offices into 12 offices: 4 to be operated by the Army, 4 by the Navy, 1 by the Air Force, and 3 by a single agency providing support to the defense agencies.
While DOD did not recommend creating joint military personnel offices or joint civilian personnel offices, it is important to note that each of the initial proposals included justifications citing ongoing efforts within the department to establish standardized personnel processes and systems. The recommendation to co-locate components of the U.S. Transportation Command does not include the Navy Military Sealift Command, one of the command's service component organizations. The group developed a proposal to move the Army and Navy components of the Transportation Command to Scott Air Force Base, Illinois. While the Army agreed to the proposal, the Navy did not believe that the group should propose moving the Military Sealift Command because the Navy considered it an operational headquarters rather than an administrative function under the purview of the Headquarters and Support Activities Joint Cross-Service Group. The ISG agreed with the Navy and deleted the Military Sealift Command from the recommendation, reducing projected 20-year net present value savings from $1.30 billion to $1.28 billion.

The Industrial Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for completing its review. The group initially produced 34 candidate recommendations; 3 were disapproved by the Infrastructure Executive Council (IEC), and several were subsequently integrated into larger military service recommendations. As a result, the group had 17 remaining recommendations, which are addressed in this appendix. These 17 recommendations represent a mixture of closures and realignments, with the realignments often encompassing the consolidation of various types of industrial workloads at fewer locations. Although some of the recommendations may be considered transformational, limited progress was made in recommending major actions to foster greater interservicing among the services.
Industrial group officials said this was due to economic and military value considerations as well as the downsizing of maintenance facilities in prior BRAC rounds. Altogether, DOD projects these recommendations to produce about $7.6 billion in net present value savings over a 20-year period; nearly all are projected to have short payback periods (the time required to recoup up-front investment costs), with expected savings offsetting expected implementation costs either immediately or within a few years. One recommendation has a payback period exceeding 10 years. However, uncertainty exists about the precision of the savings estimates because many are based on efficiency gains that have yet to be validated, among other factors. Further scrutiny by the BRAC Commission of this and other recommendations may be warranted to assess the impact of reductions against future force structure needs or capacity constraints. The DOD Inspector General and the military service audit agencies, which performed audits of the data, concluded that the data were sufficiently reliable for use during the BRAC process. The industrial group was composed of senior-level principal members from the installations directorates of each service, the Defense Logistics Agency (DLA), and the Joint Chiefs of Staff and was supported by staff from these organizations. The Under Secretary of Defense (Acquisition, Technology and Logistics) chaired the group, which forwarded its proposed recommendations to the Infrastructure Steering Group (ISG) for review and approval. The group organized its BRAC analyses around three subgroups: (1) maintenance, (2) ship overhaul and repair, and (3) munitions and armaments. All of the subgroups similarly focused their work on identifying opportunities for reducing excess capacity. The industrial group's analytical process included a review of nine distinct industrial areas across each of the military services.
They included: (1) ground vehicles, aircraft, and other depot maintenance; (2) ground vehicles, aircraft, and other intermediate maintenance; (3) ship depot maintenance; (4) ship intermediate maintenance; (5) munitions production; (6) munitions storage; (7) munitions demilitarization; (8) munitions maintenance; and (9) armaments production. Under the BRAC process outlined by OSD, capacity analysis and military value analysis provided the starting point for the cross-service group's work. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of the data used in these analyses through extensive audits of data gathered at various locations. To form the basis for its analysis, the group developed metrics in each of the nine industrial areas to measure current capacity and subsequently collected certified data linked to these metrics from various defense activities across the country whose missions resided within these categories. While the predominant metric was direct labor hours, used exclusively by the maintenance and ship overhaul and repair subgroups and by the munitions and armaments subgroup in some instances, the munitions and armaments subgroup also used other metrics for measuring capacity. For example, for measuring munitions production, the subgroup used pounds and units, and for measuring munitions storage, the subgroup used square feet and short tons. The disparate nature of the functions analyzed by the group did not lend itself to a “one size fits all” analytical approach, and each of the three subgroups conducted its own capacity analysis. The munitions and armaments and ship overhaul and repair subgroups defined excess capacity as the difference between current capacity and current usage. For depot maintenance, the maintenance subgroup defined excess capacity as the difference between current capacity and the larger of current usage or core requirements.
Core requirements are those workload needs that must be performed in organic rather than contractor facilities. For intermediate maintenance, the maintenance subgroup defined excess capacity as the difference between current capacity and current usage. The cross-service group's capacity analysis showed that excess capacity existed within many of the functional areas it examined, especially the munitions and armaments functions. As shown in table 27, estimates of excess capacity ranged from 7 percent to 91 percent among individual functional categories. The three subgroups addressed surge requirements in their capacity analyses to varying degrees. For the maintenance subgroup, the excess percentages represent excess capacity above surge requirements, because the collected core requirements data included surge requirements and the excess capacity calculations were based on the larger of current usage or core requirements. For the munitions and armaments subgroup, the excess capacity percentages represent the capacity available to meet surge requirements. According to munitions and armaments subgroup officials, there are no overarching, quantified, DOD-wide surge requirements for munitions and armaments. Instead, surge becomes a function of how much excess capacity is available and can be addressed through multiple work shifts. Conversely, the percentages for ship overhaul and repair do not address surge requirements. According to ship overhaul and repair subgroup officials, because the Navy's surge requirements are dictated by emergent deployments or ship repair requirements, and because shipyards are normally workloaded to their workforce capacity, surge capability is limited to the use of overtime and delaying previously planned work. As table 27 shows, the data indicate that there was not much excess capacity in the ground vehicles, aircraft, and other depot maintenance area.
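The excess-capacity definitions used by the subgroups can be expressed as a short sketch. The direct-labor-hour figures below are hypothetical; only the two formulas reflect the definitions described above.

```python
# Sketch of the two excess-capacity definitions used by the subgroups.
# Capacity and usage figures are hypothetical direct labor hours.

def excess_pct_simple(capacity, usage):
    """Munitions/armaments and ship subgroups (and intermediate maintenance):
    excess is current capacity minus current usage."""
    return 100.0 * (capacity - usage) / capacity

def excess_pct_depot(capacity, usage, core_requirement):
    """Depot maintenance: excess is capacity minus the larger of current usage
    or core requirements; core data included surge, so this is excess above surge."""
    return 100.0 * (capacity - max(usage, core_requirement)) / capacity

print(excess_pct_simple(1_000_000, 930_000))          # 7.0 percent excess
print(excess_pct_depot(1_000_000, 700_000, 850_000))  # 15.0 percent excess
```

Note that under the depot definition, low current usage alone does not create excess capacity if core requirements remain high.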
Therefore, in the ground vehicles, aircraft, and other depot maintenance area, the group focused much of its attention on minimizing sites by redistributing and consolidating workload. On the other hand, while many of the group's ship overhaul and repair and munitions and armaments recommendations were directed toward reducing excess capacity, group officials did not calculate the percentage reduction in excess capacity made possible by implementing the recommendations. The military value of activities played a predominant role in formulating the group's recommendations. In completing its military value assessment, the industrial group assessed each activity across the four established military value criteria to more fully evaluate the potential for realignment and closure actions. As was the case with the capacity analysis, the disparate nature of the industrial areas analyzed by the group precluded a uniform analytical approach among the three subgroups. As a result, the subgroups differed in the methodology they used to develop relative weights for the military value criteria for each of their functions. Table 28 shows the weights assigned to each of the four military value criteria by the subgroups for their functions. The group's military value analysis also included the development of attributes, metrics, and data call questions for each of the nine functional areas represented in the categories above, each linked back to the four military value criteria. Figure 15 provides examples of these attributes, metrics, and data call questions and shows how each was linked back to the criteria. Because of the disparate nature of the industrial areas analyzed by the industrial group, the subgroups also differed in the way they assigned military value scores to their respective activities. For instance, the maintenance subgroup determined military value by commodity only and did not develop an overall military value score for activities in the depot and intermediate maintenance functions.
Because military value scores were determined only by commodity, activities were ranked only within their respective commodities. For example, Rock Island Arsenal, Illinois, received a military value score for its combat vehicle maintenance workload and was ranked accordingly against all the other depot-level activities that perform combat vehicle maintenance. In addition, because most activities involve multiple commodities, such as aircraft engines and electronics, many activities received multiple military value scores. In the case of Rock Island Arsenal, it received military value scores not only for its combat vehicle maintenance work but also for its tactical vehicle maintenance work and its other equipment maintenance work. These military value scores were then used in an optimization model to determine the best locations to consolidate various like commodities among the three services. In all cases, the subgroup examined redistributing workload to activities with a higher military value score for that commodity. According to the maintenance subgroup, determining military value by commodity allowed for more opportunities to create interservicing and consolidation of workload among the services. The maintenance subgroup's process focused on military value and available capacity without regard to service, although the final recommendations were tempered by financial and operational considerations. However, as we discuss later, our analysis shows that while some interservicing may be achieved, most of the group's recommendations remained relatively service-centric. The ship overhaul and repair and munitions and armaments subgroups, on the other hand, developed overall military value scores for activities within their respective functions and ranked their activities within those functions accordingly.
For example, all shipyards were ranked together under the ship depot maintenance function, and all industrial activities that perform munitions production were ranked together under the munitions production function. The DOD Inspector General and the service audit agencies played important roles in ensuring that the data used in the industrial group's analyses were certified and properly supported. Through extensive audits of the data collected from field activities during the process, these audit agencies notified the group of any identified data discrepancies for follow-on corrective action. While the process for detecting and correcting data errors was quite lengthy, the audit agencies ultimately deemed the industrial data sufficiently accurate for use in the BRAC process. The industrial group did not have complete capacity or military value data when it initiated the development of potential closure and realignment scenarios. Therefore, it had to rely on incomplete data as well as military judgment to determine which industrial areas had excess capacity and which could receive new workloads. As time progressed, however, the group obtained the data needed to inform and support its scenarios, and the DOD Inspector General validated the data. The maintenance and munitions and armaments subgroups used an optimization model to help facilitate scenario development, while the ship overhaul and repair subgroup, which had similar data problems, also relied on incomplete data as well as military judgment to help formulate scenarios for consideration. This subgroup did not rely on the optimization model as extensively as the other subgroups because of the relatively small number of activities analyzed.
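A simplified sketch of weighted military value scoring and by-commodity ranking follows. The criteria weights, depot names, and scores are all hypothetical (the actual weights varied by subgroup and appear in table 28); the sketch only illustrates the mechanics of ranking activities within a commodity.

```python
# Hypothetical sketch of weighted military value scoring and ranking within
# a single commodity. Weights and 0-100 scores are illustrative only.

criteria_weights = {  # the four BRAC military value criteria (weights assumed)
    "mission_capability": 0.40,
    "facility_condition": 0.30,
    "contingency_capacity": 0.20,
    "cost_of_operations": 0.10,
}

def military_value(scores):
    """Weighted sum of an activity's scores across the four criteria."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

# Hypothetical combat vehicle maintenance scores for two depot-level activities.
depots = {
    "Depot A": {"mission_capability": 90, "facility_condition": 70,
                "contingency_capacity": 80, "cost_of_operations": 60},
    "Depot B": {"mission_capability": 75, "facility_condition": 85,
                "contingency_capacity": 70, "cost_of_operations": 90},
}

# Rank activities within the commodity; workload would generally be
# consolidated at the higher-ranked site, tempered by military judgment.
ranking = sorted(depots, key=lambda d: military_value(depots[d]), reverse=True)
print(ranking)  # ['Depot A', 'Depot B']
```

In the actual process an activity received one such score per commodity, so a multi-commodity site like Rock Island Arsenal would appear in several separate rankings.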
Collectively, the subgroups initially developed 120 proposals and scenarios. With the maturation of the data, completion of Cost of Base Realignment Actions (COBRA) analyses, and elimination of alternative scenarios, the industrial group settled on 34 recommendations that were forwarded to the ISG, all but 3 of which were ultimately approved by the IEC. Despite having incomplete data, the maintenance subgroup began its scenario development by generating several ideas as potential scenarios. In testing the feasibility of these ideas, the maintenance subgroup found it useful to use an optimization model because the subgroup was dealing with a universe of 57 commodities across 28 depot-level activities and 11 commodities across more than 200 intermediate-level activities, which made it extremely difficult to determine where workload could be consolidated or redistributed. For realignment considerations, officials told us the preferred method was to consolidate workload at the highest military value sites that remained open in the optimization results, but military judgment also played a role in finalizing the sites. In some instances, military judgment was used to override the results of the optimization model. For example, the subgroup chose not to realign the rotary aircraft workload from the Naval Air Depot at Cherry Point, North Carolina, to the Corpus Christi Army Depot, Texas, even though the optimization model proposed that realignment, because of concerns about creating a single point of failure or vulnerability for DOD's rotary aircraft workload. One issue the maintenance subgroup dealt with during its scenario development was that the current DOD capacity baseline for maintenance work was based on a single-shift, 40-hour-per-week workload.
According to the subgroup, when using the optimization model, it found that existing capacity measured on this basis would constrain its ability to identify options for achieving more economical operations. Further, recognizing that such a baseline was inconsistent with industry practice, the subgroup modified the capacity baseline to one and a half shifts with a 60-hour weekly workload, thereby increasing available capacity at its industrial activities and the potential for consolidating work at fewer locations. As we reported after the 1995 BRAC round, a capacity baseline of a single-shift, 40-hour-per-week workload is a conservative projection of capacity because the private sector frequently uses a capacity baseline of two or two and a half shifts. Based on more current information about private sector capacity utilization, we still believe that a single shift is a conservative projection of capacity, since many firms today work multiple shifts. Like the maintenance subgroup, the munitions and armaments subgroup also used the optimization model to test the feasibility of its ideas and to facilitate its scenario development and analysis. Its emphasis was on increasing multifunctional activities, that is, activities that have the capability to perform more than one munitions and armaments function. During scenario development, the subgroup's preference was to eliminate excess capacity through closure rather than realignment. The ship overhaul and repair subgroup, on the other hand, used mostly capacity and military value data in combination with military judgment in developing and analyzing its scenarios. Because of the small number of activities analyzed (22 depot- and intermediate-level ship overhaul and repair activities), the subgroup did not have to rely on the optimization model to determine where workload could potentially be consolidated or redistributed.
While it did use the model, primarily to check the feasibility and rationale of scenarios, military judgment was required because most of the subgroup's scenarios were influenced by Navy force structure changes and planned changes in the homeports of ships. According to industrial group officials, expected out-year changes in Navy force structure, specifically expected reductions in the number of ships, allowed them to recommend the closure of a shipyard. Expected changes in the homeports of ships also influenced the subgroup's intermediate-level scenarios because the Navy's intermediate-level maintenance is generally performed where ships are homeported. The industrial group's 17 recommendations are estimated to produce about $7.6 billion in 20-year net present value savings. Table 29 provides a summary of the financial aspects of the group's recommendations. Most of the projected savings from the group's recommendations are concentrated in relatively few recommendations, and nearly all have an immediate or moderately short payback period in which projected savings are anticipated to offset implementation costs either immediately or within a few years. The recommendation to establish Navy fleet readiness centers is by far the largest in terms of overall savings, accounting for about $341 million, or about 56 percent, of the total estimated net annual recurring savings. As discussed later, only one recommendation, the realignment of the Watervliet Arsenal, New York, has a lengthy payback period exceeding 10 years. Of the industrial joint cross-service group's 17 recommendations, 8 are closures and 9 are realignments. However, contained within these recommendations are 40 smaller, individual realignment actions, and several recommendations involve installations with fewer than 300 personnel that could be, but were not required to be, proposed under BRAC.
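The relationship among one-time costs, recurring savings, payback period, and 20-year net present value can be sketched as follows. This is not DOD's COBRA model: COBRA phases costs and savings over a multiyear implementation period, and the 2.8 percent discount rate used here is an assumption for illustration. The dollar figures are loosely patterned on the Fort Meade security clearance example discussed earlier.

```python
# Hedged sketch of payback period and 20-year net present value (NPV).
# Not the COBRA model: COBRA phases costs/savings over a multiyear
# implementation period; the 2.8% discount rate here is an assumption.

def payback_years(one_time_cost, annual_savings):
    """Years of recurring savings needed to recoup the up-front cost."""
    return float("inf") if annual_savings <= 0 else one_time_cost / annual_savings

def npv_20yr(one_time_cost, annual_savings, rate=0.028):
    """Present value of 20 years of recurring savings minus the up-front cost."""
    pv_savings = sum(annual_savings / (1 + rate) ** t for t in range(1, 21))
    return pv_savings - one_time_cost

# Hypothetical figures ($M): a 13-year payback leaves only a small positive
# 20-year NPV despite two decades of recurring savings.
cost, annual = 67.0, 5.15
print(round(payback_years(cost, annual)))  # 13
print(npv_20yr(cost, annual) > 0)          # True, but only about $11M
```

The sketch shows why long-payback recommendations yield such modest 20-year savings: most of the discounted savings stream is consumed recouping the up-front investment.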
The following summarizes some of our overall observations about the group's recommendations. Interservicing: Despite setting up its military value scoring for maintenance by commodity to foster opportunities for interservicing, the industrial group actually developed few recommendations proposing greater interservicing. Of the 9 realignment recommendations, we consider only 3 to involve interservicing: (1) realigning the Air Force's depot maintenance workload at Lackland Air Force Base, Texas, to Tobyhanna Army Depot, Pennsylvania; (2) realigning the Navy's depot maintenance at Naval Weapons Station Seal Beach, California, to several other service depots; and (3) realigning the Lima Army Tank Plant, Ohio, to support, in part, the future manufacturing of the Marine Corps expeditionary fighting vehicle. DOD has stated recently that some interservicing of ground maintenance work is already being performed at the major depots. However, while there is significant interservicing of electronics work at Tobyhanna Army Depot, Pennsylvania, and of rotary work at Corpus Christi Army Depot, Texas, our analysis shows that interservicing at the major ground vehicle maintenance depots is very limited. For example, in fiscal year 2003, only 3 percent of Anniston Army Depot's total workload was for the Marine Corps, and only 3 percent of Marine Corps Logistics Base Barstow's and Marine Corps Logistics Base Albany's workloads was for the Army. Moreover, of the 17 major maintenance depots across the services, the group proposed closing only three: Portsmouth Naval Shipyard, Maine; Red River Army Depot, Texas; and Marine Corps Logistics Base Barstow, California, with Barstow ultimately becoming a realignment.
No recommendations were developed regarding the Air Force's three relatively large air logistics centers, and only Navy-centric recommendations were developed regarding the Navy's three naval air depots, despite the fact that the industrial group had registered scenarios consolidating similar types of work from a naval air depot into air logistics centers. According to group officials, they decided not to propose these as recommendations because of the Navy's desire to combine its aircraft depot and intermediate work into fleet readiness centers and because that recommendation offered greater financial benefits. As a result, the naval air depots were essentially removed from the BRAC analysis in considering opportunities for more interservicing. While not considered an industrial group recommendation or otherwise addressed in this appendix, the industrial group's work also helped the Navy develop a recommendation realigning some of the workload at Marine Corps Logistics Base Barstow to Army depots. This recommendation is discussed in appendix IV. Closures: Of the eight closures, four involve underutilized Army ammunition facilities, and three are chemical demilitarization facilities where the primary mission is slated to disappear in the coming years. Savings: Essentially all of the projected savings from the group's recommendations are based on reducing overhead and eliminating civilian and military personnel positions as installations are closed and functions are realigned between installations. For example, 63 percent of the group's total projected net annual recurring savings comes from reductions in overhead and 37 percent from personnel eliminations, with civilian positions accounting for 21 percent of total net annual recurring savings and military positions 16 percent.
Taken individually, the recommendation that the industrial group expects to generate the greatest savings is the establishment of the Navy's fleet readiness centers, which is estimated to produce net annual recurring savings of $341 million, or 56 percent of the group's total net annual recurring savings, and 20-year net present value savings of $4.7 billion, or 62 percent of the group's estimated total net present value savings. This realignment recommendation differs from the other realignments in that it proposes a significant business process reengineering effort to integrate the Navy's non-deployable intermediate- and depot-level aircraft maintenance rather than a consolidation or realignment of workload. While the proposed changes would appear to have the potential for significant savings, as explained below, some uncertainty exists about the full magnitude of the savings estimate for this recommendation because most of the group's projected savings are based on efficiency gains that have yet to be validated. For example, based on our analysis, over 63 percent of the estimated net annual recurring savings for this recommendation are miscellaneous recurring savings projected to accrue from overhead efficiencies, such as reduced repair time and charges, while 12 percent of the annual recurring savings comes from reductions in military personnel and 24 percent from reductions in civilian personnel. These efficiencies are expected to be gained from integrating intermediate and depot levels of maintenance and not having to ship as many items to distant depots for repair. In addition, 34 percent of the group's net implementation savings for this recommendation derives from one-time unique savings accrued from one-time reductions in spare parts inventories.
Time did not permit us to assess the operational impact of each of the industrial group’s recommendations approved by DOD, particularly those with minimal financial impact and where minimal realignment and consolidation of workload was proposed. At the same time, however, we offer a number of broad-based observations about selected proposed recommendations regarding long payback periods and uncertain savings that the BRAC Commission may want to consider in its review. The recommendation on fleet readiness centers is essentially a Navy business process reengineering effort to transform the way the Navy conducts aircraft maintenance by integrating existing, non-deployable, intermediate and depot maintenance levels into a single, seamless maintenance level. The fleet readiness center construct focuses on the philosophy that some depot level maintenance actions are best accomplished at or near the operational fleet. Although the data suggest the potential for savings, we believe there is some uncertainty regarding the magnitude of the industrial group’s expected savings for these readiness centers because its estimates are based on assumptions that have undergone limited testing, and full savings realization depends upon the transformation of the Navy’s supply system. In determining the amount of savings resulting from the establishment of the fleet readiness centers, the industrial group and the Navy made a series of assumptions that focused on combining depot and intermediate maintenance in a way that would reduce the time an item is being repaired at the intermediate level, which, in turn, would simultaneously reduce the number of items needed to be kept in inventory and the number of items sent to a depot for repair. These assumptions, which were the major determinant of realignment savings, were based on historical data and pilot projects and have not been independently reviewed or verified by the Naval Audit Service, the DOD Inspector General, or us.
Moreover, how well these actions, if approved, are implemented will be key to determining the amount of savings realized. According to the group, two types of savings account for the majority of the projected savings from the fleet readiness center recommendation. First, one-time savings are projected to accrue from reductions in inventory maintained at several Navy shore locations because item repair cycle time for components is reduced with more depot level maintenance being performed at or near the fleet, generally at an intermediate facility. According to group officials, this reduction is accomplished by stationing several depot level repair personnel at an intermediate facility to assist in repairing an item on site rather than spending time re-packing and shipping the item to a depot for repair. By reducing the turnaround time for an item (that is, the time spent in transit to and from a depot level repair facility), group officials estimate that the average time an item is in the repair pipeline will decrease from 28 hours to 9 hours, with nearly all of that time spent on the actual repair. The industrial group maintains this reduction in turnaround time will allow for savings because fewer items will need to be kept in the shore-based aviation consolidated inventory, since items will be repaired more quickly and returned to the inventory sooner. The second type of savings is recurring overhead savings that are projected to accrue from fewer items being sent to depots for repair. According to group officials, establishing fleet readiness centers will result in fewer items being sent to a depot to be repaired, thus reducing per item maintenance costs. These savings are captured in the COBRA model under overhead as miscellaneous recurring savings. As explained by group officials, when an item is sent to a depot, two charges are applied to the cost to repair the item—a component unit price and a cost recovery rate.
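The inventory argument above follows the standard pipeline relationship (Little’s law): the average number of items tied up in repair equals the repair demand rate multiplied by the turnaround time. The sketch below is a hedged illustration; the demand rate is a made-up placeholder, while the turnaround figures (28 and 9) are the group’s estimates cited above.

```python
# Sketch of the inventory logic behind the turnaround-time claim, using
# Little's law: items in the pipeline = demand rate * turnaround time.
# The demand rate is hypothetical; 28 and 9 are the group's estimates.

def pipeline_inventory(demand_per_period: float, turnaround_periods: float) -> float:
    """Average number of items in the repair pipeline (L = lambda * W)."""
    return demand_per_period * turnaround_periods

demand = 10.0                               # hypothetical: items entering repair per period
before = pipeline_inventory(demand, 28)     # ship items to a distant depot
after = pipeline_inventory(demand, 9)       # repair on site at the intermediate facility

print(f"items in pipeline before: {before:.0f}, after: {after:.0f}")
print(f"spares freed from inventory: {before - after:.0f} "
      f"({1 - after / before:.0%} reduction)")
```

With these placeholder numbers the pipeline shrinks from 280 items to 90, a 68 percent reduction; this proportional relationship, not the specific figures, is the mechanism behind the group’s claim that faster repairs reduce the number of spares that must be held in inventory.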
So, if fewer items are sent to a depot, then fewer repair charges and lower overhead costs are incurred. However, according to an industrial group official, since the depots will have fewer items to repair, they will have fewer opportunities to generate revenue to support their working capital fund operations. This situation, in turn, could create an incentive for the depots to increase their cost recovery rates for the items they do repair to make up for reduced revenues. If this were to occur, then the projected savings would not materialize, because most of the fleet readiness center savings are based on a reduction in the number of items sent to depots and are contingent on the supply system not drastically raising the cost recovery rate. According to industrial group officials, it will be important to overall transformation efforts that DOD follow through on eliminating management structures and duplicate layers of inventory in the supply system. Also, according to these officials, some of this supply-side transformation is already underway at the retail level in the form of a partnership between fleet industrial supply centers and the naval air depots, under which material management for the depots was handed over to the supply centers to standardize supply chain processes, improve material availability, and reduce the material excesses that have been a difficult problem for the naval air depots. In addition, group officials stated that the supply and storage joint cross-service group’s recommendation to realign supply, storage, and distribution management should further this transformation by eliminating unnecessary redundancies and duplication and by streamlining supply and storage processes, which will reduce costs and help prevent a large increase in the cost recovery rate.
In addition, we believe there is some potential risk in properly accounting for depot level work to meet legislatively mandated reporting requirements on the percentage of depot workload performed in government and contractor facilities, absent efforts to ensure adequate differentiation of work completed for intermediate and depot level maintenance. We previously reported on similar difficulties in 2001 involving a consolidation of intermediate and depot level work at Pearl Harbor Naval Shipyard, Hawaii. We noted that, prior to consolidation, the Navy’s determination of depot and intermediate maintenance work was based on which facility performed it—the former Pearl Harbor shipyard performed depot work, and the former intermediate maintenance facility performed intermediate work. However, because Pacific Fleet and Pearl Harbor officials asserted that all work was considered and classified the same at the consolidated facility, the management and financial systems did not differentiate between depot and intermediate categories of work. As a result, the lines between intermediate and depot maintenance became blurred, making it difficult to report each category accurately. The industrial group maintains that during the first few years of implementing the fleet readiness center recommendation, the Navy will continue to operate depot maintenance within the working capital fund (setting up a separate holding account) and perform intermediate maintenance with mission funding. During this period, depot maintenance will be reported as depot maintenance and intermediate maintenance will be reported as intermediate maintenance. While this should mitigate the accounting issue in the short term, it is unclear to what extent longer term measures will be needed to ensure proper reporting of depot work to meet statutory requirements.
The net annual recurring savings may be overstated for the three chemical depots recommended for closure—Newport, Umatilla, and Deseret—and it is unclear whether such facilities are appropriately included in the BRAC process. The industrial group estimated net annual recurring savings of $127 million for the three chemical demilitarization facilities, $20 million of which is from anticipated savings by not recapitalizing these closed BRAC installations. However, the current missions of each of these installations are focused on the destruction of existing chemical weapons stockpiles, and after the stockpiles are destroyed, the destruction facilities themselves are scheduled to be dismantled and disposed of in accordance with applicable laws and agreements with the governors of the states in which they are located. With the exception of the recommended transfer of storage igloos and magazines from Deseret to Tooele Army Depot, Utah, Army officials have not identified any existing plans for future missions at these depots once the chemical destruction mission is complete. Consequently, it is unclear how the closure of the depots will result in recapitalization savings. Additionally, given the general delays in the Army’s chemical weapons destruction program, it is uncertain whether the Army will be able to complete the chemical weapons destruction mission and close these depots within the 6-year BRAC statutory implementation period. There is also uncertainty surrounding the Army’s ability to close the Hawthorne Army Depot, Nevada, by 2011, the final year prescribed by the BRAC legislation for implementing BRAC actions. The Army may be unable to demilitarize all the unserviceable munitions stored at the depot by 2011, thereby placing at risk its ability to close the depot by that date. Army officials told us that in recent years demilitarization funds have not been fully used for demilitarization but have instead been applied to other purposes.
As a result, the stockpile of unserviceable munitions is growing. The funding situation is of such concern that an Army official told us they intend to request that the DOD Comptroller issue a memorandum that would administratively “fence” funding in the demilitarization account to better ensure that the funds will be used for reducing the stockpiles of unserviceable munitions. This official also told us that this funding situation could be further exacerbated by the potential return to the United States of additional unserviceable munition stockpiles currently stored in Korea, even though the group considered these stocks in its analysis. This official stated that if these unserviceable munitions are returned to Hawthorne for demilitarization, there will be added pressure to finish the demilitarization process in time to close the facility by 2011. Currently, the Army leases some property at its ammunition plants through the Army’s Armament Retooling and Manufacturing Support (ARMS) Initiative. DOD has recommended for closure four ammunition plants that are part of this initiative—Mississippi, Kansas, Lone Star, and Riverbank. We previously reported that, while this initiative has offset some of the Army’s maintenance costs, maintaining ammunition plants in an inactive status still represents a significant cost to the federal government. Through this initiative, the Army contracts with an operating contractor that conducts maintenance, repair, restoration, and remediation in return for use of the inactive part of the facility. The operating contractor, in turn, locates and negotiates with tenants regarding lease rates, facility improvements, and contract terms. However, the effect on these tenants of closing the four ammunition plants involved with the initiative is currently unknown.
Army officials responsible for the initiative told us that past transfers of such property outside of the BRAC process have been handled poorly in that the General Services Administration or the Army Corps of Engineers, the agencies responsible for transferring excess property, evicted the tenants and then sold the property separately, as was the case in past closures such as the Indiana Army Ammunition Plant. Army officials said that property transfers conducted in this manner could be costly because the government must incur some costs that were previously paid by the tenants, such as for security and maintenance. For example, an Army analysis showed that retaining the ARMS tenants at the Indiana Army Ammunition Plant rather than evicting them would have saved about $41 million. Additionally, DOD may incur some costs if leases are terminated early. An industrial group official told us that the group included termination costs for leases that extended past the proposed closure date, but only for tenants performing DOD work, not for other tenants. We believe that lease termination costs should have been included for any tenant’s lease that extends past the proposed closure date, since a cost may be incurred for breaking the lease early. However, Army officials said that it would be difficult to estimate such potential costs at this time. Despite a payback period of 18 years, the industrial group proposed the realignment of Watervliet Arsenal, New York, because it has considerable excess capacity and DOD will no longer require some of its capabilities. The group had originally considered either moving the entire workload of the arsenal to Rock Island Arsenal, Illinois, or moving the entire workload of Rock Island Arsenal to Watervliet Arsenal.
However, according to industrial group officials, environmental issues regarding potential chromium discharges into the Mississippi River and costs associated with moving heavy industrial equipment precluded a cost-effective realignment moving the work at Watervliet Arsenal to Rock Island Arsenal. Similarly, air quality issues regarding sulfur dioxide emissions, along with the costs to move equipment, precluded a cost-effective realignment moving the work at Rock Island Arsenal to Watervliet Arsenal, since the Northeast region already exceeds allowable limits for sulfur dioxide emissions. As shown in table 29, the Watervliet recommendation has a payback period of 18 years, with about $63.7 million in one-time unique costs and only $5.2 million in net annual recurring savings. According to industrial group officials, these one-time costs reflect the costs of “shrinking the footprint” (i.e., moving out of buildings and eliminating and moving excess equipment at both the arsenal and the accompanying research laboratories also located at the arsenal).

The Intelligence Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) in reviewing its functions and facilities. The Intelligence Joint Cross-Service Group produced two recommendations that it projects will yield about $588 million in 20-year net present value savings, with a payback period of 8 years for each recommendation. The majority of savings in the two recommendations results from lease terminations. Unlike the recommendations of the services and other groups, little savings is projected from personnel reductions because, according to officials, almost all of the personnel will relocate and end strength is projected to increase as a result of program growth. The DOD Inspector General and service audit agencies, which performed audits of the data, concluded that the data were sufficiently reliable for use during the BRAC process.
The intelligence group was responsible for reviewing intelligence functions throughout DOD. Previous BRAC rounds did not involve the participation of any joint cross-service group dedicated to analyzing intelligence functions. The intelligence group was chaired by the Deputy Under Secretary of Defense (Counterintelligence & Security). The group’s principals included senior members from the Defense Intelligence Agency, National Geospatial-Intelligence Agency, National Reconnaissance Office, National Security Agency, each military department, and the Joint Staff Directorate for Intelligence, along with representation from the offices of the Director, Central Intelligence Community Management Staff, and the Department of Defense Inspector General. The intelligence group formed four functional subgroups: Sources and Methods; Correlation, Collaboration, Analysis, and Access; Management Activities; and National Decisionmaking and Warfighting Capabilities. The first three subgroups each created an analytical construct for measuring defense intelligence capacity that resulted in a capacity data call. These subgroups were eventually replaced by a single Core Team that included membership from each organization represented in the Intelligence Joint Cross-Service Group. This team created a single, consolidated analytical construct for measuring the military value of defense intelligence facilities. The team also performed detailed capacity and military value analysis, evaluated scenario ideas, executed scenario data calls, and prepared Intelligence Joint Cross-Service Group candidate recommendations for deliberation.
The overarching intelligence principle the group worked to support was that DOD needs intelligence capabilities to support the National Military Strategy by delivering predictive analyses, warning of impending crises, providing persistent surveillance of our most critical targets, and achieving “horizontal” (that is, interagency) integration of networks and databases. To do so, the group focused on four key objectives:

Locating and upgrading facilities on protected installations as appropriate.
Reducing vulnerable commercial leased space.
Realigning selected intelligence functions/activities and establishing facilities to support continuity of operations and mission assurance requirements.
Providing infrastructure to facilitate robust information flow between analysts, collectors, and operators at all echelons and achieve mission synergy.

The group conducted an assessment of defense intelligence for buildings, facilities, and personnel performing the intelligence function. The objective was to project an alignment of present capabilities, with current organizational compositions and business processes, to desired future operational capabilities, using DOD’s transformational concepts and preferred organizational construct. The intelligence group initially identified five broad functions to analyze in defense intelligence: Sources and Methods (Acquisition and Collection); Analysis; Dissemination; Management Activities; and Sustainability. Based on subsequent Infrastructure Steering Group guidance, these five broad functions were consolidated into a single function—defense intelligence—in the final military value scoring plan. Capacity analysis and then military value analysis were the starting points for the BRAC analytical process. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations.
To assess capacity, the intelligence group identified buildings and facilities performing the intelligence function and developed related attributes, metrics, and questions for analysis. Data calls were issued to the defense intelligence community to gather certified data on intelligence buildings and facilities. The capacity analysis identified limited excess capacity in some organizations, but no overall excess capacity, as shown in table 30. The negative excess capacity shown in table 30 differs from the group’s initial capacity data results, which showed an overall excess capacity of 18 percent. However, after reviewing the initial data, the intelligence group made two adjustments. First, the group removed buildings with no direct intelligence mission, such as barracks, pump houses, tunnels, or warehouses. Then the group increased by 50 percent its estimate of the square footage required for personnel temporarily working at another intelligence entity and for contractor personnel. The group did not identify any known documented requirements for the defense intelligence community to set aside space or facilities for surge. The intelligence community has historically handled surge operations by reassigning and reallocating existing resources within the current square footage. All BRAC 2005 selection criteria were applied by the intelligence group across the defense intelligence functional support area and used with the force structure plan and infrastructure inventory to perform analyses. Priority consideration was given to military value by evaluating and scoring activities based on the first four selection criteria. Table 31 below shows the weighted value the intelligence group gave to the criteria, based on a 100-point scale. The intelligence group assessed the military value of its facilities based on those facilities’ capabilities to support the intelligence function.
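The effect of the two adjustments described above on measured excess capacity can be illustrated with a simple sketch. Every square-footage number below is hypothetical, chosen only to show how removing non-mission buildings from the inventory while raising the space requirement for some personnel by 50 percent can turn an apparent 18 percent surplus into a small deficit.

```python
# Hypothetical illustration of how the intelligence group's two
# adjustments could flip measured excess capacity from positive to
# negative. Excess capacity is expressed relative to the requirement.
# All square-footage values below are invented for the example.

def excess_capacity(available_sqft: float, required_sqft: float) -> float:
    """Fraction by which available space exceeds (or falls short of) the requirement."""
    return (available_sqft - required_sqft) / required_sqft

available = 1_180_000          # certified intelligence facility space
required = 1_000_000           # initial space requirement

print(f"initial excess capacity: {excess_capacity(available, required):.0%}")

# Adjustment 1: drop buildings with no direct intelligence mission
available -= 100_000
# Adjustment 2: increase by 50 percent the space requirement for
# temporarily assigned and contractor personnel (say 200,000 sq ft)
required += 0.5 * 200_000

print(f"adjusted excess capacity: {excess_capacity(available, required):.0%}")
```

With these placeholder figures the measured position moves from an 18 percent surplus to a roughly 2 percent deficit, mirroring the shift from the group’s initial 18 percent estimate to the negative excess capacity shown in table 30.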
A single scoring plan measured the value of both the infrastructure and the personnel performing the defense intelligence function at a given facility. Attributes and weighted metrics were used to compute the military value of a building by assessing the facility’s physical infrastructure and locations as they related to selection criteria 1 through 4. After computing military value scores, a rank-ordered listing of the 267 intelligence facilities was developed for the defense intelligence function. Subsequently, strategy-driven scenarios were validated by analyses of military value data and military judgment. Figure 16 illustrates how the military value attributes, metrics, and data questions were linked to the military value criteria using selected attributes, metrics, and questions. A similar process was followed for all of the 267 intelligence facilities. The DOD Inspector General and service audit agencies reviewed the data and processes used by the Intelligence Joint Cross-Service Group to develop its recommendations. The overall objective was to evaluate the validity, integrity, and documentation of data used by the subgroups. The DOD Inspector General and service audit agencies used real-time audit coverage of data collection and analysis processes to ensure that the data used in the group’s capacity analysis, military value analysis, and optimization models was certified and was used as intended. Through extensive audits of the data collected from field activities during the process, the DOD Inspector General notified the group of data discrepancies for the purpose of follow-on corrective action. Once all the discrepancies had been corrected, the DOD Inspector General ultimately determined the intelligence data to be sufficiently reliable for use in the BRAC process.
The Intelligence Joint Cross-Service Group developed 13 scenarios, which, after further analysis, led to 6 candidate recommendations being presented to the Infrastructure Steering Group and the Infrastructure Executive Council, the latter of which approved 3 candidate recommendations. One of these 3 approved candidate recommendations was subsequently incorporated into a recommendation proposed by the headquarters group. Some scenarios were eliminated because they were alternatives to a proposed recommendation. Other scenarios were eliminated because of concerns over high implementation costs and long payback periods—that is, the length of time required for the savings to offset closure costs. For example, the group developed a scenario to establish selected continuity of operations and mission assurance functions at White Sands Missile Range, New Mexico, but it was disapproved by the Infrastructure Executive Council because it had a one-time cost of $1.8 billion and a projected payback period of never. The Intelligence Joint Cross-Service Group projects that its two recommendations will produce almost $588 million in 20-year net present value savings and almost $138 million in net annual recurring savings. Table 32 below provides a summary of the financial aspects of the group’s recommendations. The majority of the net annual recurring savings in the two recommendations is from the avoidance of future lease costs when activities move from leased space to military installations. Intelligence Joint Cross-Service Group officials noted that about one-half of the estimated $1.1 billion one-time costs for the National Geospatial-Intelligence Agency move will be paid from National Intelligence Program funds. The recommendation to move the National Geospatial-Intelligence Agency from various leased sites to Fort Belvoir, Virginia, will have a significant impact on the local community when added to other proposals to move activities to Fort Belvoir.
This one proposal would move about 8,500 personnel to Fort Belvoir from Bethesda, Maryland; Washington, D.C.; and the northern Virginia area. The BRAC Commission may wish to consider the impact on the local community infrastructure, such as roads and public transportation, when evaluating this and other proposals affecting Fort Belvoir.

The Medical Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing the military health care system. It produced 22 candidate recommendations; one was disapproved late in the process by the Infrastructure Executive Council (IEC), and one was integrated with a service recommendation. The remaining 20 recommendations were combined into 6 recommendations that were ultimately approved by DOD. These 6 recommendations are projected to produce about $2.7 billion in estimated net present value savings over a 20-year period. The expected payback period, or length of time for the savings to offset costs associated with the recommendations, varies from immediate to 10 years. We have identified various issues regarding the recommendations that may warrant further attention by the BRAC Commission. These include the likelihood that some estimated savings could be less than projected, lengthy or no payback periods for certain proposed actions embedded within the more complex recommendations, and uncertainties about future requirements and their impact on the viability of the recommendations. While the group encountered some challenges in obtaining accurate and consistent certified data on a cross-service basis, the DOD Inspector General and the military service audit agencies ultimately concluded that the data used by the medical group were sufficiently reliable for use in the BRAC process.
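Payback period, as used throughout this appendix, is the time required for recurring savings to offset one-time implementation costs, with a result of “never” when recurring savings are zero or negative. The sketch below is a rough, undiscounted illustration with placeholder cost and savings values; the COBRA model computes payback on discounted cash flows, so published figures differ from this simple ratio.

```python
# Rough, undiscounted sketch of the payback-period concept used in the
# BRAC analyses. The dollar inputs in the examples are placeholders;
# COBRA discounts cash flows, so actual published figures differ.

from typing import Optional

def payback_years(one_time_cost: float, net_annual_savings: float,
                  horizon: int = 100) -> Optional[int]:
    """Years until cumulative savings offset the one-time cost; None means 'never'."""
    if net_annual_savings <= 0:
        return None                  # savings can never offset the cost
    cumulative = -one_time_cost
    for year in range(1, horizon + 1):
        cumulative += net_annual_savings
        if cumulative >= 0:
            return year
    return None

# Placeholder example: $90 million up front, $10 million back per year
print(payback_years(90e6, 10e6))     # 9
# A proposal whose recurring costs exceed its savings pays back 'never'
print(payback_years(1.8e9, -5e6))    # None
```

The “never” branch corresponds to scenarios like the disapproved White Sands proposal discussed earlier, where large one-time costs were paired with no net recurring savings.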
The medical group’s objectives included the following: maintain and improve access to care for all beneficiaries; identify and maximize synergies from co-location or consolidation; and examine outsourcing opportunities, such as increasing the use of civilian care providers, to allow DOD to leverage its efforts across the overall United States health care system. The medical group organized and conducted its BRAC analyses of DOD’s military health care system focusing on three broad functions: (1) health care services; (2) health care education and training; and (3) medical and dental research, development, and acquisition. As with the military services and other joint cross-service groups, capacity and military value analyses were the starting points for the group’s analyses. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations. In establishing the analytical framework for developing its recommendations, the medical group analyzed the military health system’s capacity in terms of services, workloads, and facilities. The group developed specific functional area metrics for measuring capacity and collected certified data associated with these metrics from military installations across the country. It used a range of metrics, depending on the functional area being assessed, such as military health care population and workloads, number of hospital beds, available and currently used building space, length and frequency of education and training programs, personnel requirements, and equipment usage, to measure capacity. The group’s capacity analysis report acknowledged that even though adjustments have been made to the health care system since the BRAC 1995 round, the medical system infrastructure is still generally based on a Cold War strategy with minimal reliance on civilian health care providers.
TRICARE network, civilian medical education and training programs, and extended operations. According to DOD medical officials, the Department of Health and Human Services, rather than DOD, is responsible for domestic homeland medical support, but defense medical personnel and infrastructure could be used to assist in handling domestic medical emergency situations. According to DOD officials, since this support is not part of DOD’s defined mission, it was not included in the medical group’s analysis. However, DOD officials also told us that the Joint Chiefs of Staff and the OSD had coordinated the BRAC analysis with major commands that would be impacted by BRAC proposals, including the U.S. Northern Command, which is responsible for the homeland defense mission. DOD is in the process of reviewing the military health care system’s ability to meet future medical readiness requirements, including an evaluation of medical infrastructure at various levels of operations from contingencies to full operational surges. DOD intends to include Department of Homeland Security policies in this review. According to DOD officials, the results of this ongoing assessment were not included in the medical group’s capacity analysis because the assessment is not expected to be completed until after the BRAC recommendations are finalized, following reviews by the BRAC Commission, the President, and Congress. Nevertheless, the medical group made a determination that the current medical force size was adequate to meet the requirements of various war plans, and after reviewing the fiscal year 2006 program objective memorandum and the 20-year force structure plan, it decided to use the current force structure for its analysis. Further, the group concluded that deployment force sizing, a readiness issue, did not have direct influence on determining excess facility capacity. 
The medical group estimates that its recommendations, if adopted, would result in a 12 percent reduction in excess inpatient medical capacity and a net reduction of approximately 7.4 million square feet in overall facility space. The medical group’s assessment of military value, like its excess capacity assessment, focused on the same three functional areas: (1) health care services; (2) health care education and training; and (3) medical and dental research, development, and acquisition. The military value analysis helped to establish the basis for realigning medical functions across the various installations or closing specific activities within the medical infrastructure. It also helped to gauge the impact of the group’s proposed scenarios on the overall DOD health care system. The military value methodology for this BRAC round was similar, in many respects, to the one used in the 1995 round, especially for medical functions. For example, both rounds identified affected populations and local civilian providers within catchment areas. In both rounds, military value played a predominant role in formulating recommendations. Moreover, during the 2005 round, the medical group considered the impact on local beneficiaries, such as military retirees, from downsizing or eliminating medical facilities, which included input from a DOD-chartered military health benefit working group. This working group included independent members who represented TRICARE regions throughout the United States. The medical group’s functional military value analysis assessed the relative capabilities of various activities and facilities supporting the military health care system’s mission and operational needs. Its military value analysis was directly linked to the four military value criteria required by the BRAC legislation. For example, the military value analysis gave greater weight to services supporting active duty members in order to emphasize force readiness.
Table 34 shows the relative weights that the group developed for each of the four selection criteria that relate to military value. The four criteria are:

1. The current and future mission capabilities and the impact on operational readiness of the total force of the Department of Defense, including the impact on joint warfighting, training, and readiness.
2. The availability and condition of land, facilities, and associated airspace (including training areas suitable for maneuver by ground, naval, or air forces throughout a diversity of climate and terrain areas and staging areas for the use of the Armed Forces in homeland defense missions) at both existing and potential receiving locations.
3. The ability to accommodate contingency, mobilization, surge, and future total force requirements at both existing and potential receiving locations to support operations and training.
4. The cost of operations and the manpower implications.

In developing its analysis in accordance with the criteria above, the group developed specific functional area attributes, metrics, and data call questions to assist in assessing military value. Figure 17 provides an example of such analysis for the health care services functional area and its linkage to the BRAC legislation. Attributes for that functional area include the population (active duty, dependents, and other beneficiaries) eligible to receive medical care from the military health system; the age and condition of medical treatment facilities; hospitals’ potential capabilities for providing inpatient care to casualties; and total costs for inpatient and outpatient services. (The BRAC military value criteria are the first four BRAC selection criteria.) The DOD Inspector General and the service audit agencies played important roles in ensuring that the data used in the medical group’s analyses were certified and properly supported.
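The weighted-criteria mechanics described above can be sketched in a few lines. The criterion weights and facility scores below are hypothetical placeholders (the actual table 34 weights are not reproduced in the text); the sketch only illustrates how weighted criteria combine into a single military value score.

```python
# Illustrative sketch of a weighted military value score. The weights and
# scores below are HYPOTHETICAL placeholders, not the actual table 34
# values; criterion 1 is weighted most heavily here, consistent with the
# report's note that readiness-related services received greater weight.

def military_value_score(scores, weights):
    """Combine per-criterion scores (0-100 scale) into a weighted value."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("criterion weights must sum to 1.0")
    return sum(scores[c] * w for c, w in weights.items())

weights = {1: 0.40, 2: 0.20, 3: 0.25, 4: 0.15}  # assumed, not from table 34
scores = {1: 80, 2: 60, 3: 70, 4: 50}           # assumed facility scores

print(military_value_score(scores, weights))    # 69.0 for these assumed inputs
```

Ranking candidate facilities by such a score is what allowed military value to play, as the report puts it, "a predominant role in formulating recommendations."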
The involvement of these audit groups included validation of data submitted by the military services, compliance with data certification requirements, the integrity of the group’s databases, accuracy of the analytical process in terms of calculations, and the adequacy of supporting documentation. These audit groups conducted extensive audits of the data collected from the military installations, and in some instances data discrepancies were identified for follow-on corrective actions. While the process for detecting and correcting data errors was quite lengthy, the DOD Inspector General and audit agencies determined that the medical-related data were sufficiently reliable for use in the BRAC process. The medical group’s study objectives, military judgment, and capacity and military value analyses helped to identify closure and realignment scenarios for consideration. Identification and evaluation of scenarios were also facilitated by use of an optimization model to identify recommendations that could help optimize medical health care workloads and infrastructure. The group also developed scenarios that included establishing a minimum level of average daily patient workload for inpatient facilities and reducing excess capacity in multiservice markets to achieve efficiencies. It also used the Cost of Base Realignment Actions (COBRA) model to estimate the potential net costs or savings for its scenario proposals. The group also considered the scenarios’ impact on the local economy, the DOD medical beneficiary population and graduate medical education requirements, and the environment. The medical group submitted 22 recommendations to the IEC, which disapproved one of the recommendations—the proposal to close the Uniformed Services University of the Health Sciences at Bethesda, Maryland. This matter is discussed further in the next section of this appendix. Further, another recommendation was integrated with a service realignment and closure action.
The remaining 20 recommendations were combined into 6 recommendations that were ultimately approved by DOD. The group reported that these 6 recommendations will yield an estimated $2.7 billion in 20-year net present value savings and $412 million in net annual recurring savings. Table 35 below provides a summary of the financial aspects of the group’s recommendations. However, the group acknowledges that it incorrectly reported certain financial data for its recommendation involving the Walter Reed Army Medical Center. Based on our analysis, the revised estimates are shown as a note to table 35.

Appendix X Medical Joint Cross-Service Group Selection Process and Recommendations

Table 35 (dollar figures in millions; costs shown in parentheses; the first figure following each action is its reported one-time cost):
- Close Brooks City-Base, San Antonio, TX, by relocating functions to Randolph Air Force Base, Wright-Patterson Air Force Base, Lackland Air Force Base, Fort Sam Houston, and Aberdeen Proving Ground: ($325.3) ($45.9)
- Realign various activities by converting inpatient services to clinics at Marine Corps Air Station Cherry Point, Fort Eustis, U.S. Air Force Academy, Andrews Air Force Base, MacDill Air Force Base, Keesler Air Force Base, Scott Air Force Base, Naval Station Great Lakes, and Fort Knox: (12.9)
- Establish San Antonio Regional Medical Center at Fort Sam Houston, Brooke Army Medical Center, and realign basic and specialty enlisted medical training to Fort Sam Houston: (1,040.9) (826.7)
- Realign Walter Reed Army Medical Center (all tertiary care to Bethesda National Naval Medical Center and primary and specialty care to Fort Belvoir): (988.8) (724.2)
- Realign McChord Air Force Base by relocating all medical functions to Fort Lewis: (1.1)
- Realign various activities to create joint centers of excellence for chemical, biological, and medical research, development, and acquisition (at Fort Sam Houston, Walter Reed Army Medical Center—Forrest Glen Annex, Wright-Patterson Air Force Base, Fort Detrick, and Aberdeen Proving Ground): (73.9) (45.9)
- Total: ($2,442.9) ($1,336.7)

A small number of these recommendations account for nearly all of the expected savings—over 90 percent of the total estimated 20-year net present value savings of about $2.7 billion, and of the net annual recurring savings of about $411.7 million. Two of the six recommendations have high one-time upfront costs—about $2 billion, or over 80 percent of the total one-time costs for the six recommendations. Two multiservice market area recommendations—the establishment of the San Antonio Regional Medical Center in Texas and realignment of the Walter Reed Army Medical Center in Washington, D.C.—are ultimately expected to (1) produce over 50 percent of the net annual recurring savings and (2) incur most of the up-front costs for the recommendations as a whole. The group’s primary motivation for these recommendations was to transform the existing medical infrastructure into premier modernized joint operational medical centers.
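The cost relationships reported here can be checked with simple arithmetic. The sketch below uses the one-time cost figures from the table and text (in millions of dollars); the payback calculation at the end is a naive undiscounted illustration, not DOD's COBRA methodology, which phases costs and savings over the implementation period and discounts them.

```python
# Back-of-the-envelope checks on the medical group's reported figures
# (dollar amounts in millions, from table 35 and the surrounding text).

one_time_costs = {
    "Brooks City-Base closure": 325.3,
    "Inpatient-to-clinic realignments (nine locations)": 12.9,
    "San Antonio Regional Medical Center": 1040.9,
    "Walter Reed realignment": 988.8,
    "McChord realignment": 1.1,
    "Joint RDA centers of excellence": 73.9,
}

total_one_time = sum(one_time_costs.values())  # about $2,442.9 million
big_two = (one_time_costs["San Antonio Regional Medical Center"]
           + one_time_costs["Walter Reed realignment"])

# The two large multiservice market recommendations account for over
# 80 percent of total one-time costs, as the report states.
print(f"Big-two share of one-time costs: {big_two / total_one_time:.0%}")

# Naive undiscounted payback -- NOT the COBRA-derived payback in the report.
net_annual_recurring_savings = 411.7
print(f"Naive payback: {total_one_time / net_annual_recurring_savings:.1f} years")
```

This confirms that the San Antonio and Walter Reed recommendations together ($2,029.7 million) represent roughly 83 percent of the $2,442.9 million total, consistent with the "over 80 percent" figure in the text.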
In the case of the Walter Reed Medical Center recommendation, the group also justified the recommendation based on a shift in the beneficiary population from the northern tier of the Washington, D.C., area to the southern tier near Fort Belvoir, Virginia. Another recommendation with substantial estimated net annual recurring savings is the closure of the Brooks City-Base in Texas, which is projected to achieve efficiencies in research, development, and acquisition by relocating similar functions to a single location. However, as discussed below, a significant portion of the savings from this as well as other recommendations involves claimed military personnel savings, which are somewhat uncertain. The recommendation that involves the downsizing of inpatient facilities at nine locations is expected to achieve efficiencies and reduce personnel as well as provide enhanced training opportunities for medical personnel transferring to other locations. DOD also expects that consolidating medical functions at fewer locations will help foster jointness in the long term. Based on our analysis, however, it is not obvious whether some of these proposed realignments will truly result in joint military operations. Time did not permit us to assess the operational impact of each of the medical group’s recommendations, particularly where operations proposed for consolidations or realignments extend across functional areas, geographical areas, or both. At the same time, we offer a number of broad-based observations about some of the proposed recommendations as they relate to military medical personnel savings, payback periods, jointness, and medical wartime requirements that may warrant further review by the BRAC Commission. Our analysis shows that military personnel savings account for about $201 million, or nearly 50 percent, of the group’s estimated net annual recurring savings. However, the amount of projected dollar savings is uncertain because the medical group indicated that reductions in end strength are not planned.
Indirectly, some savings could occur based on the group’s expectation that medical personnel would be reassigned on an individual basis to specific and varied locations, depending on where the need exists for military medical specialists. In some cases, the group noted that these military personnel reassignments could displace civilian and/or contractor medical providers. When or to what extent these reallocations would occur has not yet been determined. At the time of the group’s analysis, these specific moves had not been identified, and thus the group did not estimate costs related to such potential moves in its cost and savings analysis. Payback periods for some recommendations changed when recommendations were combined; in one case, the payback period for two combined recommendations was determined to be 10 years. The common linkage of the two recommendations is location, with the expectation that the enlisted medics will benefit from the location of the Brooke Army Medical Center in Texas, which has a trauma center suited for combat casualty training. Another example is the initial realignment of medical research, development, and acquisition functions at Brooks City-Base, which had no payback before DOD combined this recommendation with other related recommendations to close the base. DOD’s ongoing assessment of its future wartime medical requirements, as mentioned earlier, will not be completed until after BRAC decisions are finalized, following reviews by the BRAC Commission, the President, and Congress; therefore, this assessment was not included in the medical group’s analysis. Without having such requirements available during the BRAC process, it is difficult for DOD to identify the appropriate medical infrastructure changes that are needed or to determine the appropriate size of the military health care system. Also, the group recognized that medical operations are changing, with casualties rapidly moved to medical facilities outside the theater of operations, and that these changes may affect the future sizing of medical forces.
Nevertheless, the group expressed belief that the current medical force size was adequate to meet the requirements of the various war plans despite the group’s recommendations that will reduce system-wide excess inpatient capacity by 622 beds. A DOD official told us that the group also considered the potential for the Department of Veterans Affairs (VA) to make use of existing medical facilities. While the official told us that VA involvement had the potential for providing services and benefiting the department, another official added that the group’s analysis indicated that sufficient capacity exists, without VA support, within the private sector to accommodate military beneficiaries in those locations where inpatient care at the military facilities is being eliminated. However, we were unable to verify the results of this analysis because the group did not fully document its analysis. The medical group had initially developed a candidate recommendation to close DOD’s medical school, known as the Uniformed Services University of the Health Sciences, which is located on the grounds of the National Naval Medical Center in Bethesda, Maryland. The group had concluded that it was more costly than alternative scholarship programs and that the department could rely on civilian universities to educate military physicians. The group projected that the closure would yield net annual recurring savings of about $58 million and 20-year net present value savings of approximately $575 million. In a series of reports from 1995 through 2000, we also concluded at the time that the university was a more costly way to educate military physicians. The IEC, however, disapproved the recommendation. According to one DOD official, further investments are needed in the programs associated with this medical facility in order for it to be a world-class medical center. According to another official, DOD will need to make investments in the university in order to elevate its status and attract leading medical scholars who could make the university more competitive.
The Supply and Storage Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) for reviewing the supply, storage, and distribution system within DOD. The group initially produced five recommendations that were presented to the Infrastructure Steering Group (ISG) and the Infrastructure Executive Council (IEC). Three of the five recommendations were merged into one recommendation by the IEC. If adopted, the three approved recommendations are projected to generate about $5.6 billion in estimated 20-year net present value savings and $406 million in net annual recurring savings for the department, with an immediate payback (i.e., the time required to recoup the up-front costs of implementing these recommendations is effectively zero). While the number of recommendations is small, each encompasses multiple realignment actions of workloads affecting many locations. Our analysis shows that the anticipated savings would result primarily from business process reengineering (expanded use of performance-based logistics), infrastructure and inventory reductions, and reduced civilian personnel costs. We identified a number of issues associated with several recommendations that may warrant additional attention by the BRAC Commission. The group encountered some challenges in obtaining accurate and consistent certified data, but the DOD Inspector General and the military service audit agencies, which performed audits of the data, ultimately concluded that the data were sufficiently reliable for use during the BRAC process. The supply and storage group consisted of six senior-level principal members from the logistics directorates for each service, the Defense Logistics Agency (DLA), and the Joint Chiefs of Staff, and was supported by staff from these organizations. The Director, DLA, chaired the group, following the retirement of the original chairman from the Joint Staff.
The group’s overarching goal was to identify potential closures, realignments, or both that would enhance economies and efficiencies in operations as traditional military forces and logistics processes become more joint and increasingly take on expeditionary characteristics. The group organized its BRAC efforts around the three core logistics functions of supply, storage, and distribution. These functions are inherent in the military services’ operations as well as in DLA’s, whose mission is to provide wholesale-level support in these functions for the services in common supply classes. In collecting and analyzing data to formulate its recommendations, the group sought to assess the supply and storage infrastructure in the following four distinct activity areas: (1) military service and DLA inventory control points, (2) defense distribution depots, (3) defense reutilization and marketing offices, and (4) other activities, such as installation-level supply operations. As with the military services and other joint cross-service groups, capacity and military value analyses served as starting points for the group’s analyses. While the group initially tried to analyze both the wholesale and retail supply and storage activities, it later terminated most retail-level efforts because of difficulties in collecting reliable data and a desire by the group’s principals not to affect the retail support to operational and other deploying units. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations. Because of the retail-level data limitations, the group excluded retail activities, with limited exception, from further consideration in the succeeding analyses leading to recommended actions. The group’s capacity analysis showed that excess capacity exists, even when surge factors were considered, within three of the four supply and storage activity areas it examined.
As shown in table 36, the excesses ranged from 20 percent to 75 percent under normal demand conditions across various capacity metrics in the functional areas, with the excesses somewhat less under surge conditions. According to the group’s staff, its recommendation regarding restructuring of defense distribution depots, if approved and implemented, is expected to reduce current covered storage of about 51 million square feet (both regular and special) by over 50 percent, or about 27 million square feet. In addition, the recommendation regarding inventory control points is expected to increase infrastructure by about 4,700 square feet because the inventory control points would be absorbing more space than they would be vacating. The group has no recommendations that would affect the capacity of DLA’s defense reutilization and marketing offices. The supply and storage group’s assessment of military value, like its excess capacity assessment, focused on the same three core logistics functions of supply, storage, and distribution. By linking its military value analysis directly to the four military value criteria required by the BRAC legislation, the group established a sound basis for developing its recommendations. As shown in table 37, the group developed a weighting system for the military value criteria, with the first and third criteria having relatively larger weights, or importance, than the remaining two criteria. As with the capacity analysis, the group’s assessment of military value included development of attributes and metrics in each of the core functional areas to measure military value, and it subsequently sought to collect certified data linked to these metrics from various defense activities whose missions resided within these categories. The group developed 55 individual metrics within the three functional areas, addressing information such as the percentage of demand for stocked items and cost of operations per person.
The attributes and metrics were linked back to the military value selection criteria, as illustrated in figure 18. The group lacked sufficient data at the retail level to complete a military value analysis at that level. In many respects, the military value methodology for this round was comparable to that used in the 1995 BRAC round, particularly for DLA activities. In both BRAC rounds, the military value ranking of an activity played a predominant role in formulating recommendations. The DOD Inspector General and the service audit agencies played important roles in helping to ensure that the data used in the group’s data analyses were certified and properly supported and that decision-making models (e.g., military value and optimization) were logically designed and operating as intended. Through extensive audits of the data collected from field activities during the process, these audit agencies notified the group when they identified data discrepancies for follow-on corrective action. While the process for detecting and correcting data errors was quite lengthy and challenging, the audit agencies ultimately deemed the supply and storage-related data to be sufficiently reliable for use in the BRAC process. The Supply and Storage Joint Cross-Service Group did not have accurate and complete capacity and military value data when it initially started developing potential closure and realignment scenarios and, therefore, had to rely on incomplete data, as well as military judgment based on the group’s collective knowledge of the supply and storage area, to formulate its initial closure and realignment scenarios for evaluation. Although the data improved as additional information was requested and received from field locations, the lack of usable data initially limited the use of an optimization model to help identify and analyze scenarios. As time progressed, however, the group obtained the needed data, for the most part, to inform and support its scenarios. The DOD Inspector General validated the data.
The group also focused on a number of OSD-supplied transformational options, as outlined below, to guide its efforts in the recommendation development process:
- Establishing a consolidated multi-service supply, storage, and distribution system focused on creating joint activities in areas with heavy DOD concentration.
- Privatizing the wholesale storage and distribution processes.
- Migrating oversight and management of all service depot-level reparables to a single DOD agency/activity.
- Establishing a single inventory control point within each service or consolidating into a joint activity.
- Examining the effect of reducing functions by 20, 30, and 40 percent from the existing baseline, or reducing excess capacity by an additional 5 percent beyond the analyzed excess capacity.

The group developed a total of 51 scenarios based on these transformational options. With the maturation of the data and the application of the COBRA model to estimate costs and savings, along with military judgment, the group was able to narrow its proposals to five candidate recommendations that were forwarded to the ISG and ultimately approved by the IEC. Further integration of three of these recommendations into a single recommendation left the group with three approved recommendations. The group’s recommendations are projected to produce substantial savings—about $406 million in estimated net annual recurring savings and about $5.6 billion in estimated net present value savings for DOD over the next 20 years. All are realignment actions, even though one of the recommended actions will close two defense distribution depots, at Columbus, Ohio, and Texarkana, Texas, and another will close four inventory control points, at Fort Huachuca, Arizona; Fort Monmouth, New Jersey; Rock Island, Illinois; and Lackland Air Force Base, Texas, while, at the same time, opening a new one at Aberdeen Proving Ground, Maryland.
The group’s recommendations also helped facilitate the closures of Fort Monmouth, New Jersey, and Red River Army Depot, Texas, both of which are reported in the Army’s BRAC report. Table 38 provides a summary of the financial aspects of the group’s three DOD-approved recommendations. The recommendation regarding the defense distribution depots designates four sites as strategic distribution platforms and realigns the remaining depots to support primarily their nearby industrial customers, such as maintenance depots, shipyards, and air logistics centers. The strategic distribution sites are located at Susquehanna, Pennsylvania; Warner Robins, Georgia; Oklahoma City, Oklahoma; and San Joaquin, California. The recommendation is also designed to realign service retail supply and storage functions along with personnel and infrastructure for these industrial customers in an “in-place, no-cost transfer” to DLA. This recommendation supports the closures of the defense distribution depots at Columbus, Ohio, and Texarkana, Texas, and realigns each of the remaining 17 defense distribution depots. The recommendation regarding the realignment of the inventory control points transfers certain inventory control point functions, such as contracting, budgeting, and inventory management, to DLA and allows further consolidation of service and DLA inventory control points by the supply chains they manage. In addition, it supports the movement of the management of essentially all service consumable items and the procurement management and related support functions for the procurement of essentially all depot-level reparables from the military services to DLA. This recommendation realigns all 16 of the current DLA and service inventory control points and closes 4 through consolidation—Fort Huachuca, Arizona; Fort Monmouth, New Jersey; Rock Island, Illinois; and Lackland Air Force Base, Texas—while opening a new inventory control point at Aberdeen Proving Ground, Maryland. The recommendation also supports the Army’s closure of Fort Monmouth by moving supply and storage functions to other locations.
The recommendation regarding the realignment of commodity management disestablishes the wholesale supply, storage, and distribution functions within the department for all tires; packaged petroleum, oils, and lubricants; and compressed gases used by DOD. As a result, these commodities will be supplied directly by private industry, which will free up space and personnel used to manage these items. It realigns all of the remaining defense distribution depots by disestablishing all storage and distribution for these commodities. Although time did not permit us to fully assess the operational impact of each recommendation, particularly where operations are proposed for consolidation across multiple and varied locations, available information suggests these recommendations have the potential for more efficient operations within DOD. At the same time, there are some issues we identified that we believe the BRAC Commission may wish to consider during its review process because of potentially overstated savings estimates. In this regard, the supply and storage group claimed savings for future cost avoidances for sustainment and facilities’ recapitalization related to the facilities’ space that is expected to be vacated under the recommended actions. However, as discussed below, it is uncertain whether these savings will actually materialize if these facilities are not closed and remain open—even with reduced usage of the space. Additionally, the group did not develop recommendations for several areas within the scope of its responsibility that may have further contributed to the accomplishment of DOD’s BRAC objectives, such as additional consolidations in DLA and service inventory control points. The group’s savings estimates also rest on assumptions about the amount of stock required to be held in inventory. Although the group had some supporting documentation for its assumptions, time did not allow us to fully evaluate the documentation.
Nevertheless, the full magnitude of savings likely to be realized will depend on how well the actions, if approved, are implemented in line with the assumptions made. All of the supply and storage group’s recommendations taken together show significant projected savings from expected reductions to excess or unnecessary infrastructure. According to the group’s estimates, it is claiming BRAC savings on about 27 million square feet of vacated space, an estimated savings of about $100 million annually, or about 25 percent of the group’s total net annual recurring savings. In developing its cost and savings estimates, the group assumed that all of the excess infrastructure that was made available by the recommendations would generate BRAC savings because it further assumed that the infrastructure would no longer be used and therefore would not require sustainment and recapitalization funding. However, we believe these assumptions are not necessarily valid because it is not clear that the freed-up infrastructure will be eliminated; it could potentially be occupied by other users following the BRAC process. At present the group does not have plans for this space. Under the BRAC process, if these vacated facilities or portions thereof are reoccupied by other defense organizations, there is a corresponding cost for this reoccupation. Likewise, additional BRAC costs are required for facilities that remain empty to minimally maintain them, and costs are incurred if buildings are demolished. Supply and storage officials told us they were aware of this issue and said that their goal is to vacate as much space as possible by re-warehousing inventory and by reducing personnel spaces, but they do not have a specific plan for what will happen to the space once it is vacated. In addition, until these recommendations are ultimately approved and implemented, it will not be known exactly how much space is available or how this space will be disposed of or utilized.
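The proportions behind this infrastructure savings claim are easy to reproduce from the reported figures (about $100 million of roughly $406 million in net annual recurring savings, on about 27 million square feet). The implied per-square-foot rate below is our derived figure for illustration, not one the group reported.

```python
# Quick check of the infrastructure savings proportions reported above.
# The implied per-square-foot rate is DERIVED here for illustration; the
# group did not report it, and the report cautions that the savings may
# not materialize if vacated space is reoccupied rather than eliminated.

infrastructure_savings = 100.0e6     # dollars per year (about $100 million)
total_recurring_savings = 406.0e6    # dollars per year (about $406 million)
vacated_space_sq_ft = 27.0e6         # about 27 million square feet

share = infrastructure_savings / total_recurring_savings
implied_rate = infrastructure_savings / vacated_space_sq_ft

print(f"Share of net annual recurring savings: {share:.0%}")        # about 25%
print(f"Implied avoided cost per square foot: ${implied_rate:.2f}/year")
```

The claimed savings thus amount to an avoided sustainment and recapitalization cost of roughly $3.70 per square foot per year, which only materializes if the vacated space actually leaves the sustainment account.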
As a result, it is unclear how much of the estimated $100 million in net annual recurring savings will actually occur. Regarding inventory control point consolidation, the group weighed two scenarios: a more sweeping scenario that would have transferred more functions and staff to DLA and generated greater savings, and a more limited scenario that would transfer about 6,500 service staff to DLA and was estimated by the group to save $2.9 billion over a 20-year period. The latter scenario would leave nearly 3,900 service technical and engineering support personnel of the more than 10,300 service staff at existing service inventory control points. Senior-level principal members of the supply and storage group consider the technical and engineering support personnel positions to be more closely related to weapon system readiness and support to the warfighter than other inventory control point functions, such as contracting, budgeting, and inventory management, which are being transferred to DLA. These officials were not willing to suggest transferring the technical positions to DLA because of the perceived additional risk of not being able to supply critical parts to the warfighter when needed. Therefore, they approved the recommendation that generated less savings, but also less risk to weapon system readiness, and moved fewer inventory control point functions and fewer service staff to DLA. The Commission may wish to further examine the potential for greater savings from the transfer of more inventory control point functions versus the potential risk of not being able to supply critical parts when needed. The group also did not pursue the development of recommendations regarding the defense reutilization and marketing office activities, even though considerable excess capacity exists, as shown in table 36, in that area. Group officials told us that these activities, which are managed by DLA, are considered follower organizations that are currently undergoing an extensive A-76 initiative outside the BRAC process that is expected to either close or consolidate several activities and reduce staff levels at others.
DLA data indicate that 61 of the 67 reutilization and marketing office activities analyzed by the supply and storage group are involved in the A-76 effort and that the agency expects to save about $36 million through 2011 as a result.

The Technical Joint Cross-Service Group followed the common analytical framework established by the Office of the Secretary of Defense (OSD) in reviewing its functions and facilities. The group included in its report 13 recommendations that it projects would generate about $2.2 billion in 20-year net present value savings for DOD. These 13 recommendations incorporate a total of 6 closures, 62 realignments, and 1 disestablishment action. Additionally, the technical group transferred parts of nine recommendations to other joint cross-service groups or military services, which, combined with other actions, resulted in three additional closures. The majority of the projected annual recurring savings result from eliminating civilian and contractor personnel and vacating leased space. The recommendations have payback periods—the time required for savings to offset closure and realignment costs—ranging from 1 to 26 years. Limited progress was made to foster greater jointness and transformation. The DOD Inspector General and the military service audit agencies, which performed audits of the data used in the process, concluded that the data were sufficiently reliable for use during the BRAC process. While available data supporting the recommendations suggest their implementation should provide for more efficient operations within the department, we believe there are some issues that the BRAC Commission may wish to examine more closely during its review process. The technical group was chaired by the Director, Defense Research and Engineering; it consisted of senior members from each military department and the Joint Chiefs of Staff.
The group created five subgroups to evaluate the technical facilities: (1) Command, Control, Communications, and Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR); (2) Air, Land, Sea, and Space Systems; (3) Weapons and Armaments; (4) Innovative Systems; and (5) Enabling Technologies. In addition, the group also created a Capabilities Integration Team and an Analytical Team to support the efforts of the subgroups. The technical group established two principles to guide its analysis and recommendation development: (1) provide efficiency of operations by consolidating technical facilities to enhance synergy and reduce excess capacity and (2) maintain competition of ideas by retaining at least two geographically separated sites. The group analyzed three functions within DOD: research; development and acquisition; and test and evaluation. It focused its analysis of the 3 functions across 13 technical capability areas—air platforms; battlespace environments; biomedical; chemical and biological defense; ground vehicles; human systems; information systems; materials and processes; nuclear technology; sea vehicles; sensors, electronics, and electronic warfare; space platforms; and weapons and armaments. Each of the military services and some defense agencies perform work in the functions and technical capability areas. The group developed a strategic framework based on its two principles that focused on establishing multifunctional and multidisciplinary centers of excellence, which served as the starting point for developing scenarios. These strategy-driven scenarios were later confirmed by capacity and military value data and military judgment. The DOD Inspector General and service audit agencies performed an important role in ensuring the accuracy of data used in these analyses through extensive audits of data gathered at various locations.
The group developed capacity measures for the technical facility categories and subsequently collected certified data on these measures from the technical facilities performing work in each of the technical facility categories. Excess capacity was defined as the difference between current usage plus a surge factor and peak capacity. Current usage was defined as the average usage for fiscal years 2001 through 2003, and peak capacity was defined as the maximum capacity for the measure. The group set the surge factor at 10 percent of current capacity, based on military judgment of how the technical community has approached surge in the past. The group calculated excess capacity for each of the 39 technical facility categories; however, the aggregated data provide more insight into the amount of excess capacity. Table 39 shows the excess capacity that the technical group found through its analysis. The group reported that the current required capacity, including surge, across all technical capability areas and functions is 169,596 work years. The group found the equivalent of 13,368 work years, or 7.9 percent, excess capacity across the three functions. The group reports that its recommendations eliminate approximately 3,000 work years. Based on these calculations, approximately 6 percent excess capacity would remain if all of the group’s recommended actions are implemented. The work year reductions include those made through the technical group’s 13 recommendations but do not include reductions in technical excess capacity achieved through, for example, the closures of Fort Monmouth, New Jersey, and Brooks City-Base, Texas, which are included in the Army and Medical Joint Cross-Service Group recommendations, respectively. As with the capacity analysis, the technical group’s assessment of military value included an assessment of the technical infrastructure across the 39 technical facility categories. The group weighted each of the four military value criteria based on the importance of the criterion to the technical function.
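The excess capacity arithmetic described above can be restated in a short sketch. This is our illustrative restatement of the aggregate figures the group reported, not the group's own computation; the function names are ours.

```python
# Aggregate figures reported by the technical group, in work years.
REQUIRED_WITH_SURGE = 169_596  # current usage plus the 10 percent surge factor
EXCESS = 13_368                # capacity beyond the required level

def excess_percent(excess, required):
    """Excess capacity expressed as a percentage of required capacity."""
    return 100.0 * excess / required

def remaining_excess_percent(excess, eliminated, required):
    """Excess remaining after recommendations eliminate some work years."""
    return 100.0 * (excess - eliminated) / required

print(round(excess_percent(EXCESS, REQUIRED_WITH_SURGE), 1))                   # 7.9
print(round(remaining_excess_percent(EXCESS, 3_000, REQUIRED_WITH_SURGE), 1))  # 6.1
```

The second figure reproduces the roughly 6 percent excess capacity that the report says would remain after the recommended actions eliminate approximately 3,000 work years.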
The group used the same weights for the research and development and acquisition functions, but different weights for the test and evaluation function due to differences in the type of work conducted at these facilities. Table 40 shows the weights for the three functions. The group based its military value assessment on five attributes: people, which measures intellectual capital; physical environment, which measures special features of technical physical structures; equipment, which measures the presence of physical structures unique within DOD and the value, condition, and use of these structures; operational impact, which measures the output of the three functional areas (research, development and acquisition, and test and evaluation); and synergy, which measures work on multiple technical capability areas and functions and jointness. The technical group developed weights for the five attributes, which were applied to each of the criteria, and for the 30 metrics divided among the five attributes. While the group allowed the evaluative weights for the metrics to vary across its subgroups, it used the same weights for the five attributes. The evaluative weight assigned to attributes varied among the three functions because a particular attribute could have greater importance for one function than another. For example, the technical group weighted the people attribute for criterion 1 at 17 percent of the total military value score for research, 13 percent for development and acquisition, and 16 percent for test and evaluation. While the attribute weights were the same for activities across subgroups, the metric weights varied by subgroup. For example, the Air, Land, Sea, and Space Systems subgroup weighted the patents, publications, and awards metric of criterion 1 for the research function at 30 percent of the total for the people attribute, while the Weapons and Armaments subgroup weighted the same metric at 18 percent.
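The two-level weighting scheme (metric weights within an attribute, attribute weights within a criterion) can be illustrated with a small sketch. The 17 percent people-attribute weight and the 30 percent patents/publications/awards metric weight come from the text above; the metric scores and the remaining weights are hypothetical values of ours, not the group's.

```python
def weighted_score(scores, weights):
    """Combine normalized metric scores (0 to 1) with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))

# People attribute for the research function: the patents, publications,
# and awards metric weighted at 0.30 (per the Air, Land, Sea, and Space
# Systems subgroup); the other two metric weights and all scores are
# illustrative placeholders.
people_score = weighted_score([0.8, 0.6, 0.9], [0.30, 0.40, 0.30])

# The people attribute contributes 17 percent of the criterion 1 military
# value score for the research function.
criterion1_contribution = 0.17 * people_score
print(round(people_score, 2), round(criterion1_contribution, 4))
```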
Figure 19 provides an example of the technical group’s military value attributes, metrics, data sources, and their link to the four BRAC military value criteria; example metrics include the highest education level of the professional/technical workforce and the number and funding of other services’ programs executed at the facility. The BRAC military value criteria are the first four BRAC selection criteria. All technical facilities were analyzed using the technical group’s military value approach, regardless of whether the resulting action ended up in the technical group’s 13 recommendations or in another service’s or joint cross-service group’s recommendations. For example, part of the Army’s recommendation to close Fort Monmouth relocates the information systems research and development and acquisition functions to Aberdeen Proving Ground, Maryland. The technical group followed the same process in gathering data and calculating a military value score for these functions as it did for all other technical functions. The DOD Inspector General found that certified data were used for the group’s capacity and military value analyses and that there was an adequate audit trail for the capacity, military value, and COBRA input data. Through extensive audits of the data collected from technical facilities during the process, the service audit agencies notified the technical facilities of identified data discrepancies, and the facilities were to take corrective action. While the process for detecting and correcting data errors was quite lengthy and challenging, the DOD Inspector General and service audit agencies deemed the technical data to be sufficiently reliable for use in the BRAC process. While available data supporting the recommendations suggest their implementation should provide for more efficient operations within the department, we believe there are some issues that the BRAC Commission may wish to examine more closely during its review process.
The technical group’s proposed recommendations result in a total projected net savings of $2.2 billion over 20 years, with net annual recurring savings of $265.5 million per year. Table 41 provides a summary of the financial aspects of the group’s recommendations, most of which are realignment actions. The majority of the projected net annual recurring savings are based on eliminating civilian and contractor personnel ($167.7 million) as functions are realigned between installations and vacating leased space ($51.8 million). On the other hand, the majority of the projected costs are for constructing new facilities ($644.6 million) and moving personnel and equipment ($326.7 million) to the gaining installations. The group’s 13 recommendations include 6 closures, 62 realignments, and 1 disestablishment for a total of 69 actions. For example, the group’s recommendation to consolidate maritime C4ISR research, development and acquisition, and test and evaluation includes 16 realignment actions and 1 disestablishment action. The technical group’s recommendations support, to a limited extent, the goals of maximizing jointness and furthering transformation efforts within the department. Eight of the group’s 13 recommendations move functions from one service or defense agency’s installation to another service’s installation. For example, the recommendation to create an integrated weapons and armaments specialty site for guns and ammunition moves seven Navy functions to an Army installation. While the chairman of the group’s Capabilities Integration Team told us that all of the group’s recommendations were transformational, the supporting information often suggested the recommendations were more focused on combining like work at a single location without a clear indication of how it provided for transformation. 
Two of the group’s recommendations specifically mention transformation in their justification statements, but the transformational effects are not clear in the documentation. For example, the recommendation to create an air integrated weapons and armaments research, development and acquisition, and test and evaluation center states that it supports transformation because it moves and consolidates smaller weapons and armaments efforts into high military value integrated centers and leverages synergy among the three functions; however, the documentation does not discuss how these actions are transformational. Time did not permit us to assess the operational impact of each of the technical group’s recommendations, particularly where operations proposed for consolidation extend across multiple locations outside of a single geographic area. At the same time, we offer a number of broad-based observations about issues that the BRAC Commission may wish to consider during its review process. Specifically, the Commission may want to consider whether the level of personnel reductions is attainable, issues related to projected savings from vacating leased space, the long payback period and relatively small savings for some recommendations, and the economic impact of one recommendation. The technical group developed a standard assumption to eliminate 15 percent of military and civilian personnel affected by a recommendation for consolidation and joint actions, based on personnel eliminations at technical facilities in previous BRAC rounds. The group used a different assumption (a 5.5 percent reduction in affected military and civilian personnel) for co-location actions because it believed there would likely be fewer efficiency gains for co-locations than for consolidations or joint actions.
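The effect of the two standard reduction assumptions can be sketched directly. The 15 percent and 5.5 percent rates come from the text; the 1,000-position base is a hypothetical figure of ours.

```python
def positions_eliminated(affected_positions, reduction_rate):
    """Positions cut under the group's standard reduction assumptions."""
    return round(affected_positions * reduction_rate)

# For a hypothetical 1,000 affected military and civilian positions:
print(positions_eliminated(1_000, 0.15))   # consolidation/joint actions -> 150
print(positions_eliminated(1_000, 0.055))  # co-location actions -> 55
```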
A technical group official told us that in some cases the group used higher personnel reduction estimates than the standard because the military department provided for higher estimated personnel reductions in the certified data, and the military services agreed with all personnel eliminations in the group’s recommendations. We believe there is some uncertainty regarding the magnitude of the group’s expected savings for these personnel reductions because its estimates are based on assumptions that have undergone limited testing and full savings realization depends upon the attainment of these personnel reductions. Eight of the group’s 13 recommendations eliminate at least 15 percent of military and civilian personnel positions affected by the recommendation. Personnel savings account for at least 40 percent, and as much as 100 percent, of the group’s projected annual recurring savings for each of these 8 recommendations. Almost three-quarters of all personnel savings come from civilian personnel eliminations. Similar to military and civilian personnel, the technical group developed a standard assumption that the subgroups could eliminate 15 percent of contractor personnel and could take $200,000 in recurring savings for each contractor position eliminated. It is unclear from the data what percentage of contractor positions were eliminated because the total number of contractor personnel is not included in the COBRA data. Seven of the group’s recommendations include savings from eliminating contractor personnel, for a total of $53.9 million in net annual recurring savings. In contrast, the data on economic impact (criterion 6 of the BRAC selection criteria) show a net loss of 508 contractor personnel in 10 recommendations, which would have totaled $101.6 million in net annual recurring savings. 
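The contractor savings factor translates directly into dollars, which is how the $101.6 million figure follows from the 508 positions in the economic impact data. A minimal check of that arithmetic:

```python
SAVINGS_PER_CONTRACTOR = 200_000  # the group's standard recurring-savings factor

def contractor_recurring_savings(positions_eliminated):
    """Annual recurring savings implied by eliminated contractor positions."""
    return positions_eliminated * SAVINGS_PER_CONTRACTOR

# The 508 net contractor losses shown in the economic impact data imply
# $101.6 million per year at the standard factor, versus the $53.9 million
# claimed across the seven recommendations.
print(contractor_recurring_savings(508) / 1e6)  # 101.6
```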
Technical group officials told us that both sets of numbers are based on certified data from the services; however, they added that the contractor data were difficult to collect because they were provided by the services through the scenario data calls, rather than as standard data in the COBRA model. It is unclear to what extent the personnel reductions assumed in the group’s recommendations will be attained, largely because of uncertainties associated with the group’s assumptions. For example, the group’s recommendation to create a naval integrated weapons and armaments research, development and acquisition, and test and evaluation center includes the reduction of 15 percent of military and civilian personnel. As mentioned above, the technical group assumed a standard 15 percent reduction in military and civilian personnel for consolidation and joint actions and a 5.5 percent reduction in military and civilian personnel for co-location actions. Because we are uncertain whether the 15 percent reduction in military and civilian personnel for consolidations and joint actions is attainable, we determined the costs and savings of the recommendation with the 5.5 percent personnel reduction for co-locations. Table 42 shows the financial aspects of DOD’s original recommendation with a 15 percent reduction in military and civilian personnel, our analysis of the recommendation with a 5.5 percent reduction in military and civilian personnel, and the difference between the two, including the payback period in years and the 20-year net present value savings (cost) of each. Our analysis identified some inconsistencies in projecting annual recurring savings and one-time savings in three recommendations to move activities from leased space. The technical group used two different methodologies to project annual recurring savings from vacating leased space.
In one recommendation, the group projected annual recurring savings based on estimated future lease costs, while in the other two, the group used actual lease cost data provided by the military services and defense agencies. Furthermore, the recommendation to co-locate the extramural research program managers also includes $2.7 million in annual recurring savings for the Defense Threat Reduction Agency vacating leased space; however, the agency is already scheduled to move to Fort Belvoir, Virginia, in January 2006. The technical group also included $14.5 million in one-time savings for seven of the eight activities vacating leased space, reflecting the avoided cost of upgrading existing leased space to meet DOD’s antiterrorism and force protection standards. The group did not collect data that would indicate whether existing leases already met the antiterrorism and force protection standards. Our analysis indicates that excluding these one-time savings would have minimal impact on the overall projected savings of the technical group’s recommendations. Only 3 of the 13 recommendations achieve savings during the 6-year implementation period, and 3 of the group’s recommendations take longer than 10 years to achieve savings, far longer than typically occurred in the 1995 BRAC round. According to a technical group official, the recommendation to establish a center for rotary wing air platform research, development and acquisition, and test and evaluation, which has a 26-year payback, was retained because it realigns the technical-related work away from a test range at Fort Rucker, Alabama, which will provide for expanded training space. An Army official agreed that a potential benefit of realigning the test range at Fort Rucker is that it would make available hangars, facilities, and airspace for trainers.
For example, the Army said that the vacated hangar space could potentially be used to accommodate the Aviation Logistic School’s proposed move to Fort Rucker and the reduced demand for airspace will make additional airspace available to meet the current and future needs for manned and unmanned aviation training. The group’s recommendation to create an integrated weapons and armaments specialty site for guns and ammunition, which has one-time costs of $116.3 million and a 20-year net present value savings of $32.6 million, has a payback of 13 years. Technical group officials told us that this recommendation was determined to be worth the costs and longer payback period because it provides synergy and jointness, as well as eliminating some duplication, in research and development and acquisition of guns and ammunition for the Army and Navy. According to a group official, the group’s recommendation regarding Navy sensors, electronic warfare, and electronics research, development and acquisition, and test and evaluation, which has a 12-year payback period, is beneficial because it consolidates similar work currently performed at locations that are in proximity to each other and clears out laboratory space at Naval Air Station Point Mugu, California, that is needed for personnel moving in from Naval Support Activity Corona, California, through a Navy recommendation. The official added that while the payback for this recommendation is long, it should be put into perspective with the savings from closing Naval Support Activity Corona because the savings from closing that facility (net annual recurring savings of $6.0 million and a 20-year net present value of $0.4 million) would be smaller had the laboratory space not been available at Point Mugu. One of DOD’s BRAC selection criteria, criterion 6, required the department to consider the economic impact on existing communities in the vicinity of military installations when determining realignments and closures. 
In most cases, the group’s recommendations had a cumulative impact on communities of less than 1 percent, as measured by direct and indirect job loss as a percentage of employment for the economic area of the military installation. The exception is the recommendations that realign activities from Naval Surface Warfare Center Crane, Indiana, which would result in an economic impact of 9.3 percent. A technical group official stated that realigning the technical infrastructure to respond to defense needs over the next 20 years took priority over the economic impact of the proposed recommendation. Two of the group’s recommendations realign or eliminate approximately 460 military and civilian personnel and 80 contractor personnel from Naval Surface Warfare Center Crane, for a cumulative reduction of 9.3 percent of employment in Martin County, Indiana, when direct and indirect jobs are considered. There is also some uncertainty about the number of civilian personnel that would be realigned under the technical group’s recommendation to create a naval integrated weapons and armaments research, development and acquisition, and test and evaluation center. The recommendation proposes to realign about 1,400 civilian employees from Naval Air Station Point Mugu, California, to Naval Air Weapons Station China Lake, California. In its data call submission, however, Naval Air Station Point Mugu identified 505 civilian employees who operate or support an outdoor range and who it believes should remain at Point Mugu; the technical group’s recommendation nonetheless proposes to move these personnel to China Lake. A Navy official said that if the recommendation is approved, the Navy will decide the best way to manage the range, including the appropriate number of employees to retain at Point Mugu, during implementation.
Our analysis indicates that if the 505 civilian employees remain at Point Mugu, the 20-year net present value savings decreases by about $87.4 million but the payback period remains at 7 years. The technical group developed a scenario that would have allowed the Air Force to close Los Angeles Air Force Base, California, which may have further contributed to the accomplishment of BRAC objectives; however, the Air Force Base Closure Executive Group did not approve this scenario, citing (1) the base’s relatively high military value in development and acquisition—its military value in space development and acquisition is four times higher than that of Peterson Air Force Base, Colorado—and (2) a near-term operational risk due to a potential for schedule and performance disruption to development and acquisition programs and activities, intellectual capital, and synergy with industry based in Los Angeles and surrounding areas. Table 43 provides a summary of the financial aspects of this scenario, including its payback period and 20-year net present value costs or savings. Technical group officials told us that there are several reasons to close Los Angeles Air Force Base in addition to the net recurring savings ($52.9 million) and relatively high 20-year net present value savings ($358.5 million). Los Angeles Air Force Base is a single-service installation that primarily performs one function in one technical capability area—development and acquisition of space platforms. The technical group sought to identify opportunities to consolidate smaller single-function locations into larger multifunction facilities, so closing Los Angeles Air Force Base would meet this goal. The group proposed to move the functions at Los Angeles Air Force Base to Peterson Air Force Base to co-locate the development and acquisition function with the operational user. Other alternatives could achieve other goals.
For example, moving the space development and acquisition function from Los Angeles Air Force Base to Kirtland Air Force Base, New Mexico, which performs research on space platforms, could expedite the transition of technology from the research phase to development and acquisition. Alternatively, there could be increased jointness among the services if the functions at Los Angeles Air Force Base were moved to Redstone Arsenal, Alabama, where much of the Army’s space platform development and acquisition work is done. DOD used a quantitative model, known as the Cost of Base Realignment Actions (COBRA) model, to provide consistency across the military services and the joint cross-service groups in estimating the costs and savings associated with BRAC recommendations. DOD has used the COBRA model in all previous BRAC rounds and over time has made improvements designed to provide better estimating capability. Similarly, DOD has continued to improve the model for its use in the 2005 BRAC round. We have examined COBRA in the past and during this review and have found it to be a generally reasonable estimator for comparing potential costs and savings among candidate alternatives. As with any model, the quality of the output is a direct function of the input data. Also, as in previous rounds, the COBRA model, which relies to a large extent on standard factors and averages, does not represent budget quality estimates that will be developed once BRAC decisions are made and detailed implementation plans are developed. The COBRA model also does not include estimated costs of environmental restoration, as DOD considers these costs a liability that must be addressed whether or not an installation is closed. Among its outputs, the model estimates costs for the actions, annual recurring savings, and the net present value of BRAC actions, calculated over a 20-year time frame.
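The payback period and 20-year net present value measures that COBRA reports can be illustrated with a simplified sketch. This is not COBRA's actual algorithm; the discount rate and the cash flows below are hypothetical values of ours.

```python
def payback_period(one_time_cost, annual_savings):
    """First year in which cumulative recurring savings offset the one-time cost."""
    if annual_savings <= 0:
        return None  # the action never pays back
    years, cumulative = 0, -one_time_cost
    while cumulative < 0:
        years += 1
        cumulative += annual_savings
    return years

def npv_20_year(one_time_cost, annual_savings, discount_rate=0.028):
    """Discounted net savings over a 20-year horizon (illustrative rate)."""
    return sum(annual_savings / (1 + discount_rate) ** t
               for t in range(1, 21)) - one_time_cost

# A hypothetical action costing $100 million up front and saving
# $20 million per year pays back in 5 years.
print(payback_period(100.0, 20.0))  # 5
```

In practice COBRA's calculations are far more detailed (constant-year dollars, standard cost factors, phased implementation), but the structure of the two summary measures is as above.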
Collectively, this financial information provides important input into the selection process as decision makers weigh the financial implications for various BRAC actions along with military value and other factors (for example, military judgment) in arriving at final decisions regarding the suitability of BRAC recommendations. The COBRA model uses a set of formulas, or algorithms, that rely on standardized data as well as base- and scenario-dependent data to perform its calculations. Standard factors are common to a class of bases and are applicable for all recommendations that involve those bases. Some standard factors apply only to one DOD component or a subset of a component’s bases, while others are applicable to all bases DOD-wide. Typical standard factors include, for example, average personnel salaries and costs per mile and per ton for moving personnel and equipment. Base- and recommendation-specific data, which were to be certified in accordance with the BRAC statute, include, for example, the number of authorized personnel on a base, the size of the base, and annual sustainment costs. As with any model, the quality of the output is a direct function of the quality of the input data. For this reason, the data used in COBRA were to be certified, in a manner similar to that employed for the capacity and military value data, as to their accuracy. The COBRA model has been used in the base closure process since 1988, and in the intervening years it has been consistently revised to address the problems we and others have identified after each round. DOD has once again made improvements to the model, as shown in table 44, that are designed to further refine its estimating capability. In one case, the model had not previously accounted for the costs associated with enclaves created during the prior BRAC rounds, thereby having the effect of overstating the savings for those particular BRAC actions. Consequently, the Joint Process Action Team provided for the inclusion of these costs in the COBRA model.
In another case, the Joint Process Action Team developed an approach to incorporate longer term estimated facility recapitalization costs in COBRA, thus overcoming a COBRA shortcoming that we identified in our 1997 report on lessons learned from the prior BRAC rounds. As was done in the 1995 BRAC round, the Army Audit Agency examined the improved COBRA model to determine whether the model accurately calculated cost and savings estimates as described in the user’s manual. The Army Audit Agency assumed this responsibility at the request of the Army Basing Study Group since the Army serves as the executive agent for the COBRA model. The Army Audit Agency tested all 340 algorithms in the model as presented in the user’s manual and reported in September 2004 that COBRA accurately calculated costs and savings as prescribed in the manual. Following the audit, however, multiple revisions were made to the model, including changes to the TRICARE and privatization algorithms, because of programming errors in the model. The Army Audit Agency subsequently reexamined the revisions where these algorithms were modified and concluded in a similar fashion that the model accurately calculated the estimates. In addition, the Army Audit Agency validated the certified data and documentation supporting the standard factors used in the model. Even so, the precision of the model’s estimates depends on such factors as the specific application of the model, the accuracy of the input data, and the flexibility provided to users of the model to consider additional input data that can affect cost and savings estimates. The following are examples of cases where the specific application of the model can have an effect on the estimates: The COBRA model generates a dollar amount attributable to the reduction or elimination of military personnel at realigning or closing bases.
While it has been DOD’s practice to classify these reductions or eliminations as recurring savings, we have consistently taken the view that these actions should not be counted as savings that can be used outside the military services’ personnel accounts unless commensurate reductions are made in the affected military services’ end strengths. We acknowledge that these actions may afford DOD the opportunity to redirect these personnel to serve in other roles that would benefit DOD. Our analysis of DOD data indicates that about 47 percent—about $2.6 billion—of the expected net annual recurring savings of nearly $5.5 billion for the 2005 round are attributable to these military personnel actions, for which reductions in the military personnel end-strength levels are not planned. The COBRA model provides users with considerable flexibility in estimating one-time and miscellaneous recurring costs or savings of various recommendations by allowing them to consider what actions might constitute a cost or savings and what the expected dollar amounts should be. Validating the level of projected savings is less clear-cut for recommendations that, instead of closing facilities, realign workloads from one location to another, or that estimate savings in overhead or other consolidation efficiencies. The dollar amounts could be based on specific assumptions as well as certified data but nonetheless be subject to greater degrees of uncertainty pending implementation than would be actions resulting in facility closures, where expected reductions are more clear-cut. Our analysis of the BRAC recommendations showed inconsistencies across some of the services and joint cross-service groups in applying COBRA in this area; as a result, the estimated costs or savings for some recommendations would be either understated or overstated. Time did not permit us to determine the extent to which this might be the case in the proposed recommendations.
Although COBRA has provided DOD with a standard quantitative approach enabling it to compare the estimated costs and savings associated with various proposed BRAC recommendations, it should be noted that it does not necessarily reflect with a high degree of precision the actual costs or savings that are ultimately associated with the implementation of a particular BRAC action. COBRA is not intended to produce budget-quality data and is not used to develop the budgets for implementing BRAC actions, which are formulated following the BRAC decision-making process. COBRA estimates may vary from the actual costs and savings of BRAC actions for a variety of reasons, including the following: COBRA estimates, particularly those based on standard cost factors, are imprecise and are later refined during implementation planning for budget purposes. The use of averages has an effect on precision. For example, as noted previously, COBRA uses authorized, rather than actual, base civilian personnel figures in its calculations. Our work has shown that the actual number of personnel may be lower or higher than that which is authorized. The authorized personnel levels are documented estimates, which can be readily audited. COBRA also uses a median national civilian personnel salary figure (adjusted by locality pay), rather than average pay at a particular base, in its calculations. Further, COBRA estimates are expressed in constant-year dollars, whereas budgets are expressed in then-year dollars. 36 percent, or $8.3 million, of the $23.3 million in costs incurred through fiscal year 2003 for implementing BRAC actions for the previous four BRAC rounds. Further, COBRA does not include estimates for some other costs to the federal government, particularly those related to other federal agencies or DOD providing assistance to BRAC-affected communities. That is because assistance costs depend on specific implementation plans that are unknown at the time COBRA estimates are developed. 
In our January 2005 report on the previous BRAC rounds, we noted that about $1.9 billion in such costs had been incurred through fiscal year 2004. Some savings are not fully captured in COBRA as well. COBRA does not include estimates, for example, for anticipated sales of BRAC surplus property or other revenue that may be collected in the future through property leasing arrangements with BRAC-affected entities. These revenues can help offset some of the costs incurred in implementing BRAC actions. While such estimates had been included in COBRA in the previous rounds, the Joint Process Action Team decided not to include any such estimates for the 2005 round because of the difficulty in estimating the amount of these revenues. Nonetheless, while COBRA estimates do not necessarily reflect the actual costs and savings ultimately attributable to BRAC, we have recognized in the past and continue to believe that COBRA is a reasonably effective tool for the purpose for which it was designed: to aid in BRAC decision making. It provides a means for comparing cost and savings estimates across alternative closure and realignment recommendations. One of the eight selection criteria used to make BRAC decisions was the economic impact on existing communities in the vicinity of military installations resulting from BRAC recommendations. DOD measured the economic impact of BRAC recommendations on the affected community’s economy in terms of total potential job change—measured both in absolute terms (estimated total job changes) and relative terms (total job changes as a percentage of the economic area’s total employment). This approach to measuring economic impact is essentially the same approach DOD used in the 1995 BRAC round. In a series of reports that examined the progress in implementing closures and realignments in prior BRAC rounds, we examined how the communities surrounding closed bases were faring in relation to key national indicators.
In our last status report, we observed that most communities surrounding closed bases were faring well economically in relation to key national economic indicators. While some communities surrounding closed bases were faring better than others, most have recovered or are continuing to recover from the impact of BRAC, with more mixed results recently, allowing for some negative impact from the 2001 recession. While there will be other economic impacts from 2005 BRAC actions that DOD did not consider, such as changes in the value of real estate or changes in the value of businesses in the economic area, we believe that the magnitude of job changes would be correlated with the changes in these other dimensions of economic impact. Although not a precise predictor of the economic impact, we and an independent panel of experts assembled by DOD agree that the methodology used by DOD makes a reasonable attempt to measure economic impact of BRAC actions, both in terms of communities losing and gaining jobs as a result of BRAC actions.

DOD assessed the economic impact of realignments and closures using a methodology that sought to estimate the total direct and indirect job changes. To perform its assessment, DOD established the Economic Impact Joint Process Action Team with members of the services and the Office of the Secretary of Defense (OSD) to develop an economic impact model for the services and joint cross-service groups to use as they considered potential recommendations. The team met many times to develop the economic methodology. We attended and observed those meetings as the methodology was developed. DOD also retained a private firm, Booz Allen Hamilton, to provide technical assistance in developing the methodology and computer database used by the military services and joint cross-service groups in calculating economic impacts in communities for which they were considering closure or realignment actions. The economic area for each base was generally defined as the Metropolitan Statistical Area (MSA), Micropolitan Statistical Area, or Metropolitan Division in which the base’s primary county or counties lie.
For bases in counties not in an MSA, Micropolitan Statistical Area, or Metropolitan Division, the economic area was defined as the county itself. The economic impact of a potential action on an area was measured in terms of direct and indirect job changes estimated from 2006 through 2011, as shown below:

Estimated Total Job Changes = Direct Job Changes x (1 + indirect multiplier + induced multiplier)

Direct job changes are the estimated net addition or loss of jobs for military personnel, military students, civilian employees, and contractor mission support employees. The indirect job changes are the estimated net addition or loss of jobs in each economic area that could potentially occur as a result of the direct job changes. DOD considered two types of indirect job changes: (1) indirect job changes that are associated with the production of goods or the provision of services that are direct inputs to a product, such as a subcontractor producing components for a weapon system, and (2) induced job changes that result from local spending by direct and indirect workers, such as retail sales. The multipliers used in these calculations were obtained from the Minnesota IMPLAN Group, Inc. (MIG) for the economic area in which each base was located. Indirect multipliers were estimated by mapping Military Occupational Specialties (MOSes) to economically similar civilian sectors. The multiplier for each of these similar economic sectors was weighted by the number of military personnel mapped to that sector divided by the total number of military personnel at the base. Examples of these economically similar sectors are educational services, administration and support services, scientific research and development services, aerospace product and parts manufacturing, and electronic repair and maintenance. Judgment was used to place all MOSes into one of the industrial sectors. A weighted average of the indirect multipliers, based on the weights discussed above, for each base was used to estimate the indirect job changes from military personnel.
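As an arithmetic illustration, the job-change formula and the weighted-average indirect multiplier described above can be sketched as follows. All of the multiplier values, sector weights, and job figures below are hypothetical, chosen only to show the mechanics; they are not DOD data.

```python
# Illustrative sketch of DOD's total-job-change formula and the
# weighted-average indirect multiplier (all numbers hypothetical).

def total_job_changes(direct, indirect_mult, induced_mult):
    """Estimated Total Job Changes = Direct x (1 + indirect + induced)."""
    return direct * (1 + indirect_mult + induced_mult)

def weighted_indirect_multiplier(sector_multipliers, sector_weights):
    """Weighted average of sector indirect multipliers.

    Weights are the fraction of base military personnel mapped to each
    economically similar civilian sector (weights sum to 1).
    """
    return sum(m * w for m, w in zip(sector_multipliers, sector_weights))

# Hypothetical base: 60 percent of personnel mapped to a sector with an
# indirect multiplier of 0.20, 40 percent to a sector with 0.35.
indirect = weighted_indirect_multiplier([0.20, 0.35], [0.60, 0.40])
print(round(indirect, 2))  # 0.26

# A hypothetical loss of 1,000 direct jobs with an induced multiplier of 0.50:
print(round(total_job_changes(-1000, indirect, 0.50)))  # -1760
```

In this sketch, a loss of 1,000 direct jobs is scaled up to an estimated total loss of 1,760 jobs once the indirect and induced effects are added, which is the basic mechanism behind the estimates discussed in the text.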
The same weighted average of indirect multipliers used to estimate the military indirect multiplier for each base was also used to estimate the indirect job changes from civilian personnel job changes, as well as the indirect job changes for mission-support contractors at each base. Estimating the induced job changes from military and civilian job changes was more straightforward. For each economic area, MIG provided one induced multiplier for military personnel job changes and one for nonmilitary government job changes. These multipliers were used to estimate the induced job changes for each base in that economic area. Summing the products of the weights for each of the civilian industries calculated for the military indirect multipliers and the induced multipliers for each of the industries from MIG produced the induced multiplier used for mission support contractor job changes.

Because of a concern about the lower spending of military trainees at recruit training facilities, an adjustment was made to reduce the values of the induced multipliers used for job changes of military trainees at recruit training bases. The Economic Impact Joint Process Action Team was concerned about overestimating induced job changes for such trainees, reasoning that they have a smaller economic impact than civilian employees and regular military personnel, including military personnel who receive more advanced training, because recruits receive relatively smaller incomes and are generally transient. Student multipliers for bases with recruit training programs were estimated by multiplying the military induced multiplier for an economic area by the ratio of basic training wages to average military wages (slightly more than a third). Student induced multipliers for bases without basic training programs were set equal to the military induced multiplier for the base’s economic area.
The team thought that these more advanced students were likely to have incomes and spending habits similar to the average military personnel in the economic area. Some of the joint cross-service groups subsequently considered a small number of bases (leased spaces or Reserve/Guard centers) that were not included in the initial set of defined economic areas. For these economic areas, a generic set of multipliers was developed by averaging each of the multipliers of the five categories (military, civilian, contractor, student, and recruit training student) over the existing economic areas. Estimated job changes were then summarized by economic area (net result of all actions for the economic area). The total potential job change and the total potential job change as a percentage of total employment in an economic area were to be considered in the context of historical economic data. For historical context, the services and the joint cross-service groups considered the following for each economic area: total employment, 1988 to 2002; annual unemployment rates, 1990 to 2003; and real per capita income, 1988 to 2002. In addition, the latest available numbers on population would be provided. These dates were chosen to reflect the latest available data from federal sources.

In the 1995 BRAC round, DOD developed a separate method of assessing cumulative economic impact because some of the closures and realignments from the prior rounds had not been fully implemented, so special consideration was given to the economic impacts that were yet to occur. However, in 2005, given the passage of time since all four of the previous BRAC rounds, which extended from 1988 to 1995, and other factors contributing to changing economic conditions in the interim period, DOD decided not to consider the cumulative economic impact of the prior BRAC rounds in assessing the impact of the current round. We believe DOD’s decision not to assess a cumulative economic impact for the 2005 round has merit.
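As a numerical illustration of the recruit-student adjustment described earlier, in which an economic area's military induced multiplier is scaled by the ratio of basic training wages to average military wages, the following sketch uses hypothetical wage figures; the report states only that the actual ratio was slightly more than a third.

```python
# Sketch of the recruit-student induced-multiplier adjustment.
# The wage figures below are hypothetical illustrations, not DOD data.

def student_induced_multiplier(military_induced_mult,
                               basic_training_wage,
                               average_military_wage,
                               has_recruit_training):
    """Induced multiplier for military students at a base.

    Bases with recruit training scale the military induced multiplier
    by the ratio of basic training wages to average military wages;
    bases without basic training keep the full military multiplier.
    """
    if not has_recruit_training:
        return military_induced_mult
    return military_induced_mult * (basic_training_wage / average_military_wage)

# Hypothetical economic area: military induced multiplier of 0.60;
# basic training wage of $15,000 vs. average military wage of $42,000
# (a ratio slightly more than a third, as the report describes).
print(round(student_induced_multiplier(0.60, 15_000, 42_000, True), 3))  # 0.214
print(student_induced_multiplier(0.60, 15_000, 42_000, False))           # 0.6
```

The scaled multiplier for a recruit training base is roughly a third of the full military multiplier, which captures the team's judgment that transient, lower-paid trainees generate less local spending.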
DOD had extensive documentation controls governing how documents for economic impact were prepared, handled, and processed. Procedures were used to ensure that the inputs, such as the values of the multipliers, used to make calculations on job changes were correct. A review by qualified analysts who did not participate in the initial calculations was also conducted.

DOD’s approach to measuring economic impact did not capture all the dimensions of the economic impact flowing from a BRAC action. There will be other economic impacts on the economic area, such as changes in the value of real estate or the value of businesses in the area. The DOD approach did not estimate these effects, but it is reasonable to assume that the magnitude of job losses would be correlated with the changes in these values. DOD’s methodology does have some limitations. Specifically, it tended to overstate the employment impact for economic areas. One of DOD’s goals for the methodology was to produce credible estimates but to err on the side of overstating the actual impacts in order to prevent others from arguing that DOD was underestimating economic impact. The Joint Process Action Team was aware that the methodology did not account for factors that might offset the estimated job losses. For example, the methodology assumed that jobs are lost all at once and did not recognize that employees may be released over the 6-year implementation period and be reemployed in other local businesses or outside the economic area, which would reduce the estimated job loss. The methodology also did not recognize the possible civilian reuse of the affected base and the resulting reemployment of workers, which would reduce the estimated job losses. The construction of the indirect multipliers is also open to question.
The indirect multiplier used to estimate job changes from military job changes for a base is constructed as a weighted average multiplier in which the weights are the fraction of total base personnel judged to be similar to a particular civilian industry. Questions could be raised about the judgments made to map particular Military Occupational Specialties to activities in civilian industries. In some cases, the mapping from military jobs to industries was straightforward, such as military jobs in the medical area being mapped to the medical industry. However, in other areas where the jobs are uniquely military, such as infantry, the mapping would be more problematic. If a mistake was made in mapping a uniquely military job to a civilian sector, the result would depend on the relative size of the multiplier of the correct civilian sector versus the civilian sector used; it could lead to overestimation or underestimation of the indirect job change. Time did not permit us to examine this mapping. Nonetheless, we believe the overall approach was a sound attempt to produce a credible multiplier. Finally, in using the ratio of estimated job losses from 2006 through 2011 to total employment as of 2002 (the latest figure for total employment) as a measure of economic impact, the economic impact was likely overstated. This occurs because total employment is likely to grow in many economic areas over the 2006-2011 implementation period as local economies grow, which would reduce the overall percentage of job losses.

DOD’s methodology for assessing economic impact was reviewed in August 2004 by an independent panel of four economists and policy analysts from the private and academic sectors. DOD formed the panel to determine whether the methodology conformed to accepted economic practices. Three of the panel members were Ph.D. economists and the fourth was a policy analyst.
All four were experienced in conducting local economic impact studies and were not otherwise associated with the BRAC process. The panel found the methodology to be reasonable. The experts agreed that the use of direct and indirect job changes was a logical method to characterize the impact of proposed closures and realignments. The reviewers also concluded that DOD’s methodology represents a “worst-case” estimate of economic impact. We contacted each member of the panel to discuss their review of the methodology to ensure that DOD had adequately summarized the results of the panel meeting and that they agreed that the methodology was sound. We and the experts agreed that DOD had adequately summarized the review meeting and that the methodology was reasonable to use. We also reviewed the economic areas projected to experience the greatest negative employment change and the greatest positive employment change.

As noted in prior reports, we examined how the communities surrounding closed bases were faring in relation to two key national economic indicators—the national unemployment rate and the average annual real per capita income growth rate. In our last status report, we observed that most communities surrounding closed bases were faring well economically in relation to these key national economic indicators. While some communities surrounding closed bases were faring better than others, most have recovered or are continuing to recover from the impact of BRAC, with more mixed results recently, allowing for some negative impact from the 2001 recession.

Appendix XV Draft DOD Transformational Options Recommended for Approval

1. Consolidate Management at Installations with Shared Boundaries. Create a single manager for installations that share boundaries. Source & Application: H&SA

2. Regionalize Installation Support. Regionalize management of the provision of installation support activities across Military Departments within areas of significant Department of Defense (DoD) concentration, identified as Geographic Clusters.
Option will evaluate designating organizations to provide a range of services, regionally, as well as aligning regional efforts to specific functions. For example, a possible outcome might be designation of a single organization with the responsibility to provide installation management services to DoD installations within the statutory National Capital Region (NCR). Source and Application: H&SA

3. Consolidate or collocate Regional Civilian Personnel Offices to create joint civilian personnel centers. Source and Application: H&SA

4. Consolidate active and Reserve Military Personnel Centers of the same service. Source and Application: H&SA

5. Collocate active and/or Reserve Military Personnel Centers across Military Departments. Source and Application: H&SA

6. Consolidate same service active and Reserve local Military Personnel Offices within Geographic Clusters. Source and Application: H&SA

7. Collocate active and/or Reserve local Military Personnel Offices across Military Departments located within Geographic Clusters. Source and Application: H&SA

8. Consolidate Defense Finance and Accounting Service (DFAS) Central and Field Sites. Consolidate DFAS business line workload and administrative/staff functions and locations. Source and Application: H&SA

9. Consolidate Local DFAS Finance & Accounting (F&A). Merge/consolidate local DFAS F&A within Geographic Clusters. Source and Application: H&SA

10. Consolidate remaining mainframe processing and high capacity data storage operations to existing Defense Mega Centers (Defense Enterprise Computing Centers). Source and Application: H&SA

11. Establish and consolidate mobilization sites at installations able to adequately prepare, train and deploy service members. Source and Application: H&SA

12. Establish joint pre-deployment/re-deployment processing sites. Source and Application: H&SA

13. Rationalize Presence in the DC Area.
Assess the need for headquarters, commands and activities to be located within 100 miles of the Pentagon. Evaluation will include analysis of realignment of those organizations found to be eligible to move to DoD-owned space outside of a 100-mile radius. Source and Application: H&SA

14. Minimize leased space across the US and movement of organizations residing in leased space to DoD-owned space. Source and Application: H&SA

15. Consolidate HQs at Single Locations. Consolidate multi-location headquarters at single locations. Source and Application: H&SA

16. Eliminate locations of stand-alone headquarters. Source and Application: H&SA

17. Consolidate correctional facilities into fewer locations across Military Departments. Source and Application: H&SA

18. Collocate Reserve Component (RC) Headquarters. Determine alternative facility

19. Collocate Recruiting Headquarters. Analyze alternative Recruiting Headquarters alignments. Consider co-location of RC and Active Component (AC) Recruiting headquarters. Source and Application: H&SA

20. Establish a consolidated multi-service supply, storage and distribution system that

21. Privatize the wholesale storage and distribution processes from DoD activities that perform these functions. Source and Application: Supply & Storage

22. Migrate oversight and management of all service depot level reparables to a single DoD agency/activity. Source and Application: Supply & Storage

23. Decentralize Depot level maintenance by reclassifying work from depot-level to I-level. Source and Application: Industrial

24. Centralize I-level maintenance and decentralize depot-level maintenance to the existing (or remaining) depots. Eliminate over-redundancy in functions. Consolidate Intermediate and Depot-level regional activities. Source and Application: Industrial

25. Regionalize severable and similar work at the intermediate level. Source and

26. Partnerships Expansions.
Under a partnership, have government personnel work in contractor owned/leased facilities and realign or close facilities where personnel are currently working. Source and Application: Industrial

27. Collocate depots: Two Services use the same facility(s). Separate command structures but shared common operations. Source and Application: Industrial

28. Consolidate similar commodities under Centers of Technical Excellence. Source

29. Implement concept of Vertical Integration by putting entire life cycle at same site to increase synergies, e.g. production of raw materials to the manufacture of finished parts, co-locating storage, maintenance and demil. Source and Application: Industrial

30. Implement concept of Horizontal Integration by taking some of the most costly elements of the M&A processes and put them at the same site to increase efficiencies, e.g. put Load, Assemble and Pack (LAP) of all related munitions at same site. Source and Application: Industrial

31. Maintain a multi-service distribution and deployment network consolidating on regional joint service nodes. Source and Application: Industrial

32. Evaluate Joint Centers for classes and types of weapons systems and/or technologies used by more than one Military Department: Within a Defense Technology Area Plan (DTAP) Capability Area; Across multiple functions (Research; Development & Acquisition; Test & Evaluation); Across multiple DTAP capability areas. Source and Application: Technical

33. Evaluate Service-Centric concentration, i.e. consolidate within each Service: Within a Defense Technology Area Plan (DTAP) capability area; Across multiple functions (Research; Development & Acquisition; Test & Evaluation); Across multiple DTAP capability areas. Source and Application: Technical

34. Privatize graduate-level education. Source and Application: Education & Training

35.
Integrate military and DoD civilian full-time professional development education programs. Source and Application: Education & Training

36. Establish Centers of Excellence for Joint or Inter-service education and training by combining or co-locating like schools (e.g., form a “DoD University” with satellite training sites provided by Service-lead or civilian institutions). Source and Application: Education & Training

37. Establish “joint” officer and enlisted specialized skill training (initial skill, skill progression & functional training). Source and Application: Education & Training

38. Establish a single "Center of Excellence" to provide Unmanned Aerial Vehicle initial (a.k.a. undergraduate) training. Source and Application: Education & Training

39. Establish regional Cross-Service and Cross-Functional ranges that will support Service collective, interoperability and joint training as well as test and evaluation of weapon systems. Source and Application: Education & Training

40. Integrate selected range capabilities across Services to enhance Service collective, interoperability and joint training, such as Urban Operations, Littoral, training in unique settings (arctic, mountain, desert, and tropical). Source and Application: Education & Training

41. Combine Services' T&E Open Air Range (OAR) management into one joint management office. Although organizational/managerial, this option could engender further transformation. Joint management of OAR resources could encourage a healthy competition among OARs to increase efficiency and maximum utility DoD-wide. Source and Application: Education & Training

42. Consolidate or collocate at a single installation all services' primary phase of pilot training that uses the same aircraft (T-6). Source and Application: Education & Training

43.
Locate (division/corps) UEx and (corps/Army) UEy on Joint bases where practical to leverage capabilities of other services (e.g., strategic lift to enhance strategic responsiveness). Source and Application: Army

44. Locate (brigades) Units of Action at installations DoD-wide, capable of training modular formations, both mounted and dismounted, at home station with sufficient land and facilities to test, simulate, or fire all organic weapons. Source and Application: Army

45. Collocate Army War College and Command and General Staff College at a single

46. Locate Special Operations Forces (SOF) in locations that best support specialized training needs, training with conventional forces and other service SOF units and wartime alignment deployment requirements. Source and Application: Army

47. Collocate or consolidate multiple branch schools and centers on single locations (preferably with MTOE units and RDTE facilities) based on warfighting requirements, training strategy, and doctrine, to gain efficiencies from reducing overhead and sharing of program-of-instruction resources. Source and Application: Army

48. Reshape installations, RC facilities and RC major training centers to support home station mobilization and demobilization and implement the Train/Alert/Deploy model. Source and Application: Army

49. Increase the number of multi-functional training areas able to simultaneously serve multiple purposes and minimize the number of single focus training areas for the Reserve Components where possible. Source and Application: Army

50. Collocate institutional training, MTOE units, RDTE organizations and other TDA units in large numbers on single installations to support force stabilization and enhance training. Source and Application: Army

51. Locate units/activities to enhance home station operations and force protection. Source and Application: Army

52.
Consolidate aviation training with sister services for like-type aircraft to gain

54. Consolidate Army RDT&E organizations to capitalize on technical synergy across

55. Reduce the number of USAR regional headquarters to reflect Federal Reserve Restructuring Initiative (FRRI). Source and Application: Army

56. Consolidate RDT&E functions on fewer installations through inter-service support

57. Establish a single inventory control point (ICP) within each Service or consolidating into joint ICPs. Application: Supply and Storage

58. Expand Guard and Reserve force integration with the Active force. Examples: (1) Blended organizations. (2) Reserve Associate, Guard Associate, and Active Associate. (3) Sponsored Reserve. (4) Blending of Guard units across state lines to unify mission areas, reduce infrastructure, and improve readiness. Application: MilDeps

59. Consolidate National Capital Region (NCR) intelligence community activities now occupying small government facilities and privately owned leased space to fewer, secure DoD-owned locations in the region. Application: Intel

60. Collocate Guard and Reserve units at active bases or consolidate the Guard and Reserve units that are located in close proximity to one another at one location if practical, i.e., joint use facilities. Application: MilDeps

61. Consolidate the Army’s five separate Active Component recruit training sites and BENS; Application: Supply and Storage, MilDeps

63. Privatize long-haul communications in the Defense Information Systems Agency

64. Collocate Joint Strike Fighter graduate flight training and maintenance training.

65. Collocate Joint Strike Fighter graduate flight training.

Related GAO Products

Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.

Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005.
Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.

Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004.

Military Base Closures: DOD’s Updated Net Savings Estimate Remains Substantial. GAO-01-971. Washington, D.C.: July 31, 2001.

Military Bases: Lessons Learned from Prior Base Closure Rounds. GAO/NSIAD-97-151. Washington, D.C.: July 25, 1997.

Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment. GAO/NSIAD-95-133. Washington, D.C.: April 14, 1995.

Infrastructure and Environment: Technical Joint Cross-Service Group Data Integrity and Internal Control Processes for Base Realignment and Closure 2005. D-2005-086. Washington, D.C.: June 17, 2005.

Defense Infrastructure: Education and Training Joint Cross-Service Group Data Integrity and Internal Control Processes for Base Realignment and Closure 2005. D-2005-084. Arlington, Va.: June 10, 2005.

Defense Infrastructure: Industrial Joint Cross-Service Group Data Integrity and Internal Control Processes for Base Realignment and Closure 2005. D-2005-082. Arlington, Va.: June 9, 2005.

Infrastructure and Environment: Washington Headquarters Services Data Call Submissions and Internal Control Processes for Base Realignment and Closure 2005. D-2005-079. Arlington, Va.: June 8, 2005.

Defense Infrastructure: Supply and Storage Joint Cross-Service Group Data Integrity and Internal Control Processes for Base Realignment and Closure 2005. D-2005-081. Arlington, Va.: June 6, 2005.

Infrastructure and Environment: Defense Finance and Accounting Service Data Call Submissions and Internal Control Processes for Base Realignment and Closure 2005. D-2005-075. Arlington, Va.: May 27, 2005.
DOD Inspector General plans to issue reports on the Defense Logistics Agency, the Headquarters and Support Activities Joint Cross-Service Group, and the Medical Joint Cross-Service Group.

Reserve Component Process Action Team, The Army Basing Study 2005. A-2005-0165-ALT. Alexandria, Va.: April 29, 2005.

The Army Basing Study 2005 Process. A-2005-0164-ALT. Alexandria, Va.: April 22, 2005.

Validation of Army Responses for Joint Cross-Service Group Questions. A-2005-0169-ALT. Alexandria, Va.: April 22, 2005.

Army Military Value Data, The Army Basing Study 2005. A-2005-0083-ALT. Alexandria, Va.: December 21, 2004.

Army Capacity Data, The Army Basing Study 2005. A-2005-0056-ALT. Alexandria, Va.: November 30, 2004.

Cost of Base Realignment Actions (COBRA) Model. A-2004-0544-IMT. Alexandria, Va.: September 30, 2004.

The Department of the Navy’s Implementation of the FY 2005 Base Realignment and Closure Process. N2005-0046. Washington, D.C.: June 10, 2005.

Risk Assessment of the Department of the Navy Base Realignment and Closure 2005 Information Transfer System. N2005-0042. Washington, D.C.: April 25, 2005.

Base Realignment and Closure Optimization Methodology. N2004-0058. Washington, D.C.: June 16, 2004.

BRAC Cueing and Analysis Tools. F2005-0007-FB2000. Washington, D.C.: June 22, 2005.

2005 Base Realignment and Closure-Installation Visualization Tool Data Reliability. F2005-0004-FB4000. Washington, D.C.: June 16, 2005.

Base Realignment and Closure Data Collection System. F2004-0008-FB40000. Washington, D.C.: September 27, 2004.

2005 Base Realignment and Closure: Installation Capacity Analysis Questionnaire. F2004-0007-FB4000. Washington, D.C.: August 24, 2004.

2005 Base Realignment and Closure: Installations Inventory. F2004-0005-FB4000. Washington, D.C.: April 12, 2004.

2005 Base Realignment and Closure: Air Force Internal Control Plan. F2004-0001-FB4000. Washington, D.C.: December 29, 2003.
The Air Force Audit Agency plans to release seven additional reports on the Air Force and joint cross-service group data collection, the Air Force analysis, and the use of various BRAC tools.

In addition to the individual named above, Mike Kennedy, Jim Reifsnyder, Nelsie Alcoser, Shawn Arbogast, Raymond Bickert, Alissa Czyz, Andrew Edelson, Glenn Knoepfle, Nancy Lively, Warren Lowman, Tom Mahalek, David Mayfield, Richard Meeks, Hilary Murrish, Charles Perdue, Robert Poetta, James Reynolds, Laura Talbott, and Cheryl Weissman made key contributions to this report. Other individuals also contributing to this report included Tommy Baril, Carl Barden, Angela Bourciquot, Steve Boyles, Delaney Branch, Joel Christenson, Kenneth Cooper, Paul Gvoth, Larry Junek, Mark Little, Philip Longee, Ricardo Marquez, Gary Phillips, Greg Pugnetti, Sharon Reid, John Strong, Roger Tomlinson, and Kimberly Young.
On May 13, 2005, the Secretary of Defense submitted proposed base realignment and closure (BRAC) actions to an independent commission for its review. The Commission must submit its recommendations to the President by September 8, 2005, for his acceptance or rejection in their entirety. Later this year, Congress will have the final opportunity to accept or reject these recommendations in their entirety. The law requires that GAO issue a report on the Department of Defense's (DOD) recommendations and selection process by July 1, 2005. GAO's objectives were to (1) determine the extent to which DOD's proposals achieved its stated BRAC goals, (2) analyze whether the process for developing recommendations was logical and reasoned, and (3) identify issues with the recommendations that may warrant further attention. Time constraints limited GAO's ability to examine implementation details of most of the individual recommended actions.

DOD had varying success in achieving its 2005 BRAC goals of (1) reducing excess infrastructure and producing savings, (2) furthering transformation, and (3) fostering jointness. While DOD proposed a record number of closures and realignments, exceeding all prior BRAC rounds combined, many proposals focused on reserve bases and relatively few on closing active bases. Projected savings are likewise substantial, but most savings are derived from 10 percent of the recommendations. While GAO believes savings would be achieved, overall up-front investment costs of an estimated $24 billion are required, and there are clear limitations associated with DOD's projection of nearly $50 billion in savings over a 20-year period. Much of the projected net annual recurring savings (47 percent) is associated with eliminating jobs currently held by military personnel. However, rather than reducing end-strength levels, DOD indicates the positions are expected to be reassigned to other areas, which may enhance capabilities but also limits the dollar savings available for other uses.
Sizeable savings were projected from efficiency measures and other actions, but underlying assumptions have not been validated and could be difficult to track over time. Some proposals represent efforts to foster jointness and transformation, such as initial joint training for the Joint Strike Fighter, but progress in each area varied, with many decisions reflecting consolidations within, and not across, the military services. In addition, transformation was often cited as support for proposals, but it was not well defined, and there was a lack of agreement on various transformation options. DOD's process for conducting its analysis was generally logical, reasoned, and well documented. DOD's process placed strong emphasis on data, tempered by military judgment, as appropriate. The military services and seven joint cross-service groups, which focused on common business-oriented functions, adapted their analytical approaches to the unique aspects of their respective areas. Yet, they were consistent in adhering to the use of military value criteria, including new considerations introduced for this round, such as surge and homeland defense needs. Data accuracy was enhanced by the required use of certified data and by efforts of the DOD Inspector General and service audit agencies in checking the data. Time limitations and complexities introduced by DOD in weaving together an unprecedented 837 closure and realignment actions across the country into 222 individual recommendations caused GAO to focus more on evaluating major cross-cutting issues than on implementation issues of individual recommendations. GAO identified various issues that may warrant further attention by the Commission. Some apply to a broad range of recommendations, such as assumptions and inconsistencies in developing certain cost and savings estimates, lengthy payback periods, or potential impacts on affected communities. 
GAO also identified certain candidate recommendations that may warrant attention, including some that senior DOD leadership changed late in the process.
VA’s integrated health care delivery system is one of the largest in the United States and provides enrolled veterans, including women veterans, with a range of services including primary and preventive health care services, mental health services, inpatient hospital services, long-term care, and prescription drugs. VA’s health care system is organized into 21 VISNs that include VAMCs and CBOCs. VAMCs offer outpatient, residential, and inpatient services. These services range from primary care to complex specialty care, such as cardiac and spinal cord injury care. VAMCs also offer a range of mental health services, including outpatient counseling services; residential programs, which provide intensive treatment and rehabilitation services with supported housing for conditions such as PTSD, MST, or substance use disorders; and inpatient psychiatric treatment. CBOCs are an extension of VAMCs and provide outpatient primary care and general mental health services on site. VA also operates 232 Vet Centers, which offer readjustment and family counseling, employment services, bereavement counseling, and a range of social services to assist combat veterans in readjusting from wartime military service to civilian life. When VA facilities are unable to efficiently provide certain health care services on site, they are authorized to enter into agreements with non-VA providers to ensure veterans have access to medically necessary services. Specifically, VA facilities can make services available through referral of patients to other VA facilities or use of telehealth services; sharing agreements with university affiliates or the Department of Defense; contracts with providers in the local community; or allowing veterans to receive care from providers in the community who will accept VA payment (commonly referred to as fee-basis care). Federal law authorizes VA to provide medically necessary health care services to eligible veterans, including women veterans.
Federal law also specifically requires VA to provide mental health screening, counseling, and treatment for eligible veterans who have experienced MST. Although the MST law applies to all veterans, it is of particular relevance to women veterans because among women veterans screened by VA for MST, 21 percent screened positive for experiencing MST. VA provides health care services to veterans through its medical benefits package—health care services required to be provided are broadly stated in a regulation and further specified in VA policies. Through policies, VA requires its health care facilities to make certain services, including gender-specific services and primary care services, available to eligible women veterans. Gender-specific services that are included in the VA medical benefits package include, for example, cervical cancer screening, breast examination, management of menopause, mammography, obstetric care, and infertility evaluation. See table 1 for a list of selected basic and specialized gender-specific services that VA is required to make available and others that VA may make available to women veterans. In November 2008, VA established a policy that requires all VAMCs and CBOCs to move toward making comprehensive primary care available for women veterans. VA defines comprehensive primary care for women veterans as the availability of complete primary care—including routine detection and management of acute and chronic illness, preventive care, basic gender-specific care, and basic mental health care—from one primary care provider at one site. VA did not establish a deadline by which VAMCs and CBOCs must meet this requirement. VA policies also outline a number of requirements specific to ensuring the privacy of women veterans in all settings of care at VAMCs and CBOCs. 
These include requirements related to ensuring auditory and visual privacy at check-in and in interview areas; the location of exam rooms, presence of privacy curtains, and the orientation of exam tables; access to private restrooms in outpatient, inpatient, and residential settings of care; and the availability of sanitary products in public restrooms at VA facilities. In 1991, VA established the position of Women Veteran Coordinator—now the WVPM—to ensure that each VAMC had an individual responsible for assessing the needs of women veterans and assisting in the planning and delivery of services and programs to meet those needs. Begun as a part-time collateral position, the WVPM is now a full-time position at all VAMCs. In July 2008, VA required VAMCs to establish the WVPM as a full-time position (no longer a collateral duty) no later than December 1, 2008. Clinicians in the role of WVPM would be allowed to perform clinical duties to maintain their professional certification, licensure, or privileges, but must limit the time to the minimum required, typically no more than 5 hours per week. In September 2008, VA issued the Uniform Mental Health Services in VA Medical Centers and Clinics, a policy that specifies the mental health services that must be provided at each VAMC and CBOC. The purpose of this policy is to ensure that all veterans, wherever they obtain care in VA’s health care system, have access to needed mental health services. The policy lists the mental health care services that must be delivered on site or made available by each facility. To help ensure that mental health staff can provide these services, VA has developed and rolled out evidence-based psychotherapy training programs for VA staff that treat patients with PTSD, depression, and serious mental illness.
VA’s training programs cover five evidence-based psychotherapies: Cognitive Processing Therapy (CPT) and Prolonged Exposure (PE), which are recommended for PTSD; Cognitive Behavioral Therapy (CBT) and Acceptance and Commitment Therapy (ACT), which are recommended for depression; and Social Skills Training (SST), which is recommended for serious mental illness. The training programs involve two components: (1) attendance at an in-person, experientially based workshop (usually 3-4 days long), and (2) ongoing telephone-based small-group consultation on actual therapy cases with a consultant who is an expert in the psychotherapy. The VA facilities we visited provided basic gender-specific and outpatient mental health services to women veterans on site, and some facilities also provided specialized gender-specific or mental health services specifically designed for women on site. All of the VAMCs we visited offered at least some specialized gender-specific services on site, and six offered a broad array of these services. Among CBOCs, other than the two largest facilities we visited, most offered limited specialized gender-specific care on site. Women needing obstetric care were always referred to non-VA providers. Regarding mental health care, we found that outpatient services for women were widely available at the VAMCs and most Vet Centers we visited, but were more limited at some CBOCs. Eight of the VAMCs we visited offered mixed-gender inpatient or residential mental health services, and two VAMCs offered residential treatment programs specifically designed for women veterans. Basic gender-specific care services were available on site at all nine of the VAMCs and 8 of the 10 CBOCs that we visited. (See table 2.) These facilities offered a full array of basic gender-specific services for women—such as pelvic examinations and osteoporosis treatment—on site.
One of the CBOCs we visited did not offer any basic gender-specific services on site and another offered a limited selection of these services. These two CBOCs referred patients to other VA facilities for this care, but had plans underway to offer these services on site once providers received needed training. In general, women veterans had access to female providers for their gender-specific care: of the 19 medical facilities we visited, all but 4 had one or more female providers available to deliver basic gender-specific care. The facilities we visited delivered basic gender-specific services in a variety of ways. Seven of the nine VAMCs and the two large CBOCs we visited had women’s clinics. The physical setup of these clinics ranged from a physically separate dedicated clinical space (at five facilities) to one or more designated women’s health providers with designated exam rooms within a mixed-gender primary care clinic. Generally, when women’s clinics were available, most female patients received their basic gender-specific care in those clinics. When women’s clinics were not available, female patients either received their gender-specific care through their primary care provider or were referred to another VA or non-VA facility for these services. Basic gender-specific services were typically available between 8:00 a.m. and 4:30 p.m. on weekdays. At one CBOC and one VAMC, however, basic gender-specific care was only available during limited time frames. At the CBOC, a provider from the affiliated VAMC traveled to the CBOC 2 days each month to perform cervical cancer screenings and pelvic examinations for the clinic’s female patients. In general, medical facilities did not offer evening or weekend hours for basic gender-specific services.
The provision of specialized gender-specific services for women, including treatment after abnormal cervical cancer screenings and breast cancer treatment, varied by service and by facility. (See table 3.) All VA medical facilities referred female patients to outside providers for obstetric care. Some of the VAMCs we visited offered a broad array of other specialized gender-specific services on site, but all contracted or fee-based at least some services. In particular, most VAMCs provided screening and diagnostic mammography through contracts with local providers or fee-based these services. In addition, less than half of the VAMCs provided reconstructive surgery after mastectomy on site, although six of the nine VAMCs we visited provided medical treatment for breast cancers and reproductive cancers on site. In general, the CBOCs we visited offered more limited specialized gender-specific services on site. For example, while most CBOCs offered pregnancy testing and sexually transmitted disease (STD) screening, counseling, and treatment, only the largest CBOCs offered IUD placement on site. Most CBOCs referred patients to VA medical facilities—sometimes as far as 130 miles away—for some specialized gender-specific services. Because the travel distance can be a barrier to treatment for some veterans, officials at some CBOCs said that they will fee-base services to local providers on a case-by-case basis. At both VAMCs and CBOCs, specialized gender-specific services were usually offered on site only during certain hours: for example, four medical facilities only offered these services 2 days per week or less. A range of outpatient mental health services was readily available at the VAMCs we visited. The types of outpatient mental health services available at most VAMCs included, for example, diagnosis and treatment of depression, substance use disorders, PTSD, and serious mental illness.
All of the VAMCs we visited had one or more providers with training in evidence-based therapies for the treatment of PTSD and depression. All but one of the VAMCs we visited offered at least one women-only counseling group. Two VAMCs offered outpatient treatment programs specifically for women who have experienced MST or other traumas. In addition, several VAMCs offered services during evening hours at least 1 day a week. While most outpatient mental health services were available on site, facilities typically fee-based treatment for a veteran with an active eating disorder to non-VA providers. Similarly, the eight Vet Centers we visited offered a variety of outpatient mental health services, including counseling services for PTSD and depression, as well as individual or group counseling for victims of sexual trauma. Five of the eight Vet Centers we visited offered women-only groups, and six had counselors with training or experience in treating patients who have suffered sexual trauma. Vet Centers generally offered some counseling services in the evenings. The outpatient mental health services available in CBOCs were, in some cases, more limited. The two larger CBOCs offered women-only group counseling as well as intensive treatment programs specifically for women who had experienced MST or other traumas, and two other CBOCs offered women-only group counseling. The smaller CBOCs, however, tended to rely on staff from the affiliated VAMC, often through telehealth, to provide mental health services. Five CBOCs provided some mental health services through telehealth or using mental health providers from the VAMC that traveled to the CBOCs on specific days. While most VAMCs offer mixed-gender residential mental health treatment programs or inpatient psychiatric services, few have specialized programs for women veterans. 
Eight of the nine VAMCs we visited served women veterans in mixed-gender inpatient psychiatric units, mixed-gender residential treatment programs, or both. Two VAMCs had residential treatment programs specifically for women who have experienced MST and other traumas. (VA has ten of these programs nationally.) None of the VAMCs had dedicated inpatient psychiatric units for women. VA providers at some facilities expressed concerns about the privacy and safety of women veterans in mixed-gender inpatient and residential environments. For example, in the residential treatment programs, beds for women veterans were separated from other areas of the building by keyless entry systems. However, female residents in some of these programs shared common areas, such as the dining room, with male residents, and providers expressed concerns that women who were victims of sexual trauma might not feel comfortable in such an environment. The extent to which VA medical facilities we visited were following VA policies that apply to the delivery of health care services for women veterans varied, but none of the facilities had fully implemented VA policies pertaining to women veterans’ health care. In particular, none of the VAMCs or CBOCs we visited were fully compliant with VA policy requirements related to privacy for women veterans. In addition, the facilities we visited were in various stages of implementing VA’s new initiative on comprehensive primary care: most medical facilities had at least one provider that could deliver comprehensive primary care services to women veterans, although not all of these facilities were routinely assigning women veterans to these providers. Officials at some VA facilities reported that they were unclear about the specific steps they would need to take to meet VA’s definition of comprehensive primary care for women veterans.
All facilities were fully compliant with at least some of VA’s privacy requirements; however, we documented observations in many clinical settings where facilities were not following one or more requirements. Some common areas of noncompliance included the following: Visual and auditory privacy at check-in. None of the VAMCs or CBOCs we visited ensured adequate visual and auditory privacy at check-in in all clinical settings that are accessed by women veterans. In most clinical settings, check-in desks or windows were located in a mixed-gender waiting room or on a high-traffic public corridor. In some locations, the check-in area was located far enough away from the waiting room chairs that patients checking in for appointments could not easily be overheard. In a total of 12 outpatient clinical settings at six VAMCs and five CBOCs, however, check-in desks were located in close proximity to chairs where other patients waited for their appointments. At one CBOC, we observed a line forming at the check-in window, with several people waiting directly behind the patient checking in, demonstrating how privacy can be easily violated at check-in. Orientation of exam tables. In exam rooms where gynecological exams are conducted, only one of the nine VAMCs and two of the eight CBOCs we visited were fully compliant with VA’s policy requiring exam tables to face away from the door. In many clinical settings that were not fully compliant at the remaining facilities, we observed that exam tables were oriented with the foot of the table facing the door, and in two CBOCs where exam tables were not properly oriented, there was no privacy curtain to help assure visual privacy during women veterans’ exams. At one of these CBOCs, a noncompliant exam room was also located within view of a mixed-gender waiting room. Figure 1 shows the correct and incorrect orientation of exam tables in two gynecological exam rooms at two VA medical facilities. Restrooms adjacent to exam rooms. 
Only two of the nine VAMCs and one of the eight CBOCs we visited were fully compliant with VA’s requirement that exam rooms where gynecological exams are conducted have immediately adjacent restrooms. In most of the outpatient clinics we toured, a woman veteran would have to walk down the hall to access a restroom, in some cases passing through a high-traffic public corridor or a mixed-gender waiting room. Access to private restrooms in inpatient and residential units. At four of the nine VAMCs we visited, proximity of private restrooms to women’s rooms on inpatient or residential units was a concern. In one mixed-gender inpatient medical/surgical unit, two mixed-gender residential units, and one all-female residential unit, women veterans were not guaranteed access to a private bathing facility and may have had to use a shared or congregate facility. In two of these four settings, access to the shared restroom was not restricted by a lock or a keycard system, raising concerns about the possibility of intrusion by male patients or staff while a woman veteran is showering or using the restroom. Availability of sanitary products in public restrooms. At seven of the nine VAMCs and all 10 of the CBOCs we visited, we did not find sanitary napkins or tampons available in dispensers in any of the public restrooms. VA has not set a deadline by which all VAMCs and CBOCs are required to implement VA’s new comprehensive primary care initiative for women veterans, which would allow women veterans to obtain both primary care and basic gender-specific services from one provider at one site. Officials at the VA medical facilities we visited since the comprehensive primary care for women veterans initiative was introduced reported that they were at various stages of implementing the new initiative. 
Officials at 6 of the 7 VAMCs and 6 of the 8 CBOCs we visited since November 2008—when VA adopted this initiative—reported that they had at least one provider who could deliver comprehensive primary care services to women veterans. However, some of the medical facilities we visited reported that they were not routinely assigning women veterans to comprehensive primary care providers. Officials at some medical facilities we visited were unclear about the steps needed to implement VA’s new policy on comprehensive primary care for women veterans. For example, at one VAMC, primary care was offered in a mixed-gender primary care clinic and basic gender-specific services were offered by a separate appointment in the gynecology clinic, sometimes on the same day. The new comprehensive primary care initiative would require both primary care and basic gender-specific services to be available on the same day, during the same appointment. Officials at this facility said that they were in the process of determining whether they can adapt their current model to meet VA’s comprehensive primary care standard by placing additional primary care providers in the gynecology clinic so that both primary care services and basic gender-specific services could be offered during the same appointment, in one location. Facility officials were uncertain about whether it would meet VA’s comprehensive primary care standard if primary care and basic gender-specific services were still delivered by two different providers. However, VA’s comprehensive primary care policy is clear that the care is to be delivered by the same provider. Another area of uncertainty is the breadth of experience a provider would need to meet VA’s comprehensive primary care standard.
Officials from VA headquarters have made it clear that it is their expectation that comprehensive primary care providers have a broad understanding of basic women’s health issues—including initial evaluation and treatment of pelvic and abdominal pain, menopause management, and the risks associated with prescribing certain drugs to pregnant or lactating women. However, in one location, we found that the only provider who was available to deliver comprehensive primary care may not have had the proficiency to deliver the broad array of services that are included in VA’s definition, because the facility serves a very low volume of women veterans and opportunities to practice delivering some basic gender-specific services are limited. VA officials at medical facilities we visited identified a number of key challenges in providing health care services to women veterans. These challenges include physical space constraints that affect the provision of care, including problems complying with patient privacy requirements, and difficulties hiring providers that have specific experience and training in women’s health, as well as hiring mental health providers with expertise in treating veterans with PTSD and who have experienced MST. Officials at some VA medical facilities also reported implementation issues in establishing the WVPM as a full-time position. Officials at VA medical facilities we visited reported that space constraints have raised issues affecting the provision of health care services to women veterans. In particular, officials at 7 of 9 VAMCs and 5 of 10 CBOCs we visited said that space issues, such as the number, size, or configuration of exam rooms or bathrooms at their facilities sometimes made it difficult for them to comply with some VA requirements related to privacy for women veterans. 
At some of the medical facilities we visited, officials raised concerns about busy waiting rooms and the limited space available to provide separate waiting rooms for patients who may not feel comfortable in a mixed-gender waiting room, particularly women veterans who have experienced MST. Officials at one CBOC said they received complaints from women veterans who preferred a separate waiting room. At this facility, space challenges that affected privacy were among the factors that led to the relocation of mental health services to a separate off-site clinic. VA facility officials told us that some of the patient bedrooms at two VAMC mixed-gender inpatient psychiatric units that were usually designated for female patients were located in space that could not be adequately monitored from the nursing station. VA policy requires that all inpatient care facilities provide separate and secured sleeping accommodations for women and that mixed-gender units must ensure safe and secure sleeping arrangements, including, but not limited to, the ability to monitor the patient bedrooms from the nursing station. VA facility officials also told us they have struggled with space constraints as they work to comply with VA’s new policy on comprehensive primary care for women and the requirements in the September 2008 Uniform Mental Health Services in VA Medical Centers and Clinics, as well as the increasing numbers of women veterans requesting these services. For example, officials at a VAMC said that limitations in the number of primary care exam rooms at their facilities made it difficult for providers to deliver comprehensive primary care services in an efficient and timely manner. Providers explained that having only one exam room per primary care provider prevents them from “multitasking,” or moving back and forth between exam rooms while patients are changing or completing intake interviews with nursing staff. 
Similarly, mental health providers at a medical facility said that they often shared offices, which limited the number of counseling appointments they could schedule, and primary care providers sometimes had two patients in a room at the same time, separated by a curtain, during the intake or screening process. In addition, at one VAMC, officials reported that the facility needed to be two to three times its current size to accommodate increasing patient demand. VA officials are aware of these challenges and VA is taking steps to address them, such as funding construction projects, moving to larger buildings, and opening additional CBOCs. However, some of these projects will not be finished for a few years. In the interim, officials said, some facilities are leasing additional space or contracting some services to community providers. VA facility officials reported difficulties hiring primary care providers with specific training and experience in women’s health. VA’s comprehensive primary care initiative requires that women veterans have access to a designated women’s health primary care provider who is “proficient, interested, and engaged” in delivering services to women veterans. The new policy requires that this primary care provider deliver a broad array of health care services including, but not limited to, detection and management of acute and chronic illness, such as osteoporosis, thyroid disease, and cancer of the breast, cervix, and lung; gender-specific primary care such as sexuality, pharmacologic issues related to pregnancy and lactation, and vaginal infections; preventive care, such as cancer screening and weight management; mental health services such as screening and referrals for MST, as well as evaluation and treatment of uncomplicated mental health disorders and substance use disorders; and coordination of specialty care.
Officials at some facilities we visited told us that they would like to hire more providers with the required knowledge and experience in women’s health, but struggle to do so. For example, at one VAMC, officials reported that they had difficulty filling three vacancies for primary care providers, which they needed to meet the increasing demand for services and to replace staff who had retired. They said it took them a long time to find providers with the skills required to serve the needs of women veterans. Similarly, at one CBOC, officials reported that it takes them about 8 to 9 months to hire interested primary care physicians. Further, officials at some facilities we visited said that they rely on just one or two providers to deliver comprehensive primary care to women veterans. This is a concern to the officials because, should the provider retire or leave VA, the facility might not be able to replace them relatively quickly in order to continue to provide comprehensive primary care services to women veterans on site. VA officials have acknowledged some of the challenges involved in training additional primary care providers to meet their vision of delivering comprehensive primary care to women veterans. A November 2008 report on the provision of primary care to women veterans cites insufficient numbers of clinicians with specific training and experience in women’s health issues among the challenges VA faces in implementing comprehensive primary care. To help address the knowledge gap, VA is using “mini-residency” training sessions on women’s health. These training sessions—which VA designed to enhance the knowledge and skills of primary care providers—consist of two and one-half days of case-based learning and hands-on training in gender-specific health care for women. During the mini-residency, providers receive specific training in performing pelvic examinations, cervical cancer screenings, clinical breast examinations, and other relevant skills. 
VA medical facility and Vet Center officials reported challenges hiring psychiatrists, psychologists, and other mental health staff with specialized training or experience in treating PTSD and MST. Medical facility officials often noted that there is a limited pool of qualified psychiatrists and psychologists, and a high demand for these professionals both in the private sector and within VA. In addition, two officials reported that because it is difficult to attract and hire mental health professionals with experience in treating the veteran population, some medical facilities have hired younger, less experienced providers. These officials noted that while younger providers may have the appropriate education and training in some evidence-based psychotherapy treatment methods that are recommended for treating PTSD and MST, they often lack practical experience treating a challenging patient population. Some officials reported that staffing and training challenges limit the types of group or individual mental health treatment services that VA medical facilities and Vet Centers can offer. For example, officials at one VAMC said that they had problems attracting qualified mental health providers to work at its affiliated CBOCs. The facility posted announcements for psychiatrist and psychologist positions, but sometimes received no applications. Because the facility has not been able to recruit mental health providers, it relies on contract providers and fee-basing to deliver mental health services to veterans in its service area. At one Vet Center, officials told us that because none of their counselors have been trained to counsel veterans who have experienced MST, patients seeking counseling for MST are usually referred to the nearby CBOC or VAMC. At one CBOC, a licensed social worker reported that he provides individual counseling for about seven women who have experienced MST, even though he has limited training in this area. 
He said that this situation was not ideal, but said that he consults with mental health providers at the associated VAMC on some of these cases, and that without his services some of these women might not receive any counseling. VA officials told us that they are aware of the challenges involved in finding clinical staff with specialized training and experience in working with veterans who have PTSD or have experienced MST. A VA official told us that as part of a national effort to enhance mental health providers’ knowledge of clinically effective treatment methods and make these methods available to veterans, VA has developed evidence-based psychotherapy training for VA mental health staff. In particular, CPT, PE, and ACT are evidence-based treatment therapies for PTSD and also commonly used by providers who work with patients who have experienced MST. A VA headquarters official who is responsible for these training programs told us that as of May 4, 2009, 1,670 VA clinicians had completed VA-provided training in evidence-based therapies. Although VA is providing training in these evidence-based therapies, VA officials stated that this training is not mandatory for VA mental health providers who work with patients who have PTSD or have experienced MST. Some VA officials expressed concerns that certain aspects of the new policy making the WVPM a full-time position may have the unintended consequence of discouraging clinicians from applying for or staying in the position, potentially leading to the loss of experienced WVPMs. One concern that some WVPMs raised during our interviews was that they were interested in performing clinical duties beyond the minimum required to maintain their professional certification, but would not be able to do so under the new policy. The new policy limits a WVPM’s clinical duties to the minimum required to maintain professional certification, licensure, or privileges, typically no more than 5 hours per week.
Another concern was that the change to full-time status could result in a reduction in salary for some clinicians because the position could be classified as an administrative position, depending on how the policy is implemented at the VAMC. At two VAMCs we visited, such concerns had discouraged the incumbent WVPM from accepting the full-time position. VA headquarters officials told us that they are aware of the potential unintended consequences of the new policy and have expressed their concerns to VA senior leadership. VA headquarters officials provided VISN and VAMC leadership with some options that they could use to help avoid or minimize the potential loss of experienced WVPMs. For example, one option that could be approved on a case-by-case basis is a job-sharing arrangement in which the incumbent WVPM and another person each dedicate 50 percent of their time to the WVPM position and perform clinical duties the other 50 percent, in order to transition staff into the full-time position or as a succession planning effort. VA headquarters officials said that action on this issue was important because VA does not have the time or resources to train new staff to replace experienced WVPMs who may leave their positions. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other Members of the committee have at this time. For further information about this testimony, please contact Randall Williamson at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made major contributions to this testimony are listed in appendix II. We selected locations for our site visits using VA data on each VA medical center (VAMC) in the United States.
Our goal was to identify a geographically diverse mix of facilities, including some facilities that provide services to a high volume of women veterans, particularly women veterans of Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF); some facilities that serve a high proportion of National Guard or Reserve veterans; and some facilities that serve rural veterans. We also considered whether VAMCs had programs specifically for women veterans, particularly treatment programs for post-traumatic stress disorder (PTSD) and for women who have experienced military sexual trauma (MST). For each of the factors listed below, we examined available facility- or market-level data to identify facilities of interest:
- total number of unique women veteran patients using the VAMC;
- total number of unique OEF/OIF women veteran patients using the VAMC;
- proportion of unique women veterans using the VAMC who are OEF/OIF veterans;
- proportion of unique OEF/OIF women veterans using the VAMC who were discharged from the National Guard or Reserves;
- within the VA-defined market area for the VAMC, the proportion of women veterans who use VA health care and live in rural or highly rural areas; and
- availability of on-site programs specific to women veterans, such as inpatient or residential treatment programs that offer specialized treatment for women veterans with PTSD or who have experienced MST (including programs that are for women only or have an admission cycle that includes only women) and outpatient treatment teams with a specialized focus on MST.
We selected a judgmental sample of the VAMCs that fell into the top 25 facilities for at least two of these factors. Once we had selected these VAMCs, we also selected at least one community-based outpatient clinic (CBOC) affiliated with each of the VAMCs and one nearby Vet Center, which we also visited during our site visits.
In selecting these CBOCs and Vet Centers, we focused on selecting facilities that represented a range of sizes, in terms of the number of women veterans they served. Tables 5 and 6 provide information on the unique number of women veterans served by each of the VAMCs and CBOCs we selected for site visits. In addition to the contact named above, Marcia A. Mann, Assistant Director; Susannah Bloch; Chad Davenport; Alexis MacDonald; and Carmen Rivera-Lowitt made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Historically, the vast majority of VA patients have been men, but that is changing. VA provided health care to over 281,000 women veterans in 2008--an increase of about 12 percent since 2006--and the number of women veterans in the United States is projected to increase by 17 percent between 2008 and 2033. Women veterans seeking care at VA medical facilities need access to a full range of health care services, including basic gender-specific services--such as cervical cancer screening--and specialized gender-specific services--such as treatment of reproductive cancers. This testimony, based on ongoing work, discusses GAO's preliminary findings on (1) the on-site availability of health care services for women veterans at VA facilities, (2) the extent to which VA facilities are following VA policies that apply to the delivery of health care services for women veterans, and (3) key challenges that VA facilities are experiencing in providing health care services for women veterans. GAO reviewed applicable VA policies, interviewed officials, and visited 19 medical facilities--9 VA medical centers (VAMC) and 10 community-based outpatient clinics (CBOC)--and 8 Vet Centers. These facilities were chosen based in part on the number of women using services and whether facilities offered specific programs for women. The results from these site visits cannot be generalized to all VA facilities. GAO shared this statement with VA officials, and they generally agreed with the information presented. The VA facilities GAO visited provided basic gender-specific and outpatient mental health services to women veterans on site, and some facilities also provided specialized gender-specific or mental health services specifically designed for women on site. Basic gender-specific services, including pelvic examinations, were available on site at all nine VAMCs and 8 of the 10 CBOCs GAO visited. 
Almost all of the medical facilities GAO visited offered women veterans access to one or more female providers for their gender-specific care. The availability of specialized gender-specific services for women, including follow-up treatment after abnormal cervical cancer screenings and treatment for breast cancer, varied by service and facility. All VA medical facilities refer female patients to non-VA providers for obstetric care. Some of the VAMCs GAO visited offered a broad array of other specialized gender-specific services on site, but all contracted or fee-based at least some services. Among CBOCs, the two largest facilities GAO visited offered an array of specialized gender-specific care on site; the other eight referred women to other VA or non-VA facilities for most of these services. Outpatient mental health services for women were widely available at the VAMCs and most Vet Centers GAO visited, but were more limited at some CBOCs. While the two larger CBOCs offered group counseling for women and services specifically for women who have experienced sexual trauma in the military, the smaller CBOCs tended to rely on VAMC staff, often through videoconferencing, to provide mental health services. The extent to which the VA medical facilities GAO visited were following VA policies that apply to the delivery of health care services for women veterans varied, but none of the facilities had fully implemented these policies. None of the VAMCs and CBOCs GAO visited were fully compliant with VA policy requirements related to privacy for women veterans in all clinical settings where those requirements applied. For example, many of the medical facilities GAO visited did not have adequate visual and auditory privacy in their check-in areas.
Further, the facilities GAO visited were in various stages of implementing VA's new initiative to provide comprehensive primary care for women veterans, but officials at some VAMCs and CBOCs reported that they were unclear about the specific steps they would need to take to meet the goals of the new policy. Officials at facilities that GAO visited identified a number of challenges they face in providing health care services to the increasing numbers of women veterans seeking VA health care. One challenge was that space constraints have raised issues affecting the provision of health care services. For example, the number, size, or configuration of exam rooms or bathrooms sometimes made it difficult for facilities to comply with VA requirements related to privacy for women veterans. Officials also reported challenges hiring providers with specific training and experience in women's health care and in mental health care, such as treatment for women veterans with post-traumatic stress disorder or who had experienced military sexual trauma.
The D.C. Family Court Act of 2001 fundamentally changed the way the Superior Court handled its family cases. One of the central organizing principles for establishing the Family Court was the one family/one judge case management concept, whereby the same judge handles all matters related to one family. To support the Family Court’s transition, a total of about $30 million in federal funds was budgeted for fiscal years 2002 through 2004. Several federal and District laws set timeframes for handling abuse and neglect proceedings. The D.C. Family Court Act of 2001, which consolidated all abuse and neglect cases in the Family Court, required that all pending abuse and neglect cases assigned to judges outside the Family Court be transferred to the Family Court by October 2003. Additionally, ASFA requires each child to have a permanency hearing within 12 months of the child’s entry into foster care, defined as the earlier of the following two dates: (1) the date of the first judicial finding that the child has been subjected to child abuse or neglect or (2) the date that is 60 days after the date on which the child is removed from the home. The purpose of the permanency hearing is to decide the goal for where the child will permanently reside and set a timetable for achieving the goal. Permanency may be accomplished through reunification with a parent, adoption, guardianship, or some other permanent placement arrangement. To ensure that abuse and neglect cases are properly managed, the Council for Court Excellence, at the request of Congress, evaluates Family Court data on these cases. It is important that District social service agencies and the Family Court receive and share information they need on the children and families they serve. For example, CFSA caseworkers need to know from the court the status of a child’s case, when a hearing is scheduled, and a judge’s ruling.
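The ASFA "entry into foster care" definition above reduces to a small date computation. The following is an illustrative sketch only; the function names are our own and the statute's 12-month requirement is approximated here as 365 days:

```python
from datetime import date, timedelta

def foster_care_entry_date(first_judicial_finding: date, removal_from_home: date) -> date:
    """ASFA treats entry into foster care as the earlier of (1) the date of the
    first judicial finding of abuse or neglect and (2) the date 60 days after
    the child's removal from the home."""
    return min(first_judicial_finding, removal_from_home + timedelta(days=60))

def permanency_hearing_deadline(entry: date) -> date:
    """A permanency hearing is required within 12 months of entry into foster
    care; the 12 months is approximated here as 365 days."""
    return entry + timedelta(days=365)

# Example: child removed on Jan. 15, 2003; judicial finding on Mar. 1, 2003.
# Removal + 60 days is Mar. 16, 2003, so the earlier date is the finding.
entry = foster_care_entry_date(date(2003, 3, 1), date(2003, 1, 15))
print(entry)                                # 2003-03-01
print(permanency_hearing_deadline(entry))   # 2004-02-29
```

As the example shows, which of the two dates controls depends on how quickly the first judicial finding follows the removal.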
The Family Court needs case history information from caseworkers, such as whether services have been provided and if there is evidence of abuse or neglect. According to District officials, current plans to exchange information between the Superior Court and District agencies and among District agencies are estimated to cost about $66 million, of which about $22 million would support initiatives outlined in the Mayor’s plan issued in July 2002. According to District officials, about $36 million of the $66 million would come from capital funds that are currently available; however, they would need to seek additional funding for the remaining $30 million. The Superior Court’s total cost for the system it is using to help the Court better manage its caseload and automate the exchange of data with District agencies—the Integrated Justice Information System (IJIS)—is expected to be between $20 million and $25 million, depending on the availability of funds for project-related infrastructure improvements and other project initiatives. Funding for this project is being made available through the capital budget of the D.C. Courts, which comprises all components of the District’s judiciary branch. The Family Court met established timeframes for transferring cases into the Family Court and decreased the timeframes for resolving abuse and neglect cases. While the D.C. Family Court Act of 2001 generally required the transfer of abuse and neglect cases to the Family Court by October 2003, it also permitted judges outside the Family Court to retain certain abuse and neglect cases provided that their retention of cases met criteria specified in the D.C. Family Court Act of 2001.
Specifically, these cases were to remain at all times in full compliance with ASFA, and the Chief Judge of the Superior Court must determine that the retention of the case would lead to a child’s placement in a permanent home more quickly than if the case were to be transferred to a judge in the Family Court. In its October 2003 progress report on the implementation of the Family Court, the Superior Court reported that it had transferred all abuse and neglect cases back to the Family Court, with the exception of 34 cases, as shown in table 1. The Chief Judge of the Superior Court said that, as of August 2003, a justification for retaining an abuse and neglect case outside the Family Court had been provided in all such cases. According to the Superior Court, the principal reason for retaining abuse and neglect cases outside the Family Court was a determination made by non-Family Court judges that the cases would close before December 31, 2002, either because the child would turn 21, and thus no longer be under court jurisdiction, or because the case would close with a final adoption, custody, or guardianship decree. In the court’s October 2003 progress report, it stated that the cases remaining outside the Family Court involve children with emotional or educational disabilities. While the Superior Court reported that 4 of the 34 abuse and neglect cases remaining outside the Family Court had closed subsequent to its October 2003 progress report, children in the remaining 30 cases had not yet been placed in permanent living arrangements. On average, children in these 30 cases are 14 years of age and have been in foster care for 8 years, nearly three times the average number of years in care for a child in the District. Table 2 provides additional information on the characteristics of the 30 cases that remained outside the Family Court as of November 2003. 
The Superior Court also reported that the Family Court had closed 620 of the 3,255 transferred cases, or 19 percent. Among the transferred cases closed by the Family Court, 77 percent of the 620 cases closed when the permanency goal was achieved following reunification of the child with a parent, adoption, guardianship, or custody of the child by a designated family member or other individual. In most of the remaining transferred cases that had closed, the child had reached the age of majority, or 21 years of age in the District. Table 3 summarizes the reasons for closing abuse and neglect cases transferred to the Family Court, as of October 2003. In addition to transferred cases, the Family Court is responsible for the routine handling of all newly filed cases. For alleged cases of abuse and neglect, complainants file a petition with the Family Court requesting a review of the allegation. After the filing of the petition, the Family Court holds an initial hearing in which it hears and rules on the allegation. Following the initial hearing, the court may resolve the case through mediation or through a pretrial hearing. Depending on the course of action that is taken and its outcome, several different court proceedings may follow to achieve permanency for children, thereby terminating the court’s jurisdiction. Family Court abuse and neglect proceedings include several key activities, such as adjudication, disposition, and permanency hearings. Figure 1 shows the flow of abuse and neglect cases through the various case activities handled by the D.C. Family Court. Data provided by the court show that in the last 2 years there has been a decrease in the amount of time to begin an adjudication hearing for children in abuse and neglect cases. Figure 2 shows median times to begin hearings for children removed from their homes and for children not removed from their homes. 
As required by District law, the court must begin the hearing within 105 days for children removed from their homes and within 45 days for children not removed from their homes. Between 2001 and 2003, the median time to begin adjudication hearings in cases when a child was removed from home declined by 140 days to 28 days, or about 83 percent. The decline in timeframes to begin hearings was even larger in cases when children remained in their homes. In these cases, median timeframes declined by about 90 percent during this same period, to 12 days. While the reduction in timeframes for these hearings began prior to the establishment of the Family Court, median days to begin hearings for children removed from their homes increased immediately following the court’s establishment before declining again. According to two magistrate judges, the increase in timeframes immediately following establishment of the Family Court was attributable to the volume and complexity of cases initially transferred to it. Similarly, timeframes to begin disposition hearings, a proceeding that occurs after the adjudication hearing and prior to permanency hearings, declined between 2001 and 2003, as shown in figure 3. As required by District law, the court must begin disposition hearings within 105 days for children removed from their homes and within 45 days for children not removed from their homes. The median days to begin disposition hearings for children removed from their homes declined by 202 days to 39 days, or about 84 percent, between 2001 and 2003. The median days to begin disposition hearings for children not removed from their homes declined by 159 days to 42 days, or about 79 percent. Therefore, the Superior Court is also within the timeframes required by D.C. law for these hearings.
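The percentage figures above follow from the reported day counts: in each case the starting median is the final median plus the reported decline. A quick arithmetic check (the helper function is ours, written only to verify the figures in the text):

```python
def pct_decline(drop_in_days: int, final_days: int) -> float:
    """Percent decline when a median fell by drop_in_days to final_days,
    so the starting median was final_days + drop_in_days."""
    start = final_days + drop_in_days
    return 100 * drop_in_days / start

# Adjudication hearings, child removed from home: fell 140 days to 28 days
print(round(pct_decline(140, 28)))  # about 83 percent
# Disposition hearings, child removed from home: fell 202 days to 39 days
print(round(pct_decline(202, 39)))  # about 84 percent
# Disposition hearings, child not removed from home: fell 159 days to 42 days
print(round(pct_decline(159, 42)))  # about 79 percent
```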
While the decline in the timeframes for disposition hearings began prior to the Family Court, according to two magistrate judges we interviewed, the time required to begin these hearings increased in the 7-month period following the establishment of the Family Court because of the complexity of these cases. Despite declines in timeframes to begin adjudication and disposition hearings, the Family Court has not achieved full compliance with the ASFA requirement to hold permanency hearings within 12 months of a child’s placement in foster care. The percentage of cases with timely permanency hearings increased from 25 percent in March 2001 to 55 percent in September 2002, as shown in figure 4. Although the presence of additional magistrate judges, primarily hired to handle cases transferred into the Family Court from other divisions and to improve the court’s timeliness in handling its cases, has increased the Family Court’s ability to process additional cases in a timelier manner, court officials said that other factors have also improved the court’s timeliness. These factors included reminders to judges of upcoming permanency hearing deadlines and the use of uniform court order forms. However, other factors continue to impede the Family Court’s full achievement of ASFA compliance. Some Family Court judges have questioned the adequacy of federal ASFA timelines for permanency, citing barriers external to the court, which increase the time required to achieve permanency. Among these external barriers are lengthy waits for housing, which might take up to a year, and the need for parents to receive mental health services or substance abuse treatment before they can reunite with the child. From January through May 2003, Family Court judges reported that parental disabilities, including emotional impairments and treatment needs, most often impeded children’s reunification with their parents. 
In nearly half of these reported instances, the parent needed substance abuse treatment. Procedural impediments to achieving reunification included the lack of sufficient housing to fully accommodate the needs of the reunified family. With regard to adoption and guardianship, procedural impediments included the need to complete administrative requirements associated with placing children with adoptive families in locations other than the District. Financial impediments to permanency included insufficient adoption or guardianship subsidies. Table 4 provides additional details on impediments to achieving permanency goals. Associate judges we interviewed cited additional factors that impeded the achievement of appropriate foster care placements and timely permanency goals. For example, one judge said that the District’s Youth Services Administration inappropriately placed a 16-year-old boy in the juvenile justice system because CFSA had not previously petitioned a neglect case before the Family Court. As a result, the child experienced a less appropriate and more injurious placement in the juvenile justice system than he would have experienced had he been appropriately placed in foster care. In other cases, an associate judge has had to mediate disputes among District agencies that did not agree with court orders to pay for services for abused and neglected children, further complicating and delaying the process for providing needed services and achieving established permanency goals. To assist the Family Court in its management of abuse and neglect cases, the Family Court transition plan required magistrate judges to preside over abuse and neglect cases transferred from judges in other divisions of the Superior Court, and these judges absorbed a large number of those cases.
In addition, magistrate judges, teamed with associate judges under the one family/one judge concept, had responsibility for assisting the Family Court in resolving all new abuse and neglect cases. Both associate and magistrate judges cited other factors that have affected the court’s ability to fully implement the one family/one judge concept and achieve the potential efficiency and effectiveness that could have resulted. For example, the Family Court’s identification of all cases involving the same child depends on access to complete, timely, and accurate data in IJIS. In addition, Family Court judges said that improvements in the timeliness of the court’s proceedings depend, in part, on the continuous assignment of the same caseworker from CFSA to a case and sufficient support from an assigned assistant corporation counsel from the District’s Office of Corporation Counsel. Family Court judges said that the lack of consistent support from a designated CFSA caseworker and from assigned assistant corporation counsels has, in certain cases, prolonged the time required to conduct court proceedings. In addition, several judges and court officials told us that they do not have sufficient support personnel to allow the Family Court to manage its caseload more efficiently. For example, additional courtroom clerks and court aides could improve case flow and management in the Family Court. These personnel are needed to update automated data, prepare cases for the court, and process court documentation. Under contract with the Superior Court, Booz Allen Hamilton analyzed the Superior Court’s staffing resources and needs; this evaluation found that the former Family Division, now designated as the Family Court, had the highest need for additional full-time positions to conduct its work. Specifically, the analysis found that the Family Court had 154 of the 175 full-time positions needed, or a shortfall of about 12 percent.
Two branches—juvenile and neglect and domestic relations—had most of the identified shortfall in full-time positions. In commenting on a draft of the January 2004 report, the Superior Court said that the Family Court, subsequent to enactment of the D.C. Family Court Act of 2001, hired additional judges and support personnel in excess of the number identified as needed in the Booz Allen Hamilton study to meet the needs of the newly established Family Court. However, several branch chiefs and supervisors we interviewed said the Family Court still needed additional support personnel to better manage its caseload. The Superior Court has decided to conduct strategic planning efforts and re-engineer business processes in the various divisions prior to making the commitment to hire additional support personnel. According to the Chief Judge of the Superior Court, intervening activities, such as the initial implementation of IJIS and anticipated changes in the procurement of permanent physical space for the Family Court, have necessitated a reassessment of how the court performs its work and the related impact of its operations on needed staffing. In September 2003, the Superior Court entered into another contract with Booz Allen Hamilton to reassess resource needs in light of the implementation of the D.C. Family Court Act of 2001. According to the Chief Judge of the Superior Court, as of April 19, 2004, a final report on these resource needs had not been issued. The working relationship between the Family Court and CFSA has improved; however, Family Court judges and CFSA officials noted several hindrances that constrain their working relationship. They have been working together to address some of these hindrances. For example, the Family Court and CFSA participate in various planning meetings. In addition, Family Court judges and CFSA caseworkers have participated in training sessions together.
These sessions provide participants with information about case management responsibilities and various court proceedings, with the intent of improving and enhancing their mutual understanding about key issues. Also, since 2002, Office of Corporation Counsel attorneys have been located at CFSA and work closely with caseworkers—an arrangement that has improved the working relationship between CFSA and the Family Court because the caseworkers and the attorneys are better prepared for court appearances. Further, the Family Court and CFSA communicate frequently about day-to-day operations as well as long-range plans involving foster care case management and related court priorities, and on several occasions expressed their commitment to improving working relationships. To help resolve conflicts about ordering services, Family Court judges and CFSA caseworkers have participated in sessions during which they share information about their respective concerns, priorities, and responsibilities in meeting the needs of the District’s foster care children and their families. Additionally, CFSA assigned a liaison representative to the Family Court who is responsible for working with other District agency liaison representatives to assist social workers and case managers in identifying and accessing court-ordered services for children and their families at the Family Court. The D.C. Family Court Act of 2001 required the District’s Mayor to ensure that representatives of appropriate offices, which provide social services and other related services to individuals and families served by the Family Court, are available on-site at the Family Court to coordinate the provision of such services. A monthly schedule shows that CFSA, the D.C. Department of Health, the D.C. Housing Authority, the D.C. Department of Mental Health, Youth Services Administration, and the D.C. Public Schools have representatives on-site. 
However, the Department of Human Services, the Metropolitan Police Department, and the Income Maintenance Administration are not on-site but provide support from off-site locations. According to data compiled by the liaison office, from February 2003 to March 2004, the office made 781 referrals for services. Of these referrals, 300 were for special education services, 127 were for substance abuse services, and 121 were related to housing needs. Hindrances that constrain the working relationship between the Family Court and CFSA include the need for caseworkers to balance court appearances with other case management duties, an insufficient number of caseworkers, caseworkers who are unfamiliar with cases that have been transferred to them, and differing opinions about the responsibilities of CFSA caseworkers and judges. For example, although CFSA caseworkers are responsible for identifying and arranging services needed for children and their families, some caseworkers said that some Family Court judges overruled their service recommendations. Family Court judges told us that they sometimes made decisions about services for children because they believed caseworkers did not always recommend appropriate ones or provide the court with timely and complete information on the facts and circumstances of the case. Furthermore, the Presiding Judge of the Family Court explained that it was the judges’ role to listen to all parties and then make the best decisions by taking into account all points of view. The D.C. Courts, composed of all components of the District’s judiciary branch, has made progress in procuring permanent space for the Family Court, but all Family Court operations will not be consolidated under the current plan. To prepare space for the new Family Court, the D.C.
Courts designated and redesigned space for the Family Court, constructed chambers for the new magistrate judges and their staff, and relocated certain non-Family Court-related components in other buildings, among other actions. The first phase of the Family Court construction project, scheduled for completion in July 2004, will consolidate Family Court support services and provide additional courtrooms, hearing rooms, and judges’ chambers. In addition, the project will provide an expanded Mayor’s Liaison Office, which coordinates Family Court services for families and provides families with information on such services, and a new family waiting area, among other facilities. However, completion of the entire Family Court construction project, scheduled for late 2009, will require the timely completion of renovations in several court buildings located on the Judiciary Square Campus. Because of the historic nature of some of these buildings, the Superior Court must obtain necessary approvals for exterior modifications from various regulatory authorities, including the National Capital Planning Commission. In addition, some actions may require environmental assessments and their related formal review process. While many of the Family Court operations will be consolidated in the new space, several court functions will remain in other areas. According to the Chief Judge of the Superior Court, the new space will consolidate all public functions of the Family Court and 76 percent of the support functions and associated personnel. The current Family Court space plan is an interim plan leading to a larger plan, intended to fully consolidate all Family Court and related operations in one location, for which the D.C. Courts has requested $6 million for fiscal year 2005 to design Family Court space and $57 million for fiscal year 2006 to construct court facilities. If the D.C. 
Courts does not receive funding for the larger Family Court space plan, it will continue with the current interim plan. The Superior Court and the District of Columbia are exchanging some data and making progress toward developing a broader capability to share data among their respective information systems. In August 2003, the Superior Court began using IJIS to automate the exchange of data with District agencies, such as providing CFSA and the Office of the Corporation Counsel with information on the date, time, and location of scheduled court proceedings. CFSA managers said that scheduling of court hearings has improved. Scheduling information allows caseworkers to plan their case management duties such that they do not conflict with court appearances. Further, the District’s Office of the Chief Technology Officer (OCTO), responsible for leading the information technology development for the District’s data exchange effort, has developed a prototype, or model, to enable the exchange of data among the police department, social service agencies, and the Superior Court. While the District has made progress, it has not yet fully addressed or resolved several critical issues we reported in August 2002. These issues include the need to specify the integration requirements of the Superior Court and District agencies and to resolve privacy restrictions and data quality issues among District agencies. The District is preparing plans and expects to begin developing a data sharing capability and data warehouses to enable data sharing among CFSA, the Department of Human Services’ Youth Services Administration, the Department of Mental Health, and the Family Court in 2004. According to the Program Manager, OCTO will work to resolve the issues we raised in our August 2002 report and incorporate the solutions into its plans. While the Superior Court, the Family Court, and the District have made progress in implementing the D.C. 
Family Court Act of 2001, several issues continue to affect the court’s progress in meeting all requirements of the act. Several barriers, such as a lack of substance abuse services, hinder the court’s ability to more quickly process cases. While the Superior Court and the District have made progress in exchanging information and building a greater capability to perform this function, it remains paramount that their plans fully address several critical issues we previously reported and our prior recommendations. Finally, while progress has been made in enhancing the working relationship between the Family Court and CFSA, this is an area that requires continuous vigilance and improvement in order to ensure the safety and well-being of the District’s children and families. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other members of the committee may have. For further information regarding this testimony, please contact Cornelia M. Ashby at (202) 512-8403. Individuals making key contributions to this testimony include Carolyn M. Taylor, Anjali Tekchandani, and Mark E. Ward. D.C. Family Court: Progress Has Been Made in Implementing Its Transition. GAO-04-234. Washington, D.C.: January 6, 2004. D.C. Child and Family Services: Better Policy Implementation and Documentation of Related Activities Would Help Improve Performance. GAO-03-646. Washington, D.C.: May 27, 2003. D.C. Child and Family Services: Key Issues Affecting the Management of Its Foster Care Cases. GAO-03-758T. Washington, D.C.: May 16, 2003. District of Columbia: Issues Associated with the Child and Family Services Agency’s Performance and Policies. GAO-03-611T. Washington, D.C.: April 2, 2003. District of Columbia: More Details Needed on Plans to Integrate Computer Systems With the Family Court and Use Federal Funds. GAO-02-948. Washington, D.C.: August 7, 2002. 
Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002. D.C. Family Court: Progress Made Toward Planned Transition and Interagency Coordination, but Some Challenges Remain. GAO-02-797T. Washington, D.C.: June 5, 2002. D.C. Family Court: Additional Actions Should Be Taken to Fully Implement Its Transition. GAO-02-584. Washington, D.C.: May 6, 2002. D.C. Family Court: Progress Made Toward Planned Transition, but Some Challenges Remain. GAO-02-660T. Washington, D.C.: April 24, 2002. D.C. Courts: Disciplined Processes Critical to Successful System Acquisition. GAO-02-316. Washington, D.C.: February 28, 2002. District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children’s Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000. Foster Care: Status of the District of Columbia’s Child Welfare System Reform Efforts. GAO/T-HEHS-00-109. Washington, D.C.: May 5, 2000. Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Family Court, established by the D.C. Family Court Act of 2001, was created in part to transition the former Family Division of the D.C. Superior Court into a court solely dedicated to matters concerning children and families. The act required the transfer of abuse and neglect cases by October 2003; the implementation of case management practices to expedite their resolution in accordance with timeframes established by the Adoption and Safe Families Act of 1997 (ASFA); a plan for space, equipment, and other needs; and the integration of the Superior Court’s computer systems with those of other D.C. agencies. The act also reformed court practices and established procedures intended to improve interactions between the court and social service agencies in the District. One such agency, the Child and Family Services Agency (CFSA), is responsible for protecting children at risk of abuse and neglect and ensuring that services are provided for them and their families. Both social service agencies and the courts play an important role in addressing child welfare issues. Representative Tom Davis, Chairman of the House Committee on Government Reform, asked GAO to assess the Family Court's efforts to comply with ASFA requirements and the D.C. Family Court Act of 2001, and its efforts to improve communication with CFSA. The Family Court met timeframes for transferring cases and decreased the timeframes for resolving abuse and neglect cases. As of October 2003, only 34 of the approximately 3,500 cases that were to be transferred to the Family Court from other divisions of the Superior Court remained outside the Family Court. For children removed from their homes, the median time to begin disposition hearings declined by 202 days between 2001 and 2003, from 241 days to 39 days, a decrease of about 84 percent. However, the Family Court has not met the ASFA requirement to hold permanency hearings within 12 months of a child's placement in foster care for all cases. 
Timely permanency hearings were held for 25 percent of cases in March 2001 and 55 percent in September 2002. Support from Family Court judges and top CFSA management has been a key factor in improving the working relationship between CFSA and the Family Court. However, Family Court judges and CFSA officials noted several hindrances that constrain their working relationship. For example, some CFSA caseworkers said that some Family Court judges overruled their service recommendations. Progress has also been made in acquiring permanent space for the Family Court and exchanging data with District agencies. According to the Chief Judge of the Superior Court, all public functions of the Family Court and 76 percent of the support functions will be consolidated in the new space. The construction project is scheduled for completion in 2009 and will require timely renovations in existing court buildings. To comply with the D.C. Family Court Act of 2001, the Superior Court and the District are exchanging some data and making progress toward developing the ability to exchange other data. In August 2003, the Superior Court began using a new computer system and is providing CFSA with information on scheduled court proceedings. Further, the District has developed a model to enable the exchange of data among several District agencies, but it has not yet resolved many critical systems issues.
While parents are primarily responsible for the education and care of children who are younger than school age, a variety of factors have led to an increased demand for early learning and child care programs. For example, workforce participation among mothers with children age 5 and under has generally increased since the 1970s. In addition, initiatives to expand access to preschool have developed at the local, state, and federal level. Federal support for early learning and child care has developed gradually in response to emerging needs. Historically, early learning and child care programs existed separately with separate goals: early learning programs focused on preparing young children for school, while child care programs subsidized the cost of child care for low-income parents who worked or engaged in work-related activities. Over time, the distinction between these two types of programs has blurred somewhat as policymakers seek to make educationally enriching care available to more young children. In addition to costs paid by parents, multiple levels of government contribute funding to support early learning and child care through a loosely connected system of private and public programs. Public financing for early learning and child care in the United States involves multiple funding streams and programs at the federal, state, and local level. A portion of federal support for child care is provided through funding to states, which in turn provide subsidies to low-income families. Within the parameters of federal law, regulations, and guidance, states generally determine their own specific policies concerning the administration of these funds, including who is eligible to receive subsidies, the amount of the subsidies, and the standards that programs must meet. We previously reported that different agencies administer the array of federal early learning and child care programs. 
This report, like our prior work, uses standard definitions to describe fragmentation, overlap, and duplication among government programs: Fragmentation refers to those circumstances in which more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need and opportunities exist to improve service delivery. Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve their goals, or target similar beneficiaries. Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. As we have previously reported, fragmentation, overlap, and duplication exist across many areas of government activity. In some cases it may be appropriate or beneficial for multiple agencies or entities to be involved in the same programmatic or policy area, due to the complex nature or the magnitude of the federal effort. For policymakers considering investments in early learning or child care programs, understanding more about the quality of the program or the results of early learning interventions can be instructive. Evaluative information can help demonstrate whether and why a program is working well or not. We have previously reported that performance assessment is an important way to obtain such information. Performance assessment is also critical to effective program management. Agencies may use different methods to assess the performance of programs. Performance measurement focuses on whether a program has achieved its objectives, expressed as measurable standards, while program evaluations typically examine a broader range of information on program performance and its context. Both forms of assessment aim to support resource allocation and other management decisions. 
Agencies may measure different types of performance information, including the type or level of program activities (process), the direct products and services delivered by a program (output), or the results of those products and services (outcomes). Multiple federal programs may provide or support early learning or child care for children age 5 and under. The federal investment in early learning and child care includes three broad categories of programs: 1. Programs with an explicit early learning or child care purpose: For these programs, early learning or child care is specifically described as a program purpose, according to our analysis of the CFDA and agency documents. This may include some programs that also serve children older than 5 or provide some services outside a formal early learning or child care setting. The Child Care and Development Fund (CCDF), which provides subsidies to low-income working families, is an example of a program in this category. 2. Programs without an explicit early learning or child care purpose: These programs may provide or support early learning or child care; however, early learning or child care is not specifically described as a program purpose in the CFDA or agency documents. According to agency officials, these programs permit, but do not require, using funds for these services. Programs in this category include multipurpose block grants; programs that permit funds to be used for early learning or child care as an ancillary service; and programs that support early learning or child care through food, materials, or other services. Examples of programs without an explicit early learning or child care purpose include Temporary Assistance for Needy Families (TANF), the Workforce Innovation and Opportunity Act Adult program, and the Child and Adult Care Food Program. 3. Tax expenditures that subsidize child care through the tax code: These include tax credits and exclusions that subsidize the private purchase of child care. 
Tax credits allow eligible individuals or employers to reduce their tax liability dollar for dollar. The credits included in this review are nonrefundable and do not offer benefits to individuals or businesses with no tax liability. Exclusions allow individuals to exclude certain compensation from their taxable income and generally provide larger tax savings to those taxed at higher rates. The revenue that the government forgoes through tax expenditures can be viewed as spending channeled through the tax system, which contributes to the overall federal investment. The credit for child and dependent care expenses is an example of such a tax expenditure. Within this framework, we identified 9 programs that have an explicit early learning or child care purpose and another 35 programs that do not have an explicit early learning or child care purpose. In addition to these federally funded programs, we identified three federal tax expenditures that forgo tax revenue to subsidize the private purchase of child care and adult dependent care services. (For a complete list of programs and tax expenditures we identified, see fig. 5 through fig. 14 in appendix II.) Agencies obligated approximately $15 billion in fiscal year 2015, the most recent obligations data available at the time we conducted our review, across the nine programs with an explicit early learning or child care purpose. The vast majority of this funding is concentrated in two programs administered by HHS: Head Start and CCDF. Together, these two programs accounted for over 90 percent of total obligations for programs with an explicit early learning or child care purpose in fiscal year 2015. All other programs with an explicit early learning or child care purpose each obligated less than $500 million in fiscal year 2015 (see table 1). 
All of the programs with an explicit early learning or child care purpose require at least some funds to be used on early learning or child care services, according to agency officials. However, not all funds for these programs are targeted toward children from birth through age 5 in an early learning or child care setting, and agencies noted that they are generally not required to track funds used specifically for these purposes. One exception is the Striving Readers Comprehensive Literacy Program: Education officials collect data on the amount of funds used for pre-literacy activities because grantees must ensure that local providers use 15 percent of funds to serve children from birth through age 5. In fiscal year 2015, all four programs that exclusively target children age 5 or under—Early Intervention Program for Infants and Toddlers with Disabilities, Preschool Development Grants, Preschool Grants for Children with Disabilities, and Head Start—used 100 percent of their funds for early learning or child care services for this population, according to agency officials. In contrast, CCDF, Family and Child Education (FACE), Promise Neighborhoods, and Child Care Access Means Parents in School (CCAMPIS) programs provide services to school-age children in addition to children age 5 and under. Officials told us they were unable to identify the amount of funds used specifically for children age 5 and under for these programs. For example, CCDF serves eligible children under age 13, and while states report the percentage of children served by age, officials told us states are not required to report the amount of CCDF funds spent by age. Not all CCDF funds are used for child care subsidies. In addition to directly subsidizing access to child care services for eligible low-income children, CCDF invests in improving the quality of child care available to families. We recently reported that the majority of children who received CCDF subsidies were under age 5. 
Among programs without an explicit early learning or child care purpose, none require spending on early learning or child care, according to agency officials. Further, agency officials told us that they do not track the amount of funds used for early learning or child care for most of these programs and are not required to do so. Officials from 3 of these 35 programs could identify the amount of funds obligated for early learning and child care purposes. Funding for early learning or child care for these three programs ranged from $2 million to $14 million in fiscal year 2015. Additionally, HHS officials told us that although they do not track program obligations specifically for early learning or child care purposes for the Social Services Block Grant (SSBG) and TANF, they do track state spending on child care. However, HHS does not track spending specifically for services provided to children age 5 and under, according to agency officials. (See table 6 in appendix III for details.) Moreover, although they do not track this information, officials from some programs without an explicit early learning or child care purpose said it is likely that little funding, if any, actually went toward these purposes. For example, officials from the U.S. Department of Agriculture (USDA) told us that the National School Lunch Program and School Breakfast Program are targeted to school-age children, and they did not think very many children under age 5 receive meals from the program. Additionally, Department of Labor officials told us that the Native American Employment and Training Program permits funds to be used for child care services, among other supportive services, to enable parent participation in the program. However, due to the limited grant size, officials said it is likely that most participants only receive referrals to child care providers. 
Much like the agencies with programs that do not have an explicit early learning or child care purpose, Treasury does not estimate the amount of forgone revenue resulting specifically from tax credits or exclusions that support the care of children age 5 and under, according to agency officials. All of the tax expenditures we identified are available for the care of dependent children. The credit for child and dependent care expenses also subsidizes dependent care of individuals who are physically or mentally incapable of self-care, including adults with disabilities or who are elderly. Combined, these tax expenditures accounted for approximately $5.4 billion of forgone federal income tax revenue in fiscal year 2015 (see table 2). This amount, however, includes forgone revenue for care of children older than age 5 and dependent adults, since the available data do not distinguish children and other dependents by age. As we found in 2012, some fragmentation, overlap, and potential duplication exist among early learning and child care programs. The federal investment in early learning and child care is fragmented in that it is administered through multiple agencies. HHS, Education, and Interior administer programs with an explicit early learning or child care purpose. Five additional agencies and one regional commission administer programs without an explicit early learning or child care purpose, and the Internal Revenue Service at Treasury is responsible for administering federal tax expenditures (see table 3). We found some overlap between early learning and child care programs, as some programs target similar beneficiaries (see fig. 1). For example, five of the nine programs with an explicit early learning or child care purpose primarily target children age 5 and under, and four programs target low-income children. 
Despite these general similarities, however, some of these programs target very specific populations that in some cases have limited or no overlap. For example, Preschool Development Grants specifically target 4-year-olds, the Early Intervention Program for Infants and Toddlers with Disabilities targets children with disabilities from birth through age 2, and Preschool Grants for Children with Disabilities targets children with disabilities ages 3 through 5. Other programs target more specific populations, such as children whose low-income parents are pursuing postsecondary education, or children living in certain distressed geographic areas. Additionally, some programs engage in similar activities, according to our analysis of agency-provided information. For example, grantees of Head Start, Preschool Development Grants, and FACE use funds for enrollment slots (spots for individual children to participate in programs on an ongoing basis), health care, and social services or transportation, according to agency officials. However, other programs with an explicit early learning or child care purpose do not fund enrollment. Instead, some programs fund additional services to aid early learning, such as special education services or evaluations (see fig. 2). Despite some similarities in target populations and activities, programs with an explicit early learning or child care purpose often have different goals and administrative structures. For example, while the CCAMPIS program and CCDF both fund child care, they have different goals. The goal of CCAMPIS is to support the participation of low-income parents in postsecondary education. In contrast, CCDF has dual goals of providing child care as a work support to parents and providing children with quality child care to prepare them for success in school, according to HHS. 
Additionally, the two largest programs—Head Start and CCDF—differ significantly from each other both in their goals and in how they are administered. Head Start was created, in part, to support children’s early development by offering comprehensive, community-based services to meet children’s multiple needs and, as such, provides federal grants directly to community-based public and private service providers. In contrast, CCDF was created to help states reduce dependence on public assistance. It provides grants to states to subsidize child care to support parents’ involvement in the workforce. States, in turn, generally provide subgrants to counties or other local entities for distribution to parents. Though some overlap exists among programs with an explicit early learning or child care purpose, the majority of the programs we identified (35 of 44) do not have such a purpose. Overlap between these 35 programs and the 9 programs with an explicit early learning or child care purpose is limited for a number of reasons. For example: Multipurpose block grants can be combined and jointly administered: In their comments regarding our 2012 report, HHS officials noted that many states choose to integrate CCDF, TANF, and SSBG funding streams to provide services. They noted that states jointly administer these funding streams under one set of rules, often in coordination with other state and local funding. Programs that provide child care as an ancillary purpose are not targeted toward children: Overlap between programs that permit funds to be used for child care as an ancillary service and those with an explicit child care purpose is limited given that such programs are not targeted to children and that child care is not among the programs’ objectives. 
For example, six worker training programs authorized by the Workforce Innovation and Opportunity Act allow for grant dollars to pay for child care and other supportive services that are necessary to enable an individual to participate in training and other authorized activities. These programs are not targeted to children age 5 or under—their objectives are to provide individuals with job search assistance and training. Some programs support but do not provide early learning or child care for young children: Other programs we identified support early learning or child care by providing food, materials, or other services. These programs have limited overlap with programs with an explicit early learning or child care purpose because they support such programs rather than provide early learning or care for children age 5 and under. For example, the Child and Adult Care Food Program, which is administered by USDA, reimburses child care centers for the cost of meals and snacks, among other things. Additionally, the General Services Administration donates unused federal property to certain state and local agencies through the Donation of Federal Surplus Personal Property program. Officials from the General Services Administration told us that agencies that receive the donations pass them along to eligible organizations, which may include preschools. They also told us that they do not track the ultimate destination of the donated property. HHS and Education have acknowledged some overlap among early learning and child care programs. In a November 2016 joint report to Congress, HHS and Education identified eight federal programs with a primary purpose of providing early learning for young children. In this report, HHS and Education stated that overlap among early learning and child care programs is purposeful and necessary to meet the needs of children and parents. 
For example, some programs fund enrollment slots and others fund additional services to aid early learning, such as special education services, according to HHS and Education officials. They also noted that families may have multiple needs that require more than one type of service. For instance, families can combine Head Start and CCDF, which allows families to meet children’s learning needs and parents’ child care needs, according to HHS officials. Despite this overlap, there may be service gaps because these programs are not entitlements, and therefore do not serve all eligible children. For example, we recently reported that an estimated 1.5 million children received CCDF subsidies, out of an estimated 8.6 million children who were eligible in their state in an average month in 2011 and 2012. Additionally, Education officials told us that many states have narrowed their eligibility criteria for the Early Intervention Program for Infants and Toddlers with Disabilities because of funding constraints. According to these officials, states are focusing these services on children with the most severe disabilities. There may be potential for duplication among early learning and child care programs insofar as some programs may fund similar types of services for similar populations. However, as we noted in our 2012 review, the extent to which actual duplication exists is difficult to assess at the federal level due to differing program eligibility requirements and data limitations. For example: Program eligibility requirements: Eligibility requirements differ among programs, even for similar subgroups of children, such as those from low-income families. For example, Head Start serves primarily low-income children under age 5 whose families have incomes at or below the official federal poverty guidelines, while CCDF serves children under age 13 whose parents are working or in school and who earn up to 85 percent of state median income. 
Moreover, states have the flexibility to establish specific eligibility policies for CCDF within broad federal eligibility requirements. Given this flexibility, it may be possible for families with similar circumstances to be eligible to receive CCDF subsidies in some states but not others. Additionally, although the CCAMPIS program also provides funding for child care, families must be eligible for Pell Grants, which are need-based federal grants to low-income undergraduate students, in order to receive services. Data limitations: For some programs, relevant programmatic information is sometimes not readily available. For example, as we previously reported, HHS does not collect data on all families who receive child care funded by TANF. This is because some families that do not receive cash assistance receive child care funded through TANF. States are not required to report on families who receive only TANF-funded child care without also receiving cash assistance. This leaves an incomplete picture of the number of children receiving federally funded child care subsidies. We previously suggested requiring additional data collection on families receiving TANF-funded child care, but this information is currently not collected. Additionally, inadequate or missing data, as well as difficulties quantifying the outcomes of some tax expenditures, can make it difficult to study the beneficiaries of these expenditures. While the extent of potential duplication may be difficult to fully assess, some early learning and child care programs include some safeguards against duplication. For example, some programs can use funds to expand access to other programs, thus limiting the likelihood that the same beneficiaries receive the same services from more than one program. Promise Neighborhood grantees can use funds to expand access to Head Start or to establish new child care or preschool options. 
Similarly, Preschool Development Grants can be used to expand the capacity of Head Start to serve more eligible children. Specifically, Preschool Development Grant funds must be used to supplement, not supplant, any federal, state, or local funds, including Head Start and CCDF, among others. In addition, the tax code has limits on combining the credit for child and dependent care and the employer-provided child care exclusion. Taxpayers can claim the credit for child and dependent care if they pay someone to care for a dependent under age 13 or for a spouse or other dependents who are not able to care for themselves. The credit can be up to 35 percent of dependent care expenses with a limit of $3,000 per qualifying person and $6,000 for two or more qualifying persons. The employer-provided child care exclusion, a kind of flexible spending account for dependent care expenses, generally provides participants an opportunity to exclude an amount not to exceed $5,000 for dependent care each year from their gross income. Families using the employer-provided child care exclusion must subtract the amount of those benefits from the maximum they are eligible to receive for the credit for child and dependent care, thereby preventing duplication of benefits. We found that since our 2012 review of early learning and child care programs, HHS and Education have improved coordination among the agencies that administer these programs, which has helped to address potential risks regarding fragmentation, overlap, and potential duplication. As we previously reported, effective coordination can help mitigate the effects of program fragmentation and overlap and potentially help bridge service gaps. 
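The interaction between the dependent care credit and the employer-provided exclusion described above is essentially arithmetic, and can be sketched in a few lines. This is an illustrative simplification only, using the figures cited in this statement (the 35 percent maximum rate, the $3,000/$6,000 expense limits, and the $5,000 exclusion cap); it ignores the income-based phase-down of the credit rate, earned-income limits, and other tax rules, and the function name is ours, not drawn from any tax authority.

```python
def dependent_care_credit(expenses, num_qualifying, excluded_benefits=0.0, rate=0.35):
    """Simplified sketch of the credit for child and dependent care expenses.

    Illustrative assumptions: 35 percent maximum rate, $3,000/$6,000
    expense limits, and a dollar-for-dollar offset for benefits excluded
    under an employer-provided dependent care plan. Ignores the
    income-based rate phase-down and earned-income limits.
    """
    # Expense limit depends on the number of qualifying persons.
    limit = 3000.0 if num_qualifying == 1 else 6000.0
    # Excluded employer-provided benefits (up to $5,000) reduce the
    # expense limit, preventing duplication of the two tax benefits.
    limit = max(0.0, limit - excluded_benefits)
    eligible = min(expenses, limit)
    return round(rate * eligible, 2)  # round to whole cents

# One child, $4,000 of expenses, no employer exclusion:
# expenses are capped at $3,000, so the maximum credit is $1,050.
assert dependent_care_credit(4000, 1) == 1050.0
# Two children, $5,000 excluded through an employer plan:
# the $6,000 limit drops to $1,000, so the credit is at most $350.
assert dependent_care_credit(8000, 2, excluded_benefits=5000) == 350.0
```

The offset in the second example shows why the two benefits cannot be stacked: each excluded dollar removes a dollar of expenses from the credit's base.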
Based upon our analysis, the Early Learning Interagency Policy Board, HHS and Education’s interdepartmental workgroup that focuses on children from birth through age 8, has followed leading practices for interagency collaboration that we have identified, such as defining outcomes, tracking progress toward goals, and including relevant participants across agencies, among others (see table 4). For example, in response to needed actions we identified in 2012, HHS and Education expanded membership of this group to include other agencies with early learning and child care programs. In addition to efforts related to the Interagency Policy Board, HHS and Education have improved coordination in other ways since our 2012 review of early learning and child care programs. For example, they have: Jointly administered the Preschool Development Grants: HHS and Education coordinate to co-administer the Preschool Development Grants program, which began in 2014. Issued joint policy statements: HHS and Education have also issued joint policy statements on a range of early learning and child care issues. Examples include statements on limiting preschool suspensions and expulsions, and strategies to increase the inclusion of young children with disabilities in high-quality early learning programs. Joint policy statements create consistent guidance for local programs, according to HHS and Education officials. Coordinated training and technical assistance: In 2015, HHS redesigned its training and technical assistance system across Head Start and CCDF. Previously, Head Start and CCDF had separate training and technical assistance centers that were operated independently of one another. The new training and technical assistance centers provide services to both Head Start and CCDF. Additionally, HHS and Education coordinated technical assistance by issuing a literature review about strategies to help children maintain the benefits of preschool attendance. 
They also held joint webinars on the use of assistive technology to support young children with disabilities and strategies to limit preschool suspensions and expulsions.

Agencies assess performance for all nine programs with an explicit early learning or child care purpose, and they do so using different methods. The agencies use various combinations of three approaches: performance monitoring, conducting program evaluations or studies, and reviewing other performance information (see fig. 3). Specifically:

Performance monitoring: For all nine programs, agencies reported they monitor performance annually either through performance measures or other annual reviews. Interior monitors performance for the FACE program through an annual review. This review includes implementation data and program outcomes, such as children's proficiency in math and literacy, parenting practices, and integration of native language and culture into FACE program instruction, which officials can use to assess program progress from year to year. For the other eight programs, HHS and Education report results on measurable performance standards in agency congressional budget justifications or other publicly available sources.

Program evaluations or studies: For six of the nine programs, agency officials also periodically conduct internal or contracted program evaluations or studies. Most of the programs that have conducted program evaluations have done so to fulfill a program requirement. For example, the 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA) called for a national assessment to measure the implementation progress and relative effectiveness of the law. Similarly, in response to a congressional mandate in the 1998 reauthorization of Head Start, HHS conducted a national impact evaluation of Head Start and an evaluation of the Early Head Start program.
Additionally, since fiscal year 2000, Congress has appropriated funds through CCDF specifically for research and evaluation, according to agency officials.

Other performance information: Officials from six of the nine programs told us they collect other performance information through grantee-submitted performance reports or other methods. For example, the CCAMPIS program, which funds child care for student parents enrolled in postsecondary institutions, uses grantee-submitted annual reports to collect detailed information about individual student enrollment that provides context for understanding performance measures. Additionally, some programs track information about the cost of certain outcomes.

Agency officials told us they use results from performance monitoring, evaluations, and other performance information to assist in grantee monitoring, determine continued funding to grantees, and develop technical assistance, among other things. While agencies use different methods to assess programs, all agencies that administer programs with an explicit early learning or child care purpose collect performance information that aligns with program objectives to determine progress toward those objectives. For example, the Striving Readers Comprehensive Literacy program aims to advance literacy skills—including pre-literacy skills, reading, and writing—for students from birth through grade 12. The program has four performance measures that assess students' literacy proficiency at age 4, grade 5, grade 8, and again at high school. (See appendix IV for more details on recent program performance for each of the nine explicit-purpose early learning or child care programs.) Additionally, agency officials examine common aspects of performance for many programs with an explicit early learning or child care purpose.
Specifically, we found that many programs had assessments relating to 1) results for children age 5 and under, 2) program quality or teacher qualifications, and 3) academic improvement or kindergarten readiness (see fig. 4). For example:

Children age 5 and under: Eight of the nine programs assess results regarding this age group. For example, for the Early Intervention Program for Infants and Toddlers with Disabilities, Education officials assessed the percentage of children who entered the program with below-age social-emotional skills and who then substantially increased their rate of growth by the time they exited the program.

Program quality or teacher qualifications: Six of the nine programs assess program quality or teacher qualifications. For example, CCDF measures the number of states that implement a systemic approach to assessing, improving, and communicating a child care or education program's level of quality, including meeting certain benchmarks. Similarly, Preschool Grants for Children with Disabilities measures the number of states with at least 90 percent of special education teachers certified in the areas in which they are teaching.

Academic improvement or kindergarten readiness: Eight of the nine programs, including all seven of the programs with an explicit early learning purpose, assess academic improvement or kindergarten readiness. For example, for the Striving Readers Comprehensive Literacy program, Education measures the percentage of participating 4-year-old children who achieve significant gains in oral language skills. For Preschool Development Grants, Education measures the number and percentage of children served by the grant who are ready for kindergarten.

Although many of the programs with an explicit early learning or child care purpose assess common aspects of performance, the specific results agencies examine differ for a number of reasons. One reason is that agencies use different tools to assess performance.
For example, although all seven early learning programs assess some aspect of academic improvement or kindergarten readiness, programs vary in how they perform these assessments. The Striving Readers Comprehensive Literacy program provides grants to states and requires them to use approved state accountability assessments to determine most program performance measures. In contrast, the FACE program provides grants to Bureau of Indian Education-funded schools that implement in-home and center-based services, and uses a single student assessment tool to measure performance across participants. Another reason assessments differ is that some programs examine students' progress while they participate in the program, whereas others assess proficiency at a particular point in time. For instance, both of the special education programs administered by Education that we reviewed collect entry and exit data on children with disabilities who receive program services. Officials use these assessments to show children's developmental progress while in the program across a number of early childhood outcomes, including early language and literacy knowledge and skills. In contrast, other programs assess children only once, during the academic year. For example, the Preschool Development Grants program assesses children once at kindergarten entry. Programs also differ in the time frame over which officials assess performance. For example, HHS has assessed the long-term impact of Head Start and Early Head Start by evaluating the same cohort of children as they progress through later grades, even after the children stopped receiving program services. Other programs gauge children's performance annually while grantees receive funding, but do not evaluate them after they stop receiving grant-funded services.
(See appendix IV for additional performance details and a complete list of program performance measures for all nine programs with an explicit early learning or child care purpose.) In addition to assessing performance by program, HHS and Education each have an agency-wide priority goal that incorporates early learning or child care program performance measures, and both agencies assess performance on meeting these goals. The GPRA Modernization Act of 2010 (GPRAMA) requires that every 2 years, certain agencies identify their highest priority performance goals, which GPRAMA refers to as agency priority goals. Agencies are expected to identify performance measures to track progress on achieving these goals, or identify alternative ways of measuring progress, such as milestones for completing major deliverables. Education incorporated performance measures for the Preschool Development Grants program into its priority goal, and HHS incorporated performance measures from Head Start and CCDF (see table 5). Education and HHS have incorporated these goals into their strategic plans and have published updates on progress toward meeting their priority goals at performance.gov. Guidance from the Office of Management and Budget states that agencies are to identify tax expenditures, as appropriate, among the various federal programs and activities that contribute to their strategic objectives and agency priority goals. HHS officials told us that they decided the three child care-related tax expenditures we identified are not potential contributors to their agency priority goal to improve the quality of early childhood programs for low-income children. According to HHS officials, they concluded that low-income families who qualify for HHS’s key programs, such as CCDF, are not likely to benefit from these tax expenditures due to their lack of tax liabilities. 
Moreover, agency officials told us the child care-related tax expenditures we identified are available to help pay for dependent care and do not directly align with HHS's priority goal focused on program quality. According to Education officials, Education also does not incorporate these tax expenditures in its agency priority goal because families who benefit from publicly provided early learning services would not incur tuition costs, and therefore would not benefit from these expenditures. We provided a draft of this report for review and comment to the Departments of Education (Education), Health and Human Services (HHS), and the Interior (Interior). We also sent selected excerpts of the report to the Appalachian Regional Commission; to the Departments of Agriculture, Housing and Urban Development, Justice, Labor, and the Treasury; and to the General Services Administration. We received formal written comments from HHS, which are reproduced in appendix V. In addition, Education and HHS provided technical comments, which we incorporated as appropriate. Interior did not have comments on our report. In its written comments, HHS agreed with our findings and noted that children and families benefit most from investments in federal early learning and child care programs when they are coordinated with similar programs and activities. The agency also noted that it will continue to work with the Department of Education and other agencies to streamline resources for early learning and child care programs, to the extent permitted by law. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Education, Health and Human Services, and the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this review were to examine: (1) what is known about the federal investment in early learning and child care programs; (2) the extent to which early learning and child care programs are fragmented, overlap, or are duplicative, and the efforts agencies have made to manage these conditions; and (3) the extent to which agencies assess performance for programs with an explicit early learning or child care purpose. To count the number of federal early learning and child care programs, we examined the key benefits and services they provide. We used the following three criteria to identify relevant programs. The programs (1) funded or supported early learning or child care services, (2) were provided to children age 5 and under, and (3) delivered services in an educational or child care setting. We excluded kindergarten programs from our scope because early learning applies to children who are preschool age (age 5 and under), while children who are in kindergarten are grouped as part of the standard K-12 education demographic. We also excluded any programs that were not listed in the Catalog of Federal Domestic Assistance (CFDA), a government-wide compendium of federal programs, projects, services, and activities that provide assistance or benefits to the American public. Additionally, we limited our review to programs for which federal funds were obligated in fiscal year 2015, the most recent available obligations data at the time we conducted our work. As we did in 2012 and prior work, we excluded programs operated by the U.S. Department of Defense because these programs are available only to members of the military and their families. Furthermore, these programs are not listed in the CFDA. 
Tax expenditures included in this review include those that (1) fund or support early learning or child care services, (2) are obtained on behalf of children under age 5, and (3) forgo taxes so those funds can be used to purchase child care services occurring in an educational or child care setting. In identifying programs and tax expenditures for this work, we relied on the results of our 2012 report, agency responses to our questionnaires, and other supplemental information provided by agencies. We did not conduct an independent legal analysis to verify the accuracy of the information provided to us by agencies. To identify federal programs and tax expenditures, we started with the list of 45 programs and 5 tax expenditures in our 2012 review. For each program, we sent questionnaires to and received responses from nine agencies and one regional commission and conducted follow-up interviews to confirm that these programs and tax expenditures continued to meet all three of our criteria, which are described more fully below, in fiscal year 2015. We asked agency officials to identify additional programs meeting these criteria that were not included in our 2012 review. We reviewed descriptions in the Congressional Research Service’s 2014 Tax Compendium to identify any new tax expenditures that can be used for early learning or child care. We used the questionnaires we sent to the Department of the Treasury (Treasury) to confirm that the tax expenditures we identified in 2012 remained in effect in fiscal year 2015. In addition to agency responses to our questionnaires, we obtained supplementary information from the Departments of Education (Education), Health and Human Services (HHS), the Interior (Interior), and other agencies. 
Using a similar definition as in our prior review, we considered a program to have an explicit early learning or child care purpose if, according to our analysis, early learning or child care is specifically described as a program purpose in the CFDA or other agency information. After we identified programs and tax expenditures that met our criteria, we obtained information about fiscal year 2015 program obligations, the most recent year for which these data were available, from the President's budget for fiscal year 2017. We obtained information on estimated losses in revenue from fiscal year 2015 tax expenditures from Treasury's Tax Expenditure Estimates for fiscal year 2017. To analyze potential fragmentation, overlap, and duplication across programs, we assessed the programs' activities and target populations using questionnaire responses from the nine agencies and one commission, as well as from supporting documentation. We also interviewed Education and HHS officials regarding their efforts to coordinate with other agencies that administer early learning and child care programs, and examined documents about these efforts. We compared these efforts against leading practices for agency collaboration. In addition, we obtained information about program performance for programs with an explicit early learning or child care purpose by reviewing agency performance reports, congressional budget justifications, and program studies, and by interviewing agency officials. For programs with performance measures, we reported the results for the last 3 years of available data found in agency congressional budget justifications, or other publicly available sources. We also reviewed program studies that were cited in congressional budget justifications or other published performance documents, and obtained additional program studies that Education, HHS, and Interior conducted or used independent contractors to conduct.
Through these steps, we identified 30 studies published between 2002 and 2016. We then reviewed a subset of 27 studies that we determined to have assessed results aligned with the programs' objectives. We reviewed the methodological and analytical approaches of each of these studies to ensure that they were appropriate to measure program performance. The Department of Education (Education) has two performance measures for the Child Care Access Means Parents in School (CCAMPIS) program, but has not set program-wide targets (see table 7). Education reports performance on these measures separately for four-year and two-year institutions of higher education in its annual congressional budget justification. The fiscal year 2017 budget justification contained performance information for fiscal years 2012 and 2013. Education officials told us reporting requirements for the program changed recently, from 18- and 36-month performance reporting to annual performance reporting. At that time, officials revised the data collection instrument and program performance measures to reflect the annual data collection. As of 2016, officials said they have two years of baseline data and are considering setting performance targets for future years. Education has not conducted an evaluation of the CCAMPIS program. However, officials told us they use data from other Education-conducted evaluations of post-secondary students to gather contextual information about conditions associated with low-income student parents. Education has six performance measures for the Early Intervention Program for Infants and Toddlers with Disabilities and set program-wide targets for each of these (see table 8). In its congressional budget justification, Education reports its progress toward achieving these targets annually. Results have remained relatively stable among the six measures over the 3 years of data we reviewed (2012-2014).
However, in 2014, the most recent year of available data, Education reported meeting two performance targets: the number of states that serve at least 1 percent of infants in the general population under age 1 through this program, and the percentage of children receiving age-appropriate early intervention services in the home or in programs designed for typically developing peers. Other measures were relatively close to meeting targets. Education acknowledges that some data quality issues exist, particularly with regard to missing data. In various reauthorizations, the Individuals with Disabilities Education Act (IDEA) has included provisions for collecting information on the implementation and impact of the law. For example, in response to a requirement in the 1997 reauthorization, Education conducted four longitudinal child-based studies on specific age groups, including the National Early Intervention Longitudinal Study (NEILS). NEILS was a descriptive study of young children and families served through this program, the services they received, the cost of those services, and the outcomes that children and their families experienced. According to the authors of the final NEILS report, outcomes for families and children from birth through age 3 were generally positive. For example, nearly one third of children who received early interventions were not receiving special education services at kindergarten, nor did they have any disability. However, a large percentage of children who received early intervention services had communication problems, such as speech or communication delay, or the inability to make needs known, and for many children, these problems continued through kindergarten. The 2004 reauthorization of IDEA also called for a national assessment to measure the implementation progress and the relative effectiveness of the law. 
Among other reports published as part of the IDEA national assessment, Education published an analysis of the patterns in identification of and outcomes for children with disabilities as well as a report on IDEA implementation. Among other key findings, Education found that 32 states have early learning guidelines for infants and toddlers. Few states, however, provide a mandated or suggested written plan documenting the early intervention services a child should receive and how these services are to be administered for infants and toddlers and their families. Education also uses the Early Childhood Longitudinal Study, Birth Cohort, a nationally representative sample of children studied from birth through their entry into kindergarten, to obtain important demographic information on infants and toddlers with disabilities. The Preschool Development Grants program has three performance measures for its fiscal year 2014 cohort of grantees (see table 9). Education has not set program-wide performance targets, but requires individual grantees to set targets. Education published a progress update on grantee performance for the 2015-2016 school year, which was the program’s first year of implementation. In this update, Education reported that the state grantees met nearly 90 percent of their targets for the number of children served in high-quality preschool programs. Education has not yet published results for all three of its performance measures. Education officials told us that Education and HHS have not conducted an evaluation of this program, in part because the program is new and in the process of implementation. Education has five performance measures for Preschool Grants for Children with Disabilities and sets program-wide targets for each of these (see table 10). In its congressional budget justification, Education reports its progress toward achieving these targets annually. 
Results have remained relatively stable among the five measures over the 3 years of data we reviewed (2012-2014). However, in 2014, the most recent year of available data, Education reported meeting only one performance target—the percentage of children with disabilities (ages 3 through 5) attending a regular early childhood program and receiving the majority of hours of special education and related services in the regular early childhood program. Other measures were relatively close to meeting targets. Education acknowledges that some data quality issues exist, particularly with regard to missing data. In various reauthorizations, IDEA has included provisions for collecting information on the implementation and impact of the law. For example, in response to a requirement in the 1997 reauthorization, Education conducted four longitudinal child-based studies on specific age groups, including the Pre-Elementary Education Longitudinal Study (PEELS). PEELS was designed to describe young children with disabilities, their experiences, the services they receive, as well as their performance over time in preschool, kindergarten, and elementary school. Education has conducted several reports based on these longitudinal data. In the most recent report we reviewed, Education found, among other things, that children who received preschool special education services showed growth each year in vocabulary and mathematics; however, growth slowed in both subject areas as children got older. Children’s performance varied across assessments and across subgroups defined by disability. At age 10, the gap between these subgroups persisted, and there were no differences in growth rates between subgroups. The 2004 reauthorization of IDEA also called for a national assessment to measure the implementation progress and the relative effectiveness of the law. 
Among other reports published as part of the IDEA national assessment, Education published an analysis of the patterns in identification of and outcomes for children with disabilities as well as a report on IDEA implementation. Education found that the percentage of children identified for services increased every year from 1997 to 2006 for all preschool-age children, among other findings. Furthermore, children in each of the disability categories differed significantly from the general population in academic skills as well as social development. Among other key findings, this study found that 27 states either mandate or suggest a standards-based individualized education program for preschool age children. Such a program is designed to enable a child to make progress in the general education curriculum through the inclusion of state academic standards, among other things. Additionally, through two complementary cohort studies of children’s early school experiences, Education is currently investigating outcomes experienced by children with and without disabilities in preschool programs, through its National Center for Education Statistics. Education has 15 performance measures for the Promise Neighborhoods program (see tables 11 and 12). Education requires that individual grantees set their own performance targets. Education publicly reports available data for its two grant cohorts online. Due to data reliability concerns, Education officials have not reported results for some of the performance measures. Education officials stated that some of these data are under development because they have not been able to collect them consistently from their grantees. They noted that they have made efforts to address these concerns by investing in technical assistance to grantees. Education has not conducted an evaluation of the Promise Neighborhoods program. 
A key purpose of the Promise Neighborhoods Grant program is to learn how particular strategies affect student outcomes through a rigorous evaluation of the program. In 2014, we recommended that the Secretary of Education develop a plan to use the data collected from grantees to conduct a national evaluation of the program. As of May 2017, this recommendation remained unimplemented. Education has four performance measures for the Striving Readers Comprehensive Literacy program and sets program-wide targets for each of these (see table 13). In its congressional budget justification, Education reports its progress toward achieving these targets annually. One of the four performance measures is specific to the early learning population: the percentage of participating 4-year-old children who achieved significant gains in oral language skills. Results on this measure have declined over the 2 years of data we reviewed (2013-2015). Education reported meeting none of the four performance targets in 2015, the most recent year of available data. Education has not conducted an evaluation of the current Striving Readers Comprehensive Literacy program, but officials told us they are considering conducting one in the future. However, Education has synthesized evaluations of adolescent reading interventions implemented by the related 2006 and 2009 Striving Readers grant cohorts. The original Striving Readers program funded only the 2006 and 2009 grant cohorts and aimed to build a scientific research base for identifying and replicating strategies to improve adolescent literacy skills. For those cohorts, Education found that four of the 10 interventions had at least one study showing a positive effect on reading achievement. The remaining six interventions had no discernible effects. 
The findings from the studies funded by Striving Readers expanded the evidence base on effective reading interventions for adolescents by adding information on interventions not previously reviewed by Education's Institute of Education Sciences. HHS has six performance measures for CCDF and sets targets for the four that it considers outcome measures (see table 14), and reports results annually. HHS reported in its congressional budget justification that CCDF met two of its six performance targets in 2014, the last year of available data. Some of CCDF's measures are reviewed biannually. HHS is in the process of updating these measures due to recent legislative changes to the Child Care and Development Block Grant program. In addition, one of CCDF's measures is under development, and HHS is still building capacity to collect this information from all states. Officials plan to obtain this information from states and territories in 2017. HHS has conducted evaluations of CCDF to learn about different approaches to improving quality and helping parents retain employment. For example, one evaluation examined three types of programs that promote language development. This evaluation found that two of these programs improved teacher interactions with children and increased pre-literacy skills of children. Another evaluation found that expanding income eligibility and extending the time before families have to reapply for child care subsidies temporarily increased the use and stability of subsidy receipt. HHS has 11 performance measures for Head Start. In fiscal year 2015, the last year of available data, HHS reported progress toward achieving targets for the five measures that it considers related to outcomes or efficiency (see table 15). HHS reported meeting two of these five performance targets in fiscal year 2015.
For example, HHS reported that Head Start met its target for reducing the proportion of grantees receiving a score in the low range on a classroom assessment tool. Officials told us that improving the quality of teacher-child interactions, staff training and competency, and classroom environments have been primary goals of the program in recent years. HHS added a fifth outcome measure: increasing the percentage of Head Start and Early Head Start teachers who have a bachelor's degree or higher. In its fiscal year 2017 performance report, HHS collected baseline data on this measure and used this information to set targets for the future. In response to a mandate in the 1998 reauthorization of Head Start, HHS conducted a national level, random-assignment impact evaluation of the Head Start program (Head Start Impact Study). HHS published this impact study, which followed children through first grade, in 2010, and in a subsequent report, researchers followed the same cohorts of children as they transitioned to third grade. These impact studies assessed the advantages that 4-year-old children gained during 1 year of participation in Head Start and 3-year-old children gained during 2 years of participation in Head Start among cognitive, social-emotional, health, and parenting outcomes. The Head Start Impact Study showed that having access to Head Start improves children's preschool experiences and school readiness in certain areas, though few of those advantages persist through third grade. However, some subgroups of children in this study experienced sustained benefits into third grade. In more recent work, HHS examined the extent to which classroom quality affected outcomes observed in the impact study. Although some subgroups experienced sustained positive effects, the evaluation found little evidence to support that Head Start leads to program impacts lasting into third grade for participants overall, regardless of the program's quality level.
However, the study’s ability to detect effects of Head Start participation may have been limited by difficulty maintaining the random assignment of children to Head Start and to comparison groups, an important component of such impact studies. The 1994 reauthorization of Head Start that established the Early Head Start program called for an evaluation to focus on services delivered to families with infants and toddlers and the impacts of these services on children and families. In response, in 2002 HHS published a separate rigorous, random-assignment evaluation of the Early Head Start program. This evaluation investigated program impacts on children and families through their time in the program. Subsequent reports followed children and families as they transitioned to preschool and again in grade 5. HHS found, among other things, a consistent pattern of modest, favorable impacts across a range of outcomes when participating children were 2 and 3 years old, with larger impacts in some subgroups. However, some impacts on the full sample of children and families did not persist when assessed at grade 5, though some subgroup impacts remained. In more recent work, the Centers for Disease Control and Prevention linked data from this project with child welfare records and found that the program may be effective in reducing some kinds of child maltreatment among Early Head Start children when compared to children in a control group. Additionally, HHS collects information on the characteristics, experiences, and outcomes of children participating in Head Start through the Head Start Family and Child Experiences Survey (FACES). This survey provides data from five successive, nationally representative samples of Head Start participants. HHS also maintains a similar survey specific to the Early Head Start population called Baby FACES.
The Department of the Interior (Interior) does not have specific performance measures for the Family and Child Education (FACE) program. Instead, Interior uses an independent contractor to conduct annual reviews of the FACE program to obtain performance information. These reviews identify program outcomes and provide implementation data, which are summarized for the program overall and disaggregated for individual program sites. Interior reports a number of outcomes in these reviews, including children’s proficiency in math and literacy, parenting practices, and integration of native language and culture into FACE program instruction. In response to a 2004 Office of Management and Budget mandate, Interior contracted for an external impact evaluation of the FACE program. In 2008, Interior funded a second FACE impact study. Interior subsequently published a report that integrated findings from both impact studies. Selected findings from this report indicate that FACE parents included in this study were more likely to participate in literacy activities with their children and to be involved in their children’s school than parents of non-FACE participants. However, both groups of children appeared equally kindergarten-ready on several measures. Because FACE serves children with greater needs, the study concludes the program puts those children, as well as those with special needs, on a level playing field with their peers. However, the study did not account for some child and program factors that may have affected participants’ outcomes.

In addition to the contact named above, Rebecca Woiwode (Assistant Director), Hedieh Fusfield (Analyst-in-Charge), Colin Ashwood, and Karissa Robie made key contributions to this report. Additional assistance was provided by Kay Brown, Carol Henn, Brian James, Melissa Jaynes, John Lack, Kirsten Lauber, Benjamin T.
Licht, Elizabeth Mixon, Janet Mascia, Drew Nelson, Mimi Nguyen, Dae Park, Jessica Orr, James Rebbe, Marylynn Sergent, Stephanie Shipman, Deborah A. Signer, Almeta Spencer, Rachel Stoiko, Rebecca Kuhlmann Taylor, Sarah Veale, Betty Ward-Zuckerman, Greg Whitney, and Craig Winslow.
Millions of children age 5 and under participate each year in federally funded preschool and other early learning programs, or receive federally supported child care. Federal support for early learning and child care has evolved over time to meet emerging needs. In 2012, GAO reported that multiple federal agencies administer numerous early learning and child care programs. GAO was asked to re-examine federal programs that provide or support early learning and child care. This report examines 1) the federal investment in early learning and child care programs; 2) fragmentation, overlap, and duplication among early learning and child care programs and agencies' efforts to address these conditions; and 3) the extent to which agencies assess performance for programs with an explicit early learning or child care purpose. GAO analyzed responses to questionnaires from nine agencies and one regional commission; reviewed budget and tax expenditure documentation, evaluations, annual program performance results, and other agency documentation; and interviewed officials from HHS, Education, and Interior. Multiple federal programs may provide or support early learning or child care for children age 5 and under. Of these programs, nine describe early learning or child care as an explicit purpose and are administered by the Departments of Health and Human Services (HHS), Education (Education), and the Interior (Interior). Fiscal year 2015 obligations for these nine programs totaled approximately $15 billion, with the vast majority of these funds concentrated in Head Start and the Child Care and Development Fund. An additional 35 programs did not have an explicit early learning or child care purpose, but permitted funds to be used for these services. Additionally, three tax expenditures subsidized individuals' private purchase of child or dependent care. As GAO found in 2012, some early learning and child care programs are fragmented, overlap, or have potential for duplication. 
Specifically:

Fragmentation. The federal investment in early learning and child care is fragmented in that it is administered through multiple agencies.

Overlap. Some programs with an explicit early learning or child care purpose overlap, given that they target similar beneficiaries, such as low-income children, or engage in similar activities. However, these programs often have different goals and administrative structures.

Duplication. Some programs are potentially duplicative because they may fund similar types of services for similar populations. However, the extent to which actual duplication exists is difficult to assess due to differing program eligibility requirements and data limitations.

HHS and Education have helped address these conditions through improved agency coordination, particularly by following leading practices for interagency collaboration. For example, in response to needed actions GAO identified in 2012, HHS and Education expanded membership of their inter-departmental workgroup on young children to include other agencies with early learning and child care programs. The agencies have also documented their agreements, dedicated staff time to promote the goals and activities of this inter-departmental workgroup, and issued joint policy statements. The resulting improvement in coordination has helped mitigate the effects of fragmentation and overlap. HHS, Education, and Interior use different methods to assess performance for the nine programs with an explicit early learning or child care purpose. These agencies collect performance information through various combinations of performance monitoring, program evaluations or studies, and other information, such as grantee-submitted reports. In addition, they collect performance information that aligns with program objectives, and many programs examine common aspects of performance. However, the specific results agencies assess differ for a number of reasons.
For example, some programs assess children only while they receive services, while others assess later impacts on children. GAO makes no recommendations in this report. In its comments, HHS agreed with GAO's findings and noted that children benefit most from investments in federal early learning and child care programs when they are coordinated with similar programs. Education and HHS also provided technical comments, which GAO incorporated as appropriate.
Medicaid finances the delivery of health care services for a diverse low-income and medically needy population. The Social Security Act, which Congress amended in 1965 to establish the Medicaid program, provides the statutory framework for the program, setting broad parameters for states that choose to participate and implement their own Medicaid programs. CMS is responsible for overseeing state Medicaid programs to ensure compliance with federal requirements. Historically, Medicaid eligibility has been limited to certain categories of low-income individuals—such as children, parents, pregnant women, persons with disabilities, and individuals age 65 and older. In addition to these historical eligibility standards, PPACA permitted states to expand their Medicaid programs by covering non-elderly, non-pregnant adults with incomes at or below 133 percent of the federal poverty level (FPL). As of May 2015, 29 states, including the District of Columbia, had expanded their Medicaid programs to cover this new adult group, and one other state’s proposed expansion was pending federal approval. Federal law requires state Medicaid programs to cover a wide array of mandatory services, and permits states to cover additional services at their option. Consequently, Medicaid generally covers a wide range of health care services that can be categorized into broad types of coverage, including hospital care; non-hospital acute care, such as physician, dental, laboratory, and preventive services; prescription drugs; and LTSS in institutions and in the community. (See figure 2 for an overview of Medicaid expenditures by category.) In recent years, we and others have examined patterns of service utilization and expenditures within the Medicaid population and found that enrollment and expenditures vary among the different categories of enrollees.
For example, for fiscal year 2011, children constituted the largest category of enrollees (47.4 percent), but accounted for a small share of Medicaid expenditures (19 percent). In that same year, enrollees with disabilities (14.7 percent of Medicaid enrollees) accounted for the largest share of Medicaid expenditures (42.7 percent). (See fig. 3.) In addition, we found that, generally, a small subset of Medicaid enrollees—such as those with institutional care needs or chronic conditions—account for a large portion of Medicaid expenditures. States have traditionally provided Medicaid benefits using a fee-for-service system, where health care providers are paid for each service delivered. However, according to CMS, in the past 15 years, states have increasingly implemented managed care systems for delivering Medicaid services. In a managed care delivery system, enrollees obtain some portion of their Medicaid services from a managed care organization (MCO) under contract with the state, and capitation payments to MCOs are typically made on a predetermined, per person per month basis. Nationally, about 37 percent of Medicaid spending in fiscal year 2014 was attributable to Medicaid managed care. Many states are expanding their use of managed care to additional geographic areas and Medicaid populations. States oversee Medicaid MCOs through contracts and reporting requirements. CMS provides oversight and technical assistance for the Medicaid program, but states are primarily responsible for administering their respective Medicaid programs’ day-to-day operations—including determining eligibility, enrolling individuals and providers, and adjudicating claims—within broad federal requirements. Each state has a Medicaid state plan that describes how the state will administer its Medicaid program consistent with federal requirements. States submit these state plans for approval to CMS, but have significant flexibility to structure their programs to best suit their needs.
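The contrast between a group’s enrollment share and its expenditure share can be summarized as a per-enrollee spending index (expenditure share divided by enrollment share), where 1.0 means the group costs the program-wide average per enrollee. The sketch below applies this to the fiscal year 2011 shares cited above; the index is an illustrative construction for this discussion, not an official CMS statistic.

```python
# Per-enrollee spending index: a group's share of total Medicaid
# expenditures divided by its share of total enrollment. A value of
# 1.0 means the group costs the program-wide average per enrollee.
# Shares are the fiscal year 2011 figures cited above.
shares = {
    "children": {"enrollment": 47.4, "expenditures": 19.0},
    "enrollees with disabilities": {"enrollment": 14.7, "expenditures": 42.7},
}

for group, s in shares.items():
    index = s["expenditures"] / s["enrollment"]
    print(f"{group}: {index:.2f}x the average cost per enrollee")
```

Run as written, this shows children at roughly 0.4 times the average cost per enrollee and enrollees with disabilities at roughly 2.9 times, which is why a small subset of enrollees can account for a large portion of expenditures.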
In addition, within certain parameters, states may innovate outside of many of Medicaid’s otherwise applicable requirements through Medicaid demonstrations, with HHS approval. For example, states may test ways to obtain savings or efficiencies in how services are delivered in order to cover otherwise ineligible services or populations.

The regular FMAP is based on each state’s per capita income (PCI) relative to the national average:

State FMAP = 1.00 − 0.45 × (State PCI / U.S. PCI)²

Federal law specifies that the regular FMAP will be no lower than 50 percent and no higher than 83 percent. For fiscal year 2015, regular FMAP rates ranged from 50.00 percent to 73.58 percent. Under PPACA, state Medicaid expenditures for certain Medicaid enrollees, newly eligible under the statute, are subject to a higher federal match. States that choose to expand their Medicaid programs receive an FMAP of 100 percent beginning in 2014 for expenditures for the PPACA-expansion enrollees—those who were not previously eligible for Medicaid and are eligible now under PPACA’s expansion of eligibility criteria. The FMAP is to gradually diminish to 90 percent by 2020. States also receive an FMAP above the state’s regular match (but below the PPACA-expansion FMAP) for their Medicaid expenditures for the state-expansion enrollees—those who would not have been eligible for Medicaid prior to PPACA except that they were covered under a state’s pre-PPACA “expansion” of eligibility through, for example, a Medicaid demonstration. This FMAP is to gradually increase and eventually equal the FMAP for the PPACA-expansion enrollees beginning in 2019. The formula used to calculate the state-expansion FMAP rates is based on a state’s regular FMAP rate, so the enhanced FMAP rate will vary from state to state until 2019. See figure 5 for the variation across states in the regular FMAP; Medicaid spending, enrollment, and managed care enrollment; and whether the state had expanded Medicaid coverage to newly eligible adults under PPACA as of May 2015. See appendix II for the information in tabular form.
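The regular FMAP formula and its statutory bounds can be sketched as a small function. This is a minimal illustration of the calculation described above; the per capita income figures in the usage lines are hypothetical.

```python
def regular_fmap(state_pci: float, us_pci: float) -> float:
    """Regular federal medical assistance percentage (FMAP).

    FMAP = 1.00 - 0.45 * (state PCI / U.S. PCI)^2, bounded by the
    statutory 50 percent floor and 83 percent ceiling.
    PCI = per capita income.
    """
    fmap = 1.00 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(fmap, 0.50), 0.83)

# Hypothetical per capita incomes, for illustration only:
print(regular_fmap(30_000, 40_000))  # below-average income: FMAP above 50 percent
print(regular_fmap(50_000, 40_000))  # high income: clamped to the 50 percent floor
```

Because the income ratio is squared, the federal match rises faster as a state’s per capita income falls further below the national average.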
Medicaid enrollees report access to medical care that is generally comparable to that of privately insured individuals. However, some enrollees may face access challenges, such as in obtaining specialty care or dental care. CMS has taken steps to help ensure enrollees’ access to care, and additional steps could bolster those efforts. CMS also has ongoing efforts to collect data from states to help assess Medicaid enrollees’ access to care, but better data are needed.

Medicaid enrollees report access to care that is generally comparable to the privately insured, but some face particular access challenges.

We have found that Medicaid enrollees report experiencing access to medical care that is generally comparable to that of privately insured individuals. For example, according to national survey data, few enrollees covered by Medicaid for a full year—less than 4 percent—reported difficulty obtaining necessary medical care or prescription medicine in 2008 and 2009, similar to privately insured individuals. (See fig. 6.) Regarding children, respondents with children covered by Medicaid reported positive responses to most questions about their ability to obtain care, and at levels generally comparable to those with private insurance, from 2007 through 2010. Although few Medicaid enrollees report difficulty obtaining necessary care in general, our work indicates that particular populations can face challenges obtaining care. For example, about 7.8 percent of working-age adults with full-year Medicaid reported difficulty obtaining care compared with 3.3 percent of similar adults with private insurance—a statistically significant difference. Some enrollees also face logistical barriers to obtaining care. For example, Medicaid enrollees were more likely than individuals with private insurance to report factors such as lack of transportation and long wait times as reasons for delaying medical care. (See fig. 7.)
We have also found that Medicaid-covered adults may be more likely than individuals with private insurance to have certain health conditions, such as obesity and diabetes, that can be identified and managed through preventive services. However, states’ Medicaid coverage of certain preventive services for adults has varied, which has resulted in different levels of coverage across states. Specialty care, such as mental health care and dental care, may be particularly difficult for some Medicaid enrollees to obtain.

Access to Specialty Care, Including Mental Health Care

National surveys of enrollees and our own surveys of state Medicaid officials and physicians have consistently indicated that Medicaid enrollees may have difficulty obtaining specialty care, such as mental health care. In our survey of state Medicaid officials in 2012, for example, officials in about half of the states reported challenges ensuring enough participating specialty providers for Medicaid enrollees, such as in obstetrics and gynecology, surgical specialties, and pediatric services. In addition, we found that about 21 percent of respondents with Medicaid-covered children reported that it was only sometimes or never easy to see a specialist, compared to about 13 percent of respondents with privately insured children, from 2008 through 2010. Our 2010 national survey of physicians found that specialty physicians were generally more willing to accept privately insured children as new patients than Medicaid-covered children; similarly, more physicians reported having difficulty referring Medicaid-covered children to specialty providers than reported having difficulty referring privately insured children. (See fig. 8.) We have also found that both Medicaid-covered adults and children may face challenges obtaining mental health care. Research has shown that Medicaid enrollees experience a higher rate of mental health conditions than those with private insurance.
Officials we interviewed from six states that expanded Medicaid under PPACA generally reported that Medicaid expansion had increased the availability of mental health treatment for newly eligible adults, but cited access concerns for new Medicaid enrollees due to shortages of Medicaid-participating psychiatrists and psychiatric drug prescribers. In our 2012 national survey, state officials reported problems ensuring sufficient psychiatry providers for Medicaid enrollees. Among Medicaid-covered children, national survey data from 2007 through 2009 indicated that 14 percent of noninstitutionalized Medicaid-covered children had a potential need for mental health services, but most of these children did not receive mental health services. In addition, many Medicaid-covered children who took psychotropic medications (medications that affect mood, thought, or behavior) did not receive other mental health services during the same year. In December 2011, we reported that Medicaid-covered children in foster care in selected states were prescribed psychotropic medications at higher rates than nonfoster children in Medicaid during 2008. In 2012, HHS’s Administration for Children and Families issued guidance to state agencies seeking to improve their monitoring and oversight practices for psychotropic medications. See GAO, Foster Children: Additional Federal Guidance Could Help States Better Plan for Oversight of Psychotropic Medications Administered by Managed-Care Organizations, GAO-14-362 (Washington, D.C.: April 28, 2014); and Foster Children: HHS Guidance Could Help States Improve Oversight of Psychotropic Prescriptions, GAO-12-201 (Washington, D.C.: Dec. 14, 2011). In comments on our draft report, CMS indicated that it no longer agreed with our recommendation that additional guidance was necessary, stating that its existing guidance applied to managed care settings. We continue to believe that our recommendation is valid.
We found that many states were using, or were transitioning to, managed care organizations to administer prescription-drug benefits, and that selected states had taken only limited steps to plan for the oversight of drug prescribing for foster children receiving care through these organizations—creating a risk that controls instituted under fee-for-service may not remain once states move to managed care. As we reported, additional HHS guidance that helps states prepare and implement monitoring efforts within the context of a managed-care environment could help ensure appropriate oversight of psychotropic medications to children in foster care. In recent years, Medicaid enrollees’ use of dental services increased, but some access problems persist. We found that while the percentage of individuals with Medicaid dental coverage who had a dental visit increased from 28 percent in 1996 to 34 percent in 2010, individuals with Medicaid dental coverage were still much less likely than privately insured individuals to have visited the dentist. About two-thirds of Medicaid-covered children, for example, did not visit the dentist at all in 2010, while most privately insured children did. (See fig. 9.) This difference in use of dental services persisted despite the fact that Medicaid-covered children may have a greater need for dental care than privately insured children. We have found that Medicaid-covered children are almost twice as likely to have untreated tooth decay as privately insured children. In addition, states have found it particularly challenging to ensure a sufficient number of dental providers for Medicaid enrollees. CMS has taken some steps to address access to dental care, and other steps could build on those efforts.
In 2010, for example, the agency launched a Children’s Oral Health Initiative that aimed to, among other things, increase the proportion of Medicaid and State Children’s Health Insurance Program (CHIP) children who receive a preventive dental service. In response to our prior recommendations, CMS also took steps to ensure that states gather information on the provision of Medicaid dental services by managed care programs, and to improve the accuracy of the data on HHS’s Insure Kids Now website, which provides state-reported information on dentists who serve children enrolled in Medicaid and CHIP. We recommended that CMS require states to verify that dentists listed on the Insure Kids Now website have not been excluded from Medicaid by HHS, and periodically verify that excluded providers are not included on the lists of dentists posted by the states. However, CMS has said that it relies on states to provide accurate lists of eligible dentists and that data issues prevent the agency from independently verifying that excluded providers are not included on the website. We continue to believe that CMS should require states to ensure that excluded providers are not listed on the website, so that it does not present inaccurate information about providers available to serve Medicaid-covered children. We also recommended that for states that provide Medicaid dental services through managed care organizations, CMS ensure that states with inadequate managed care dental provider networks take action to strengthen these networks. CMS has reported taking steps to improve these networks, including meeting with national dental associations, but we believe more can be done to identify inadequate networks and, once inadequate networks are identified, to work with states to strengthen them to help ensure that they meet the needs of Medicaid enrollees.
CMS has ongoing efforts to collect data from states to help assess Medicaid enrollees’ access to care and identify areas for improvement. States are required to submit certain types of data to CMS, and they can opt to submit other types of data. For example, states are required to submit reports on the provision of certain services for eligible children, as part of the Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit. These reports, known as CMS 416 reports, include such information as the number of children receiving well-child checkups and the number of children referred for treatment services for conditions identified during well-child checkups. CMS has used these reports to identify states with low reported rates of service provision and, thus, state Medicaid programs needing improvement. In addition, states voluntarily report Child Core Set measures, which assess the quality of care provided through Medicaid and CHIP, and include, for example, measures of access to primary care and the receipt of follow-up care for children prescribed attention deficit hyperactivity disorder medication. Also, states that use managed care plans to deliver services for Medicaid and CHIP enrollees are required to annually review these plans to evaluate the quality, timeliness, and access to services that the plans provide—and submit their “external quality review” reports to CMS. In 2011, we reported on problems and gaps in the required CMS 416 reports. We found that states sometimes made reporting errors, and in some cases those errors overstated the extent to which children received well-child checkups. In addition, states did not always report required data on how many Medicaid-covered children were referred for additional services to address conditions identified through checkups.
Finally, we found that CMS did not require states to report information on whether Medicaid-covered children actually received services for which they were referred—or to report information separately for children in managed care versus those in fee-for-service systems. CMS has since taken steps to improve the CMS 416 data, and we believe more can be done, as discussed below. In response to our recommendation that CMS establish a plan to review the accuracy and completeness of the CMS 416 data and ensure that problems are corrected, CMS has established an automated quality assurance process to identify obvious reporting errors and as of March 2015 was developing training for state staff responsible for the data. We also recommended that CMS work with states to explore options for capturing information on children’s receipt of services for which they were referred. The agency has issued guidance to states about how to report referrals for health care services, but has not required states to report whether children receive services for which they are referred. CMS officials noted that data collection tools other than the CMS 416 reports, such as the Child Core Set measures, provide CMS with information on whether children are receiving needed care—and that HHS was developing additional measures to help fill gaps in assessing children’s care. While these are positive steps, we have noted that CMS’s ability to monitor children’s access to services is dependent on consistent, reliable, complete, and sufficiently detailed data from each state. The Child Core Set measures, for example, are voluntarily reported by states, and we have reported that although the number of states reporting measures has increased in recent years, states have varied considerably in the number of measures they reported. We continue to believe that information on whether children receive services for which they are referred is important for monitoring and ensuring their access to care.
We also recommended that CMS work with states to explore options for reporting on the receipt of services separately for children in managed care and fee-for-service delivery models. However, CMS officials indicated that they do not plan to require states to report such information, in part, to limit the reporting burden for states. CMS officials added that the states report information on children’s access to care through their managed care external quality review reports. While this is a positive step, these reports do not represent a consistent set of measures used by all states that CMS can use for oversight purposes. We continue to believe that having accurate and complete information on children’s access to health services, by delivery model, is an important element of monitoring and ensuring access to care and that CMS should fully implement this recommendation. Our 2015 report on managed care services used by Medicaid enrollees in 19 states highlighted the importance of having reliable data to help understand patterns of managed care utilization and the impact that managed care delivery models may have on enrollees’ access to care. We found that the number of managed care services used by adult and child enrollees varied by state, population, type of service, and whether enrollees were enrolled in comprehensive managed care plans for all or part of the year. For example, the number of services per enrollee per year for adults ranged from about 13 to 55 across the 19 states. (See fig. 10.) With regard to children, in almost every selected state, total service utilization was higher for children who were enrolled in comprehensive managed care for less than the full year than for those enrolled for a full year. This type of information can be useful for understanding access to care among enrollees in Medicaid managed care plans.
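Per-enrollee utilization figures like those above come from dividing claims counts by enrollee counts within each group. The sketch below uses hypothetical figures for one state; the numbers and field names are illustrative only, and a real analysis would normalize by member-months to annualize part-year enrollment.

```python
# Hypothetical managed care claims totals for one state, by group.
# Figures and field names are illustrative, not CMS data.
groups = {
    "adults, full-year enrollment": {"services": 1_650_000, "enrollees": 60_000},
    "children, full-year enrollment": {"services": 900_000, "enrollees": 75_000},
    "children, part-year enrollment": {"services": 480_000, "enrollees": 30_000},
}

for name, g in groups.items():
    per_enrollee = g["services"] / g["enrollees"]
    print(f"{name}: {per_enrollee:.1f} services per enrollee")
```

In this made-up example, part-year children show higher per-enrollee utilization than full-year children, which is the pattern the report observed in almost every selected state.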
A detailed, interactive display of the data used to support our findings is available at http://www.gao.gov/products/GAO-15-481. Given Medicaid’s size, complexity, and diversity, transparency in how funds are used is critical to ensuring fiscal accountability. However, a lack of reliable CMS data about program payments and state financing of the non-federal share of Medicaid hinders oversight, and our work has pointed to the need for better data, as well as improved policy and oversight, to ensure that funds are being used appropriately and efficiently. In addition, gaps in HHS’s criteria, process, and policy for approving states’ demonstration spending raise questions about billions of dollars in federal spending. While HHS and CMS have taken important steps in recent years to improve transparency, oversight, and fiscal accountability, more can be done to build on those efforts. Improving HHS’s review and approval process for demonstration spending may prevent unnecessary federal spending. Data limitations hinder the transparency of program payments and state financing sources, and hinder federal oversight.

Federal Data on Program Payments

CMS does not have the complete and reliable data needed to understand the payments states make to individual providers, nor does it have a transparent policy and standard process for assessing whether those payments are appropriate. States have considerable flexibility in setting payment rates that providers, such as hospitals, receive for services rendered to individual Medicaid enrollees. In addition to these regular claims-based payments, states may make (and obtain federal matching funds for) payments to certain providers that are not specifically linked to services Medicaid enrollees receive. These payments can help offset any remaining costs of care for Medicaid patients, and in some cases can be used to offset costs incurred treating uninsured patients.
These types of payments are known as supplemental payments, which include disproportionate share hospital (DSH) payments and other payments, such as those known as Medicaid upper payment limit (UPL) supplemental payments. States have some flexibility in how they distribute supplemental payments to individual providers. However, Medicaid payments to providers should not be excessive, as the law states that they must be “economical and efficient.” We have had longstanding concerns about federal oversight of supplemental payments, which our work has found to be a significant and growing component of Medicaid spending, totaling at least $43 billion in fiscal year 2011. CMS oversight of provider supplemental payments is limited because the agency does not require states to report provider-specific data on these payments, nor does it have a policy and standard process for determining whether Medicaid payments to individual providers are economical and efficient. States may have incentives to make excessive Medicaid payments to certain institutional providers, such as local government hospitals needing or receiving financial support from the state. Absent a process to review these payments—and absent data on total payments individual providers receive—the agency may not identify potentially excessive payments to providers, and the federal government could be paying states hundreds of millions—or billions—of dollars more than what is appropriate, as shown in the examples below.

Efforts to ensure only eligible individuals and providers participate in Medicaid can be improved.

The federal government and the states both play important roles in ensuring that Medicaid payments made to health care providers and managed care organizations are correct and appropriate.
The size and diversity of the Medicaid program make it particularly vulnerable to improper payments—including payments made for treatments or services that were not covered by program rules, that were not medically necessary, or that were billed for but never provided. Medicaid improper payments are a significant cost to Medicaid—totaling an estimated $17.5 billion in fiscal year 2014, according to HHS. Due to our concerns about Medicaid's improper payment rate and the sufficiency of federal and state oversight, we added Medicaid to our list of high-risk programs in 2003. Federal and state efforts help ensure the appropriate use of funds by (1) identifying and preventing improper payments in both fee-for-service and managed care, (2) setting appropriate payment rates for managed care organizations, and (3) ensuring only eligible individuals and providers participate in Medicaid. Responsibility for program integrity activities is spread across multiple state and federal entities, resulting in fragmented efforts; this fragmentation creates the potential for unnecessary duplication (which we have previously identified in some areas) as well as for some program areas to go uncovered. The combined federal and state efforts have recovered only a small portion of the estimated improper payments in Medicaid, and the Medicaid improper payment rate has recently increased. These factors, coupled with recent and projected increases in Medicaid spending, heighten the importance of coordinated and cost-effective program integrity efforts. CMS has taken many important steps in recent years to help improve program integrity—including some in response to our recommendations—and we believe even more can be done in this area.

Coordinating to Minimize Duplication and Ensure Coverage

Our work has highlighted how careful coordination of federal and state efforts is necessary to both avoid duplication and ensure maximum program coverage. Given the number of entities involved in program integrity efforts, coordination among entities is critical. (See fig.
14.) Without careful coordination, the involvement of multiple state and federal entities in Medicaid program integrity results in fragmented efforts, possibly leaving some program areas insufficiently covered. In 2014, we reported a gap in oversight of the growing expenditures on Medicaid managed care, which constituted over a quarter of federal Medicaid expenditures in 2011. In particular, we found that the federal government and the states were not well positioned to identify improper payments made to—or by—managed care organizations. We found that CMS had largely delegated managed care program integrity oversight activities to the states, but states generally focused their efforts on fee-for-service claims. We concluded that further federal and state oversight, coupled with additional federal guidance and support to states, could help ensure that managed care organizations are taking appropriate actions to identify and prevent improper payments. Specifically, we recommended that CMS

1. require states to conduct audits of payments to and by managed care organizations;

2. update CMS's Medicaid managed care guidance on program integrity practices and effective handling of recoveries by managed care plans; and

3. provide states with additional support in overseeing Medicaid managed care program integrity, such as the option to obtain audit assistance from existing Medicaid integrity contractors.

CMS generally agreed with our recommendations, and has taken steps to provide states with additional guidance. In October 2014, CMS made available on its website the managed care plan compliance toolkit to provide further guidance to states and managed care plans on identifying improper payments to providers. In addition, agency officials told us that, as of December 2014, at least six states were using their audit contractors to audit managed care claims.
While CMS has taken steps to improve oversight of Medicaid managed care, the lack of a comprehensive program integrity strategy for managed care leaves a growing portion of Medicaid funds at risk. In our view, CMS actions to require states to conduct audits of payments to and by managed care organizations, and to update guidance on Medicaid managed care program integrity practices and recoveries, are crucial to improving program integrity, and we will continue to follow CMS's actions in this area. (Appendix I includes our open recommendations regarding Medicaid improper payments, which we believe could help reduce improper payments if implemented.) On June 1, 2015, the agency issued a proposed rule to revise program integrity policies, including policy measures that we have recommended. Among other measures, if finalized, the rule would require states to conduct audits of managed care organizations' encounter and financial data every three years. Additionally, the proposed rule would standardize the treatment of recovered overpayments by plans.

Our work has highlighted the importance of focusing state and federal resources on cost-effective efforts to identify improper payments. States' information systems are a key component of program integrity activities; states' efforts include receiving, reviewing, and paying Medicaid claims, as well as auditing claims payments after the fact. Consistent with the requirements defined by CMS, states use Medicaid Management Information Systems (MMIS) provider and claims processing subsystems to perform program integrity activities related to provider enrollment and prepayment review. (See fig. 15.) Our work has shown that the effectiveness of states' information systems used for program integrity purposes is uncertain. In 2015, we reviewed 10 states' use of information technology systems to support efforts aimed at preventing and detecting improper payments.
These states’ information systems ranged in age and capability, with 3 of the 10 states’ operating systems being more than 20 years old. However, the effectiveness of the states’ use of the systems for program integrity purposes is not known, and we recommended that CMS require states to measure and report quantifiable benefits of program integrity systems when requesting federal funds, and to reflect their approach for doing so. CMS concurred with these recommendations. In our past work, we also recommended—and CMS acted on—other measures to streamline program integrity efforts, as shown in the following examples. CMS’s hiring of separate review and audit contractors for its program integrity efforts was inefficient and led to duplication because key functions—such as assessing whether payments were improper and learning states’ Medicaid policies—were performed by both contractors. We recommended that CMS eliminate duplication between the separate contractors, which CMS did in conjunction with the agency’s redesign of its Medicaid Integrity Program. This redesign eliminated the review contractor function and included a more collaborative and coordinated audit approach that leverages state expertise to identify potential audit targets, and relies on more complete and up-to-date state Medicaid claims data. Two CMS oversight tools—the state comprehensive reviews and the state program integrity assessments—were duplicative because both tools were used to collect similar information from the states. Furthermore, we found that the state program integrity assessments contained unverified and inaccurate data. We recommended that CMS eliminate this duplication, and CMS subsequently discontinued the state program integrity assessments. CMS’s comprehensive reviews of states’ program integrity efforts contained important information about all aspects of states’ program integrity capabilities. 
However, we found no apparent connection between the reviews' findings and CMS's selection of states for audits. We recommended that CMS use the knowledge gained from the comprehensive reviews as a criterion for focusing audit resources toward states with structural or data-analysis vulnerabilities. CMS agreed and, among other steps, in 2013 redesigned the reviews to streamline the process, reduce the burden on states, and refocus the reviews on risk assessment.

Ensuring Medicaid Remains a Payer of Last Resort

CMS and the states must ensure that, if Medicaid enrollees have another source of health care coverage, that source pays, to the extent of its liability, before Medicaid does. Medicaid enrollees may have health care coverage through third parties—such as private health insurers—for a number of reasons. For example, some adults may be covered by employer-sponsored insurance even though they qualify for Medicaid. Similarly, children may be eligible for Medicaid while being covered under a parent's health plan. Figure 16 shows the estimated prevalence of private health insurance among Medicaid enrollees. In 2015, we found that states had adopted various approaches to identify enrollees with insurance other than Medicaid, and states were working to ensure that these third parties paid for health care services to the extent of their liability before Medicaid. However, these states needed additional CMS guidance and support in these efforts. We concluded that CMS should play a more active leadership role in monitoring, supporting, and promoting state third-party liability efforts. Specifically, we recommended that CMS

1. routinely monitor and share across all states information regarding key third-party liability efforts and challenges, and

2. provide guidance to states on their oversight of third-party liability efforts conducted by Medicaid managed care plans.
CMS concurred with our recommendations, and stated that it would continue to look at ways to provide guidance to states to allow for sharing of effective practices and to increase awareness of initiatives under development in states. CMS also stated that it would explore the need for additional guidance regarding state oversight of third-party liability efforts conducted by Medicaid managed care plans. In the preamble to the June 1, 2015, proposed rule, the agency indicated it plans to issue guidance, which would require managed care plans to include information on third-party liability amounts in the encounter data submitted to states. We will continue to follow CMS’s actions in this area. Managed care is designed to ensure the provision of appropriate health care services in a cost-efficient manner. However, the design of capitation payments, which are made prospectively to health plans to provide or arrange for services for Medicaid enrollees, can create incentives that adversely affect program integrity and patient care. For example, these payments may create an incentive to underserve or deny access to needed care. Thus, appropriate safeguards are needed to ensure access to care and appropriate payment in Medicaid managed care. In 2010, we found that CMS’s oversight of states’ Medicaid managed care rate setting methodologies was not consistent across its regional offices, and that in assessing the quality of the data used to set rates, the agency primarily relied on state and health plan assurances, thereby placing billions of federal and state dollars at risk. We found significant gaps in CMS’s oversight. For example, in one instance, the agency had not reviewed one state’s rate setting for multiple years, resulting in the state receiving approximately $5 billion a year in federal funds for three years without having had its rates reviewed by CMS. 
We also found that regional offices varied in their interpretations of how extensive a review of states' rate setting was needed and the sufficiency of evidence for meeting actuarial soundness requirements, among other things. We recommended that CMS

1. implement a mechanism to track state compliance with requirements,

2. clarify guidance on rate-setting reviews, and

3. make use of information on data quality in determining the appropriateness of managed care capitation rates.

As a result of our work, CMS implemented a detailed checklist to standardize the regional offices' reviews. CMS has also taken a number of other steps to improve its oversight of states' rate setting. In 2014, CMS completed its development of a database to track contracts, including rate-setting reviews. According to agency officials, as of March 2015, 57 rate submissions had been submitted to the database and were undergoing review by CMS's Office of the Actuary and the Division of Managed Care Plans. CMS officials reported that the agency had developed a managed care program review manual, which included modules on financial oversight, and had updated rate setting and contract review tools. In 2014, the agency released its 2015 Managed Care Rate Setting Consultation Guide, which clarified the agency's requirements relating to the information states must submit in developing their rate certifications, including a description of the type, sources, and quality of the data used by the state in setting its rates. On June 1, 2015, the agency issued a proposed rule that, if finalized, would make changes to Medicaid managed care rate setting, such as requiring more consistent and transparent documentation of the rate setting process to allow for more effective reviews of states' rate certification submissions. We will continue to follow CMS's actions in this area.
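To make the rate-setting discussion above concrete, the sketch below shows a heavily simplified per-member-per-month (PMPM) capitation calculation. Actual actuarially sound rate development involves far more (trend studies, risk adjustment, rate ranges, non-benefit cost analysis), and every figure and parameter name here is hypothetical, not CMS's or any state's method.

```python
# Illustrative-only sketch of a per-member-per-month (PMPM) capitation rate
# built from base claims experience. Real Medicaid rate certifications are
# governed by actuarial soundness requirements and review much richer data.

def pmpm_rate(base_claims: float, member_months: int,
              trend: float = 1.0, admin_load: float = 0.10) -> float:
    """Base experience PMPM, trended forward, with a non-benefit cost load."""
    base_pmpm = base_claims / member_months
    return base_pmpm * trend * (1 + admin_load)

# Hypothetical plan: $120M in claims over 300,000 member months, 5% trend,
# 12% load for administration and margin.
rate = pmpm_rate(120_000_000, 300_000, trend=1.05, admin_load=0.12)
print(f"${rate:.2f} PMPM")
```

The sketch also hints at why data quality matters so much in the report's findings: the entire rate is driven by the base claims experience, so unverified claims data flows directly into billions of dollars of capitation payments.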
Both CMS and the states play an important role in ensuring that only eligible individuals receive Medicaid coverage and that only eligible providers receive payment. Our work has highlighted several issues facing CMS and the states in their efforts to minimize fraud in Medicaid eligibility among both enrollees and providers. To be eligible for Medicaid coverage, applicants must meet financial and nonfinancial requirements, such as federal and state requirements regarding residency, immigration status, and documentation of U.S. citizenship. Similarly, to participate in Medicaid, providers must enroll and submit information about their ownership interests and criminal background. States must screen potential Medicaid providers, search exclusion and debarment lists, and take action to exclude those providers who appear on those lists. Using 2011 data, we recently identified indications of potentially fraudulent or improper payments related to certain Medicaid enrollees and paid to some providers, as shown in our review of approximately 9 million enrollees in four states and summarized below. While these cases indicate only potentially improper payments, they raise questions about the effectiveness of beneficiary and provider enrollment screening controls.

We identified about 8,600 enrollees who had payments made on their behalf concurrently by two or more of our selected states. The selected states approved benefits of at least $18.3 million for these enrollees.

We identified about 200 deceased enrollees in the four states who appear to have received Medicaid benefits totaling at least $9.6 million. Specifically, our analysis matching Medicaid data to the Social Security Administration's data on date of death found these individuals were deceased before the Medicaid service was provided.
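The date-of-death matching just described can be sketched as a simple join between claims records and a death file, flagging any claim whose service date falls after the enrollee's recorded death. The field names and records below are hypothetical illustrations, not actual MMIS or SSA schemas.

```python
from datetime import date

# Hypothetical records; field names are illustrative only.
claims = [
    {"enrollee_id": "A1", "service_date": date(2011, 3, 10), "amount": 1200.0},
    {"enrollee_id": "B2", "service_date": date(2011, 6, 1),  "amount": 450.0},
    {"enrollee_id": "C3", "service_date": date(2011, 1, 15), "amount": 300.0},
]
death_file = {"B2": date(2011, 4, 20)}  # enrollee_id -> recorded date of death

def flag_postmortem_claims(claims, death_file):
    """Flag claims with a service date after the enrollee's recorded death."""
    return [c for c in claims
            if c["enrollee_id"] in death_file
            and c["service_date"] > death_file[c["enrollee_id"]]]

flagged = flag_postmortem_claims(claims, death_file)
print(flagged)  # the $450 claim for B2, dated after the recorded death
```

The same join logic applies to the provider-side matching the report describes (deceased providers' identities used to bill), with provider identifiers in place of enrollee identifiers.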
We found that about 50 medical providers in the four states we examined had been excluded from federal health care programs, including Medicaid; these providers were excluded from these programs when they provided and billed for Medicaid services during fiscal year 2011. The selected states approved the claims at a cost of about $60,000. We found that the identities of over 50 deceased providers in the four states we examined were used to receive Medicaid payments. Our analysis matching Medicaid eligibility and claims data to the Social Security Administration’s full death file found these individuals were deceased before the Medicaid service was provided. The Medicaid benefits involved with these deceased providers totaled at least $240,000 for fiscal year 2011. We found nearly 26,600 providers with addresses that did not match any U.S. Postal Service records. These unknown addresses may have errors due to inaccurate data entry or differences in the ages of MMIS and U.S. Postal Service address-management tool data, making it difficult to determine whether these cases involve fraud through data matching alone. CMS has taken steps since 2011 to strengthen the Medicaid beneficiary and provider enrollment-screening controls in ways that may address the issues we identified, and we believe that additional CMS guidance could bolster those efforts. In 2013, CMS issued federal regulations, in response to PPACA, to establish a more rigorous approach to verify the information needed to determine Medicaid eligibility. Under these regulations, states are required to use electronic data maintained by the federal government to the extent that such information may be useful in verifying eligibility. CMS created a tool called the Data Services Hub, implemented in fiscal year 2014, to help verify some of the information used to determine eligibility for Medicaid and other health programs. 
States are to use the hub both to verify an individual's eligibility when they receive an application and to reverify eligibility on at least an annual basis thereafter, unless the state has an alternative mechanism approved by HHS. In addition, in February 2011, CMS and HHS's Office of Inspector General issued regulations establishing a new risk-based screening process for providers with enhanced verification measures, such as unscheduled or unannounced site visits and fingerprint-based criminal background checks. If properly implemented by CMS, the hub and the additional provider screening measures could help mitigate some of the potential improper payment issues that we identified. However, we identified gaps in state practices for identifying deceased enrollees, as well as state challenges in screening providers effectively and efficiently, and recommended that CMS provide guidance to states to better

1. identify enrollees who are deceased, and

2. screen providers by using automated information available through Medicare's enrollment database.

HHS concurred with our recommendations and stated it would work with states to determine additional approaches to better identify deceased enrollees, and that it would continue to educate states about the availability of provider information and how to use that information to help screen Medicaid providers more effectively and efficiently. We will continue to monitor HHS's efforts in this area.

Medicaid's federal-state partnership could be improved through a revised federal financing approach that better addresses variations in states' financing needs. First, automatically providing increased federal financial assistance to states affected by national economic downturns—through an increased FMAP—could help provide timely and targeted assistance that is more responsive to states' economic conditions.
Second, revisions to the current FMAP formula could more equitably allocate Medicaid funds to states by better accounting for their ability to fund Medicaid. These improvements could better align federal funding with each state's resources, demand for services, and costs; better facilitate state budget planning; and provide states with greater fiscal stability during times of economic stress.

Economic downturns can hamper states' ability to fund their Medicaid programs. During economic downturns, states' employment and tax revenues typically fall, while enrollment in the Medicaid program tends to increase as the number of individuals with incomes low enough to qualify for Medicaid coverage rises. We have reported, however, that each state can experience different economic circumstances—and thus different levels of change in Medicaid enrollment and state revenues during a downturn. Figures 17 and 18 show the percentage change in Medicaid enrollment and state tax revenue, respectively, by state. In response to the two most recent recessions, Congress acted to temporarily increase support to states by increasing the federal share of Medicaid funding provided by the FMAP formula. Following the 2001 recession, the Jobs and Growth Tax Relief Reconciliation Act of 2003 provided states $10 billion in temporary assistance through an increased FMAP. In response to the 2007 recession, the American Recovery and Reinvestment Act of 2009 (Recovery Act) provided states with $89 billion through a temporarily increased FMAP. Under the Recovery Act, the level of funding was intended both to help maintain state Medicaid programs (so that enrollees would be assured continuity of services) and to assist states with fiscal needs beyond Medicaid.
Our prior work, however, found that these efforts to provide states with temporary increases in the FMAP were not as responsive to states' economic conditions as they could have been. Improving the responsiveness of federal assistance to states during economic downturns would facilitate state budget planning, provide states with greater fiscal stability, and better align federal assistance with the magnitude of the economic downturn's effects on individual states. We have identified opportunities to improve the timing, amount, and duration of assistance provided, as detailed below.

Automatic and timely trigger for starting assistance. To be effective at stabilizing state funding of Medicaid programs, assistance should be provided close to the beginning of a downturn. An automatically activated, prearranged mechanism for triggering federal assistance could use readily available economic data to begin assistance rather than rely on legislative action at the time of a future national economic downturn.

Targeted assistance based on state needs. States' efforts to fund Medicaid during economic downturns face two main challenges: (1) financing increased enrollment, and (2) replacing lost revenue. We found that better targeting of assistance based on each state's level of need could help ensure that federal assistance is aligned with the magnitude of an economic downturn's effect on individual states.

Timely and tapered end of assistance. Determining when and how to end increased FMAP assistance to states is complicated. We found that more gradually reducing the percentage of increased FMAP provided to states could help mitigate the effects of a slower recovery. Such tapered assistance would avoid abrupt changes and allow states to plan their transitions back to greater reliance on their own revenues.

In prior work, we developed a prototype formula incorporating these features, with parameters that policymakers could adjust depending on circumstances, such as competing budget demands and other state fiscal needs beyond Medicaid. (See GAO-12-38.)
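An automatically activated trigger of the kind described above could, purely for illustration, key off changes in a published economic indicator. The unemployment-based rule, threshold, and lookback window below are hypothetical stand-ins for exposition; they are not the indicators or thresholds in GAO's prototype formula (GAO-12-38).

```python
# Hypothetical illustration of an automatic FMAP-assistance trigger.
# The indicator (monthly national unemployment rate) and the threshold are
# assumptions for this sketch, not GAO's prototype design.

def assistance_active(unemployment_history, threshold_rise=0.5, lookback=12):
    """Trigger assistance when the unemployment rate has risen by more than
    `threshold_rise` percentage points over the past `lookback` months of the
    monthly series."""
    if len(unemployment_history) <= lookback:
        return False  # not enough history to evaluate the trigger
    return unemployment_history[-1] - unemployment_history[-1 - lookback] > threshold_rise

# A stylized series: flat at 4.5%, then climbing to 6.0% over six months.
series = [4.5] * 12 + [4.6, 4.8, 5.1, 5.4, 5.7, 6.0]
print(assistance_active(series))  # True: a 1.5-point rise exceeds the 0.5 threshold
```

The point of a prearranged rule like this is that assistance begins as soon as the published data cross the threshold, rather than waiting on new legislation; ending assistance could use the same mechanism in reverse, with a taper rather than a cliff.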
We compared this prototype formula with assistance provided during the Recovery Act. Under our prototype formula, assistance would have begun in January 2008 rather than in October 2008, as was the case under the Recovery Act; the end of assistance would have been triggered in April 2011, and assistance would have been phased out by September 2011, rather than in June 2011 under the Recovery Act and its extension. Based on our work, we noted that Congress could consider enacting an FMAP formula that is targeted for variable state Medicaid needs and provides automatic, timely, and temporary increased FMAP assistance in response to national economic downturns. As of July 2015, Congress has not enacted such a formula. In commenting on drafts of our 2011 reports, HHS agreed with the analysis and goals of the reports, emphasized the importance of aligning changes to the FMAP formula with individual state circumstances, and offered several considerations to guide policy choices regarding appropriate thresholds for timing and targeting of increased FMAP funds. In prior work spanning more than three decades, we have emphasized that in federal-state programs such as Medicaid, funds should be allocated to states in a manner that is equitable from the perspective of both enrollees and taxpayers. To be equitable from the perspective of enrollees, and thereby allow states to provide a comparable level of services to each person in need, a funding allocation mechanism should take into account the demand for services in each state—which depends on both the number of people needing services and their level of need—and geographic cost differences among states. To be equitable from the perspective of taxpayers, an allocation mechanism should ensure that taxpayers in poorer states are not more heavily burdened than those in wealthier ones. 
To account for states’ relative wealth, a mechanism must take into account each state’s ability to finance its share of program costs from its own resources, which should account for all potentially taxable income, including personal income of state residents and corporate income. Our prior work has found that the current FMAP formula does not adequately address variation in the demand for services in each state, geographic cost differences, and state resources. The FMAP formula uses per capita income as the basis for calculating each state’s federal matching rate. However, per capita income is a poor proxy for the size of a state’s population in need of Medicaid services, as two states with similar per capita incomes can have substantially different numbers of Per capita income also does not include any low-income residents.measure of geographic differences in the costs of providing health care services, which can vary widely. Finally, although per capita income measures the income received by state residents—such as wages, rents, and interest income—it does not include other components of a state’s resources that affect its ability to finance Medicaid, such as corporate income produced within the state, but not received by state residents. In 2013, we identified multiple alternative data sources that could be used to develop measures of the demand for Medicaid services, geographic cost differences, and state resources. These measures could be combined in various ways to provide a basis for allocating Medicaid funds more equitably among states. (See table 2.) We have reported over the years on challenges facing the Medicaid program and concerns about the adequacy of federal oversight. As previously discussed, in 2003, we designated Medicaid as a high-risk program due to its size, growth, diversity of programs, and concerns about gaps in oversight. More than a decade later, those factors remain relevant for federal oversight. 
In addition, state Medicaid programs are changing rapidly. PPACA has led to unprecedented programmatic changes, and more are anticipated as states continue to pursue new options available under the law to expand eligibility and restructure payment and health care delivery systems. The effects of changes brought on by PPACA, as well as the aging of the U.S. population, will continue to emerge in the coming years and are likely to exacerbate the challenges that already exist in federal oversight and management of the Medicaid program. Other changes in states' health care delivery and payment approaches, as well as new technologies, will also continue to pose challenges to federal oversight and management. These changes have implications for enrollees and for program costs, and underscore the importance of ongoing attention to federal oversight efforts.

Emerging changes brought on by PPACA will transform states' enrollment processes, as well as increase enrollment and program spending. Oversight to monitor access and use of services will be critical.

Enrollment processes. PPACA required the establishment of a coordinated eligibility and enrollment process for Medicaid, CHIP, and the health insurance exchanges. To implement this process—referred to as the "no wrong door" policy—states were required to develop IT systems that allow for the exchange of data to ensure that applicants are enrolled in the program for which they are eligible, regardless of the program for which they applied. We found that some states struggled with meeting the requirement to transfer—send and receive—applications with the federally facilitated exchange.

Increased enrollment. Enrollment is expected to increase significantly, even in states that do not implement the expansion, as streamlined processes and publicity about the expansion encourage enrollment among previously eligible but unenrolled adults and children.
The sheer number of additional enrollees—about 10 million by 2020, according to Congressional Budget Office (CBO) estimates—may stretch health care resources and exacerbate challenges to ensuring access to care.

Increased spending. Over the next 5 years, Medicaid expenditures are expected to increase more rapidly than in the prior 10 years, rising from an estimated $529 billion in combined federal and state spending in 2015 to about $700 billion in 2020, due, in part, to the continuing implementation of PPACA. The federal share of expenditures, which has historically averaged about 57 percent, is projected to increase as well, to about 60 percent, largely because of the enhanced federal match required under PPACA for newly eligible enrollees. While expenditures grew at an average annual rate of 5.3 percent between 2005 and 2015, the CMS Office of the Actuary has projected that the rate of increase will rise to 5.8 percent between 2015 and 2020. (See CMS, 2014 Actuarial Report on the Financial Outlook for Medicaid.)

Improved data on Medicaid spending, including supplemental payments that states often make to institutional providers, would help to ensure the fiscal accountability and integrity of the program, facilitate efforts to manage program costs, and provide information needed for policy making. Lastly, improved federal program integrity efforts will be critical to ensuring the appropriate use of program funds. Continued increases in states' demonstration spending, changes in states' delivery systems and payment approaches, as well as the aging of the population and the introduction of new technologies also will continue to pose challenges to federal oversight.

Increased demonstration spending. Medicaid spending governed by the terms and conditions of Medicaid demonstrations, rather than traditional Medicaid state plan requirements, accounted for close to one-third of federal Medicaid spending in 2014—up from one-fourth of federal Medicaid spending in 2013 and one-fifth in 2011.
The trend among states to seek flexibilities under the demonstration authority has implications for enrollees’ access and program spending. For example, enrollees may lose protections—such as those to limit cost- sharing or to provide certain mandatory benefits—under the traditional Medicaid program. The federal government will need to oversee increasingly diverse Medicaid programs not subject to traditional Medicaid requirements. As of February 2015, HHS had approved demonstration proposals from two states—Arkansas and Iowa— allowing them to provide coverage to some or all of their expansion populations through premium assistance to purchase private health insurance on exchanges established under PPACA. Changes in states’ delivery systems. Growth of managed care and states’ exploration of new models of health care delivery systems, particularly for long-term services and supports, will further heighten the need for program oversight. Enrollment of Medicaid populations in managed care arrangements continues to grow, with attendant challenges for program oversight. Over the next 5 years, expenditures for capitation payments and premiums are projected to grow more rapidly than total Medicaid expenditures. We have found weaknesses in CMS and state oversight of managed care. The HHS Office of Inspector General has also documented weaknesses in state standards, as well as significant issues with the availability of providers, and called for CMS to work with states to improve oversight of managed care plans. Recent state efforts to explore new health care models have implications for federal oversight of enrollees’ care and program costs. In July 2012, CMS announced a major initiative to support state design and testing of innovative health care payment and service delivery models intended to enhance quality of care and lower costs for enrollees in Medicaid, CHIP, and Medicare, as well as other state residents. 
Beginning in 2017, states may embark on even more ambitious efforts to reshape their payment and delivery systems. The past two decades have seen a marked shift in where and how long-term care services are delivered to disabled and aged enrollees, with care increasingly being provided in home- and community-based settings rather than in institutions such as nursing homes. In fiscal year 2011, about 45 percent of long-term care spending was for home- and community-based services, up from 32 percent in 2002. As the population ages—and particularly as the number of people over age 85 increases—Medicaid expenditures on these services are predicted to grow.

New technology. New developments in technology, such as innovations in health care treatments and telemedicine, are likely to influence how state Medicaid programs deliver and pay for care—raising implications for federal oversight of access to care and costs. In 2008, CBO concluded from its review of the economic literature that roughly half of the increase in health care spending during the past several decades was associated with the expanded capabilities of medicine brought about by technological advances, including new drugs, devices, or services, as well as new clinical applications of existing technologies. The potential for new technologies to contribute significantly to long-term health care spending growth poses particular challenges for the Medicaid program. State Medicaid directors have highlighted as a critical concern the emergence of high-cost, cutting-edge pharmaceuticals, in light of the requirement that state Medicaid programs covering outpatient drugs must cover nearly all Food and Drug Administration-approved prescription drugs of manufacturers that participate in the Medicaid drug rebate program.

These changes underscore the importance of addressing problems we have identified in ensuring fiscal accountability, program integrity, and access.
For example, as additional states submit demonstration proposals—and as the demonstrations HHS has already approved come up for renewal—the concerns and recommendations that we have raised about HHS approving demonstrations without assurances that they will not increase federal expenditures are likely to persist or increase. The potential for sweeping changes in state Medicaid programs’ payment and service delivery systems has implications for enrollees’ access to and quality of care, and for program costs. Increasing enrollment in managed care arrangements may heighten concerns about access to care and program integrity within these arrangements. We have made recommendations to HHS that could help address concerns we have raised in these areas. Attention to Medicaid’s transformation and the key issues facing the program will be important to ensuring that Medicaid is both effective for the enrollees who rely on it and accountable to the taxpayers. GAO has multiple ongoing studies in these areas and will continue to monitor the Medicaid program for the Congress. We provided a draft of this report to HHS for review. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Katherine M. Iritani at (202) 512-7114 or iritanik@gao.gov or Carolyn L. Yocom at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
The following table lists Medicaid-related matters for congressional consideration GAO has published that are classified as open because Congress has either not taken or not completed steps to implement the matter. The matters are listed by key issue and report.

The following table lists selected Medicaid-related recommendations GAO has made to the Department of Health and Human Services that are classified as open because the agency has either not taken or not completed steps to implement the recommendation. The recommendations are listed by key issue and report. MACPAC did not report managed care information for Maine, Tennessee, or Vermont due to data issues.

In addition to the contacts named above, Robert Copeland, Assistant Director; Robin Burke; Nancy Fasciano; Sandra George; Drew Long; Jasleen Modi; Giao N. Nguyen; Vikki Porter; and Emily Wilson made key contributions to this report.

The following are selected GAO products pertinent to the key issues discussed in this report. Other products may be found at GAO's website at www.gao.gov.

Behavioral Health: Options for Low-Income Adults to Receive Treatment in Selected States. GAO-15-449. Washington, D.C.: June 19, 2015. Medicaid: Service Utilization Patterns for Beneficiaries in Managed Care. GAO-15-481. Washington, D.C.: May 29, 2015. Children's Health Insurance Program: Effects on Coverage and Access, and Considerations for Extending Funding. GAO-15-348. Washington, D.C.: February 27, 2015. Foster Children: Additional Federal Guidance Could Help States Better Plan for Oversight of Psychotropic Medications Administered by Managed-Care Organizations. GAO-14-362. Washington, D.C.: April 28, 2014. Children's Health Insurance: Information on Coverage of Services, Costs to Consumers, and Access to Care in CHIP and Other Sources of Insurance. GAO-14-40. Washington, D.C.: November 21, 2013. Dental Services: Information on Coverage, Payments, and Fee Variation. GAO-13-754.
Washington, D.C.: September 6, 2013. Children’s Mental Health: Concerns Remain about Appropriate Services for Children in Medicaid and Foster Care. GAO-13-15. Washington, D.C.: December 10, 2012. Medicaid: States Made Multiple Program Changes, and Beneficiaries Generally Reported Access Comparable to Private Insurance. GAO-13-55. Washington, D.C.: November 15, 2012. Foster Children: HHS Guidance Could Help States Improve Oversight of Psychotropic Prescriptions. GAO-12-201. Washington, D.C.: December 14, 2011. Medicaid and CHIP: Most Physicians Serve Covered Children but Have Difficulty Referring Them for Specialty Care. GAO-11-624. Washington, D.C.: June 30, 2011. Medicaid and CHIP: Reports for Monitoring Children’s Health Care Services Need Improvement. GAO-11-293R. Washington, D.C.: April 5, 2011. Medicaid and CHIP: Given the Association between Parent and Child Insurance Status, New Expansions May Benefit Families. GAO-11-264. Washington, D.C.: February 4, 2011. Oral Health: Efforts Under Way to Improve Children’s Access to Dental Services, but Sustained Attention Needed to Address Ongoing Concerns. GAO-11-96. Washington, D.C.: November 30, 2010. Medicaid: State and Federal Actions Have Been Taken to Improve Children’s Access to Dental Services, but Gaps Remain. GAO-09-723. Washington, D.C.: September 30, 2009. Medicaid Preventive Services: Concerted Efforts Needed to Ensure Beneficiaries Receive Services. GAO-09-578. Washington, D.C.: August 14, 2009. Medicaid: Extent of Dental Disease in Children Has Not Decreased, and Millions Are Estimated to Have Untreated Tooth Decay. GAO-08-1121. Washington, D.C.: September 23, 2008. Medicaid: Concerns Remain about Sufficiency of Data for Oversight of Children’s Dental Services. GAO-07-826T. Washington, D.C.: May 2, 2007. Medicaid Demonstrations: More Transparency and Accountability for Approved Spending Are Needed. GAO-15-715T. Washington, D.C.: June 24, 2015. 
Medicaid Demonstrations: Approval Criteria and Documentation Need to Show How Spending Furthers Medicaid Objectives. GAO-15-239. Washington, D.C.: April 13, 2015. Medicaid: CMS Oversight of Provider Payments Is Hampered by Limited Data and Unclear Policy. GAO-15-322. Washington, D.C.: April 10, 2015. Medicaid Financing: Questionnaire Data on States’ Methods for Financing Medicaid Payments from 2008 through 2012. GAO-15-227SP. Washington, D.C.: March 13, 2015, an e-supplement to GAO-14-627. Medicaid Demonstrations: HHS’s Approval Process for Arkansas’s Medicaid Expansion Waiver Raises Cost Concerns. GAO-14-689R. Washington, D.C.: August 8, 2014. Medicaid: Completed and Preliminary Work Indicate that Transparency around State Financing Methods and Payments to Providers Is Still Needed for Oversight. GAO-14-817T. Washington, D.C.: July 29, 2014. Medicaid Financing: States’ Increased Reliance on Funds from Health Care Providers and Local Governments Warrants Improved CMS Data Collection. GAO-14-627. Washington, D.C.: July 29, 2014. Medicaid Demonstration Waivers: Approval Process Raises Cost Concerns and Lacks Transparency. GAO-13-384. Washington, D.C.: June 25, 2013. Medicaid: More Transparency of and Accountability for Supplemental Payments Are Needed. GAO-13-48. Washington, D.C.: November 26, 2012. Medicaid: Data Sets Provide Inconsistent Picture of Expenditures. GAO-13-47. Washington, D.C.: October 29, 2012. Medicaid: States Reported Billions More in Supplemental Payments in Recent Years. GAO-12-694. Washington, D.C.: July 20, 2012. Medicaid: Ongoing Federal Oversight of Payments to Offset Uncompensated Hospital Care Costs Is Warranted. GAO-10-69. Washington, D.C.: November 20, 2009. Medicaid: CMS Needs More Information on the Billions of Dollars Spent on Supplemental Payments. GAO-08-614. Washington, D.C.: May 30, 2008. Medicaid Financing: Long-standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-650T. 
Washington, D.C.: April 3, 2008. Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns. GAO-08-87. Washington, D.C.: January 31, 2008. Medicaid Financing: Long-Standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-255T. Washington, D.C.: November 1, 2007. Medicaid Demonstration Waivers: Lack of Opportunity for Public Input during Federal Approval Process Still a Concern. GAO-07-694R. Washington, D.C.: July 24, 2007. Medicaid Financing: Federal Oversight Initiative Is Consistent with Medicaid Payment Principles but Needs Greater Transparency. GAO-07-214. Washington, D.C.: March 30, 2007. Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. Washington, D.C.: July 12, 2002. Medicaid: Additional Actions Needed to Help Improve Provider and Beneficiary Fraud Controls. GAO-15-313. Washington, D.C.: May 14, 2015. Medicaid Information Technology: CMS Supports Use of Program Integrity Systems but Should Require States to Determine Effectiveness. GAO-15-207. Washington, D.C.: January 30, 2015. Medicaid: Additional Federal Action Needed to Further Improve Third-Party Liability Efforts. GAO-15-208. Washington, D.C.: January 28, 2015. Medicaid Program Integrity: Increased Oversight Needed to Ensure Integrity of Growing Managed Care Expenditures. GAO-14-341. Washington, D.C.: May 19, 2014. Medicaid: CMS Should Ensure That States Clearly Report Overpayments. GAO-14-25. Washington, D.C.: December 6, 2013. Medicaid: Enhancements Needed for Improper Payments Reporting and Related Corrective Action Monitoring. GAO-13-229. Washington, D.C.: March 29, 2013. Medicaid Integrity Program: CMS Should Take Steps to Eliminate Duplication and Improve Efficiency. GAO-13-50. Washington, D.C.: November 13, 2012. National Medicaid Audit Program: CMS Should Improve Reporting and Focus on Audit Collaboration with States. GAO-12-814T.
Washington, D.C.: June 14, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Medicaid: Federal Oversight of Payments and Program Integrity Needs Improvement. GAO-12-674T. Washington, D.C.: April 25, 2012. Medicaid Program Integrity: Expanded Federal Role Presents Challenges to and Opportunities for Assisting States. GAO-12-288T. Washington, D.C.: December 7, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Expand Efforts to Support Program Integrity Initiatives. GAO-12-292T. Washington, D.C.: December 7, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Medicaid Managed Care: CMS's Oversight of States' Rate Setting Needs Improvement. GAO-10-810. Washington, D.C.: August 4, 2010. Medicaid: Alternative Measures Could Be Used to Allocate Funding More Equitably. GAO-13-434. Washington, D.C.: May 10, 2013. Medicaid: Prototype Formula Would Provide Automatic, Targeted Assistance to States during Economic Downturns. GAO-12-38. Washington, D.C.: November 10, 2011. Medicaid: Improving Responsiveness of Federal Assistance to States during Economic Downturns. GAO-11-395. Washington, D.C.: March 31, 2011. State and Local Governments: Knowledge of Past Recessions Can Inform Future Federal Fiscal Assistance. GAO-11-401. Washington, D.C.: March 31, 2011. Recovery Act: Increased Medicaid Funds Aided Enrollment Growth, and Most States Reported Taking Steps to Sustain Their Programs. GAO-11-58. Washington, D.C.: October 8, 2010. Medicaid: Strategies to Help States Address Increased Expenditures during Economic Downturns. GAO-07-97.
Washington, D.C.: October 18, 2006. Federal Assistance: Temporary State Fiscal Relief. GAO-04-736R. Washington, D.C.: May 7, 2004. Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003. Medicaid: Overview of Key Issues Facing the Program. GAO-15-746T. Washington, D.C.: July 8, 2015. Medicaid: A Small Share of Enrollees Consistently Accounted for a Large Share of Expenditures. GAO-15-460. Washington, D.C.: May 8, 2015. 2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Medicaid: Federal Funds Aid Eligibility IT System Changes, but Implementation Challenges Persist. GAO-15-169. Washington, D.C.: December 12, 2014. Medicaid Payment: Comparisons of Selected Services under Fee-for-Service, Managed Care, and Private Insurance. GAO-14-533. Washington, D.C.: July 15, 2014. Prescription Drugs: Comparison of DOD, Medicaid, and Medicare Part D Retail Reimbursement Prices. GAO-14-578. Washington, D.C.: June 30, 2014. Medicaid: Assessment of Variation among States in Per-Enrollee Spending. GAO-14-456. Washington, D.C.: June 16, 2014. Medicaid: Demographics and Service Usage of Certain High-Expenditure Beneficiaries. GAO-14-176. Washington, D.C.: February 19, 2014. Medicaid: Use of Claims Data for Analysis of Provider Payment Rates. GAO-14-56R. Washington, D.C.: January 6, 2014. Medicaid Managed Care: Use of Limited Benefit Plans to Provide Mental Health Services and Efforts to Coordinate Care. GAO-13-780. Washington, D.C.: September 30, 2013. Medicaid: States' Use of Managed Care. GAO-12-872R. Washington, D.C.: August 17, 2012. Medicaid: States' Plans to Pursue New and Revised Options for Home- and Community-Based Services. GAO-12-649. Washington, D.C.: June 13, 2012.
Medicaid and CHIP: Enrollment, Benefits, Expenditures, and Other Characteristics of State Premium Assistance Programs. GAO-10-258R. Washington, D.C.: January 19, 2010.
The Medicaid program marks its 50th anniversary on July 30, 2015. The joint federal-state program has grown to be one of the largest sources of health care coverage and financing for a diverse low-income and medically needy population. Medicaid is undergoing transformative changes, in part due to PPACA, which expanded the program by allowing states to opt to cover low-income adults in addition to individuals in historic categories, such as children, pregnant women, older adults, and individuals with disabilities.

GAO has a large body of work on challenges facing Medicaid and gaps in federal oversight. This report describes (1) key issues that face the Medicaid program based on this work, and (2) program and other changes with implications for federal oversight. GAO reviewed its reports on Medicaid issued from January 2005 through July 2015; reviewed documentation from the Centers for Medicare & Medicaid Services (CMS), the HHS agency that oversees Medicaid; and interviewed CMS officials.

GAO identified four key issues facing the Medicaid program, based on prior work.

Access to care: Medicaid enrollees report access to care that is generally comparable to that of privately insured individuals and better than that of uninsured individuals, but they may have greater health care needs and greater difficulty accessing specialty and dental care.

Transparency and oversight: The lack of complete and reliable data on states' spending—including provider payments and state financing of the non-federal share of Medicaid—hinders federal oversight, and GAO has recommended steps to improve the data on and scrutiny of states' spending. Also, improvements in the Department of Health and Human Services' (HHS) criteria, policy, and process for approving states' spending on demonstrations—state projects that may test new ways to deliver or pay for care—are needed to potentially prevent billions of dollars in unnecessary federal spending, as GAO previously recommended.
Program integrity: The program's size and diversity make it vulnerable to improper payments. Improper payments, such as payments for non-covered services, totaled an estimated $17.5 billion in fiscal year 2014, according to HHS. An effective federal-state partnership is key to ensuring the most appropriate use of funds by, among other things, (1) setting appropriate payment rates for managed care organizations, and (2) ensuring only eligible individuals and providers participate in Medicaid.

Federal financing approach: Automatic federal assistance during economic downturns and more equitable federal allocations of Medicaid funds to states (by better accounting for states' ability to fund Medicaid) could better align federal funding with states' needs, offering states greater fiscal stability. GAO has suggested that Congress could consider enacting a funding formula that provides automatic, timely, and temporary increased assistance in response to national economic downturns.

Medicaid's ongoing transformation—due to the Patient Protection and Affordable Care Act (PPACA), the aging of the U.S. population, and other changes to state programs—highlights the importance of federal oversight, given the implications for enrollees and program costs. Attention to Medicaid's transformation and the key issues facing the program will be important to ensuring that Medicaid is both effective for the enrollees who rely on it and accountable to the taxpayers. GAO has multiple ongoing studies in these areas and will continue to monitor the Medicaid program for the Congress.

GAO has made over 80 recommendations regarding Medicaid, some of which HHS has implemented. GAO has highlighted 24 key recommendations that have not been implemented. HHS agreed with and is acting on some and did not agree with others. GAO continues to believe that all of its recommendations have merit and should be implemented.
HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
The Office of the Architect of the Capitol (AOC) is responsible for maintaining and caring for the buildings and grounds primarily located in the Capitol Hill complex, such as the Capitol building, the House and Senate office buildings, the Library of Congress, and the Supreme Court. AOC is also responsible for making all necessary capital improvements within the complex, including major renovations and new construction. The historic nature and high-profile use of many of these buildings create a complex environment in which to carry out this mission. For example, the U.S. Capitol building is, at once, a national capitol, museum, office building, ceremonial site, meeting center, media base, and tourist attraction. In making structural or other physical changes, AOC must consider the historical significance and the effect on each of these many uses. Further, AOC must perform its duties in an environment that requires balancing the divergent needs of congressional leadership, committees, individual members of the Congress, congressional staffs, and the visiting public. The challenges of operating in this environment were compounded by the events of September 11, 2001, and their aftermath, including the October 2001 discovery of anthrax bacteria on Capitol Hill, and the resulting need for increased security and safety.

Given the important role AOC plays in supporting the effective functioning of the Congress and neighboring institutions, the Legislative Branch Appropriations Act, 2002, mandated this review, and Senate and House Appropriations Committee reports directed our review toward certain management shortcomings at AOC that needed attention, with a focus on recommending solutions in strategic planning, organizational alignment, strategic human capital management, and financial management.
The committees also asked us to assess information technology and three key program areas—worker safety, recycling, and project management—both to illustrate the management issues we are addressing and to help AOC identify best practices and areas for improvement in these important programs. This report also discusses actions taken to date by AOC and our recommendations for further enhancements. In April 2002, at the request of the Subcommittee on Legislative Branch, Senate Committee on Appropriations, we submitted a statement for the record for AOC's appropriations hearing that outlined our preliminary observations on what AOC needed to do to improve its management. This report completes our review. The act also requires AOC to develop a management improvement plan to address our recommendations.

We recognize that this report outlines a large and complex agenda for achieving organizational transformation at AOC, and that AOC cannot tackle all these changes at once. The experiences of successful major change management initiatives in large private and public sector organizations suggest that such initiatives can often take at least 5 to 7 years until they are fully implemented and the related cultures are transformed in a sustainable manner. Nonetheless, this agenda provides the broad landscape of issues confronting AOC and is therefore important to crafting a comprehensive and integrated approach to addressing AOC's challenges and setting appropriate priorities, even though by necessity it will have to be phased in over time. By drawing on the full potential of its management team, AOC can begin to take immediate steps on a number of actions, although we recognize that AOC will be able to implement some of these actions more quickly than others.

In fiscal year 2002, AOC operated with a budget of $426 million, which included $237 million for capital expenditures associated with the construction or major renovation of facilities within the Capitol Hill complex.
Organizationally, AOC has a centralized staff that performs administrative functions; what AOC refers to as "jurisdictions" handle their own day-to-day operations. These jurisdictions include the Senate Office Buildings, the House Office Buildings, the U.S. Capitol Buildings, the Library of Congress Buildings and Grounds, the Supreme Court Buildings and Grounds, the Capitol Grounds, the Capitol Power Plant, and the U.S. Botanic Garden. There are over 2,300 employees in AOC; nearly 1 out of every 3 employees is a member of a union.

New requirements to meet long-standing labor and safety laws have added to the complexity of AOC operations. For example, the Congressional Accountability Act of 1995 (CAA) applied 11 civil rights, labor, and workplace laws to AOC as well as other legislative branch agencies. In particular, meeting the obligations of labor laws, such as the Fair Labor Standards Act of 1938 and the Federal Service Labor-Management Relations Statute, while overcoming a history of poor labor-management relations has been a struggle. CAA also requires AOC to meet standards set by the Occupational Safety and Health Act of 1970, which applied new life and fire safety codes, as well as other building codes, to the agency. CAA established the Office of Compliance (OOC) to enforce the provisions of the act through inspections, investigations, and prosecution of potential violations. In addition, OOC provides education to employees and employing offices, and administers dispute resolution procedures if violations are found.

AOC has demonstrated a commitment to change through the management improvements it has planned and under way.
For example, consistent with the preliminary observations we provided in our April statement, AOC has recently
- commenced a new strategic planning effort,
- drafted congressional protocols patterned after our protocols,
- conducted client surveys in the Capitol, House, Senate, and Library of Congress jurisdictions,
- implemented a senior executive performance evaluation system,
- improved budget formulation and execution processes,
- begun preparations for producing auditable financial statements,
- begun drafting a policy to establish an agencywide management approach,
- drafted a workplace safety and health master plan,
- consulted with experts on how to structure its request for proposals for developing a long-term master plan for the Capitol Hill complex, and
- improved recycling program coordination and client outreach.

The Legislative Branch Appropriations Act, 2002, directed us to conduct a comprehensive management study of AOC's operations. Under this mandate, we address three objectives: (1) What improvements in strategic planning, organizational alignment, and strategic human capital management would help AOC better achieve its mission and accomplish its strategic goals? (2) What actions can AOC take to improve its overall management infrastructure in other key functional areas, such as financial management and information technology management, to improve its performance and better accomplish its goals? (3) What specific improvements can AOC make in selected program areas, including worker safety, project management, and recycling, vital to achieving its mission?

To address these objectives, we have been working constructively with AOC managers to understand their complex operating environment and the long-standing challenges they must address. In addition to the standard audit methods described below, as part of our constructive engagement, we provided AOC briefings and GAO reports on best practices in the areas we reviewed.
For example, at AOC’s request, GAO officials provided briefings on our own approach to strategic planning and establishing congressional protocols along with copies of our strategic planning and protocol documents. In addition, we provided GAO reports on areas such as strategic human capital management and world-class financial management and other guidance on GAO’s human capital policies and procedures. Finally, upon request we provided details of our focus group methodology discussed below to assist AOC in replicating our approach in AOC jurisdictions we did not cover. For each of the management functions and the worker safety and health, recycling, and project management programs, we reviewed AOC’s legislative authority and internal AOC documents, including selected AOC policies and procedures, internal and consultant reports on AOC management issues, reports by the Inspector General and GAO, and other reports on best practices. To obtain management’s perspective on the objectives, we interviewed key senior AOC officials, including the Architect; the Chief of Staff; the Assistant Architect; the Chief Financial Officer; the General Counsel; the Deputy Chief of Staff; the Director of Safety, Fire, and Environmental Programs; the Director of the Office of Labor Relations; and the Acting Chief of the Office of Design and Construction. We also interviewed AOC officials at the next level of management responsible for strategic planning, human resources, information technology, budget, accounting, project management, architecture, engineering, construction, and recycling. We also spoke to senior AOC managers and toured facilities in the following AOC jurisdictions: U.S. Capitol Building, House Office Building, Senate Office Buildings, Library Buildings and Grounds, Supreme Court, Capitol Power Plant, and the U.S. Botanic Garden. We interviewed the Inspector General to discuss the work his office had done on the management areas we reviewed. 
In addition to formal interviews, AOC allowed us to attend as observers a number of key internal meetings, including two budget review meetings on budget formulation and execution progress and issues for two jurisdictions, three quarterly capital project review meetings to discuss the status of AOC projects, an August 2002 National Academy of Sciences workshop to discuss Capitol Hill complex-wide master planning efforts, and a June 2002 workshop by DuPont Safety Resources on strategies for safety excellence.

To obtain additional perspectives on the areas examined as part of our review, and as an initial effort to support AOC's planned efforts to begin to routinely obtain employee feedback, we used focus groups to gather employee and supervisor perceptions, opinions, and attitudes about working at AOC. For our focus groups at AOC, we were interested in obtaining (1) employees' views of what aspects of working at AOC were going well or needed improvement, (2) whether employees had the resources needed to perform their jobs, and (3) employees' perspectives on AOC's worker safety program. We contracted with the firm Booz Allen Hamilton to conduct the focus groups and summarize and analyze the results. We conducted 13 of these focus groups with employees randomly selected from the House and Senate Office Building jurisdictions, Capitol Power Plant, Senate Restaurants, and the Construction Management Division. We selected employees from these parts of AOC in accord with our specific review areas of worker safety and project management and also because they contained some of the largest employee populations. The other two focus groups consisted of randomly selected supervisors from the House and Senate jurisdictions. In all, we invited 200 employees to attend 15 focus groups, and 127 employees participated.

To obtain a better understanding of project management at AOC, we also conducted a focus group with full-time AOC project managers.
For the focus group, we asked what is working well at AOC in project management and where there might be areas for improvement. We also discussed (1) the project management process at AOC, (2) the project management environment, and (3) resources and tools used in performing project management duties at AOC. We invited 14 project managers, and 8 attended. A more detailed discussion of our focus group objectives, scope, and methodology, including a list of our focus group questions, is contained in appendix I.

To further understand how project management works at AOC, we conducted two in-depth case studies of projects currently under way—the relocation of the Senate Recording Studio and the modernization of the coal handling system at the Capitol Power Plant. We selected these case studies using the following criteria: both were drawn from AOC's "hot"—or high-priority—projects; one was a medium project and one was a large project; and one had a project manager from the central Assistant Architect's office and the other from a jurisdiction. In addition, both projects were on a critical path to the completion of other high-priority AOC projects. Our methodology entailed reviewing relevant project documents as well as interviewing key internal and external stakeholders for the projects.

On November 20, 2002, we provided to the Architect of the Capitol a draft of this report for comment. We received written comments from the Architect. The Architect's comments are reprinted in appendix II. AOC also provided technical comments that were incorporated where appropriate.
In his written comments, the Architect stated that he is “dedicated to preserving and enhancing the national treasures entrusted to my agency’s care, and to providing high quality service to the Congress and our other clients.” He further stated “the GAO testimony provided in April 2002 and our discussions with GAO regarding the report resulted in our advancing improvement efforts at the AOC.” The Architect generally agreed with our findings, conclusions, and recommendations and indicated that AOC is developing an implementation plan to adopt recommended management changes and that three themes—strategic planning, communications, and performance management—will be the primary focus of its immediate efforts. The Architect disagreed with our statement that AOC’s 5-year Safety Management Plan was drafted independent of its broader strategic planning effort. Although we believe that this statement was true at the time of our review, AOC has subsequently made efforts to improve the alignment between its draft strategic and worker safety plans. Therefore, we deleted this statement. We performed our work in Washington, D.C., from November 2001 through September 2002 in accordance with generally accepted government auditing standards. Major contributors to this report are listed in appendix III. The Office of the Architect of the Capitol (AOC) recognizes that because of the nature of the challenges and demands it faces, change will not come quickly or easily. AOC therefore must ensure that it has the policies, procedures, and people in place to effectively implement the needed changes. That is, to serve the Congress, central AOC management needs the capability to define goals, set priorities, ensure follow-through, monitor progress, and establish accountability. The themes we discuss in this chapter focus on building the capability to lead and execute organizational transformation. 
Therefore, as a first priority, AOC needs to establish a management and accountability framework by, among other things, demonstrating top leadership commitment to organizational transformation; involving key congressional and other stakeholders in developing its strategic plan; using its strategic plan as the foundation for aligning its activities, core processes, and resources to support mission-related outcomes; establishing a communications strategy to foster change, create shared expectations, and build involvement; developing annual goals and a system for measuring performance; and strategically managing its human capital to drive transformation and to support the accomplishment of agency goals. Across the federal government, fundamental questions are being asked about what government does; how it does it; and in some cases, who should do the government’s business. The answers to these questions are driving agencies to transform their organizational cultures. This organizational transformation entails shifts from hierarchical structures to flatter, more horizontal ones; from an inward focus to an external (customer and stakeholder) focus; from micro-management to employee empowerment; from reactive behavior to proactive approaches; from avoiding new technologies to embracing and leveraging them; from hoarding knowledge to sharing it; from avoiding risk to managing risk; and from protecting turf to forming partnerships. AOC confronts many of these same issues. For example, to serve its clients, AOC is organized along jurisdictional lines—stovepipes that are not fully matrixed. In this environment, AOC faces the challenge of how best to marshal its jurisdiction-based resources to address the strategic planning, performance management, human capital, project management, and other functional issues that cut across the organization. 
AOC also faces the challenge of how to shift from reacting to problems as they arise to getting in front of the problems to address root causes, while still responding to the day-to-day service needs of its clients. Change is always risky, but continuing to address problems with only short-term tactical solutions can be even riskier—AOC needs to develop the capacity to identify the risks to achieving its goals and manage them before crises occur. Making such fundamental changes in AOC’s culture will require a long-term, concerted effort. The experiences of successful major change management initiatives in large private and public sector organizations suggest that such initiatives can often take at least 5 to 7 years until they are fully implemented and the related cultures are transformed in a sustainable manner. As a result, it is essential to establish action-oriented implementation goals over the long term and a time line with milestone dates to track the organization’s progress towards achieving those implementation goals. The nature and scope of the changes require the sustained and inspired commitment of the top leadership. Top leadership attention is essential to overcome organizations’ natural resistance to change, marshal the resources needed to implement change, and build and maintain the organizationwide commitment to new ways of doing business. On September 9, 2002, the Comptroller General convened a roundtable of executive branch leaders and management experts to discuss the Chief Operating Officer concept and how it might apply within selected federal departments and agencies as one leadership strategy to address certain systemic federal governance challenges. There was general agreement in the roundtable on a number of overall themes concerning the need for agencies to do the following: Elevate attention on management issues and organizational transformation. 
The nature and scope of the changes needed in many agencies require the sustained and inspired commitment of the top political and career leadership. Integrate various key management functions and transformation responsibilities. While officials with management responsibilities often have successfully worked together, there needs to be a single point within agencies with the perspective and responsibility—as well as authority—to ensure the successful implementation of functional management and, if appropriate, transformation efforts. Institutionalize accountability for addressing management issues and leading transformation. The management weaknesses in some agencies are deeply entrenched and long-standing and will take years of sustained attention and continuity to resolve. In addition, making fundamental changes in agencies’ cultures will require a long-term effort. In our April 2002 statement, we noted that we were exploring options to strengthen AOC’s executive decision-making capacity and accountability, including creating a Chief Operating Officer (COO) position, which could be responsible for major long-term management and cultural transformation and stewardship responsibilities within AOC. On July 25, 2002, the Senate passed S.2720, the Legislative Branch Appropriations Act, 2003, in which it established a Deputy Architect of the Capitol/COO. This official was to be responsible for the overall direction, operation, and management of AOC. In addition to developing and implementing a long-term strategic plan, including a comprehensive mission statement and an annual performance plan, the bill requires that the Deputy Architect be responsible for proposing organizational changes and new positions needed to carry out AOC’s mission and strategic and annual performance goals. 
Regardless of whether the Congress decides to pursue a COO position for AOC, concerted efforts will be needed to elevate, integrate, and institutionalize responsibility for transformation at AOC. In our prior work, we have concluded that for strategic planning to be done well, organizations must involve their stakeholders and align their activities, core processes, and resources to support mission-related outcomes. We found that leading results-oriented organizations consistently strive to ensure that their day-to-day activities support their organizational missions and move them closer to accomplishing their strategic goals. In practice, these organizations see the production of a strategic plan—that is, a particular document issued on a particular day— as one of the least important parts of the planning process. This is because they believe strategic planning is not a static or occasional event. It is, instead, a dynamic and inclusive process. If done well, strategic planning is continuous and provides the basis for everything the organization does each day. Therefore, it is important for an organization to go through the strategic planning process first, and then align the organization to accomplish the objectives of that plan. Figure 1 shows how an agency’s strategic plan serves as the foundation for other strategic management initiatives, such as organizational realignment; performance planning, management, and reporting; and improvements to the capacity of the organization to achieve its goals. Since 1997, AOC and a number of its subsidiary offices and jurisdictions have attempted to implement strategic planning processes. In 1997, the Architect led the first effort to produce an AOC-wide strategic plan that laid out AOC’s mission, vision, core values, strategic priorities, and goals and objectives. 
Similarly, a number of business units within AOC, such as the Human Resources Management Division, the Office of Inspector General, and the House Office Buildings jurisdiction have developed their own strategic plans, and the Capitol Buildings jurisdiction is developing a new master plan for the Capitol, but these plans do not flow directly from, and therefore are not necessarily consistent with, an AOC-wide plan. According to AOC officials, turnover in key planning staff and inability to reach agreement on how to measure performance led AOC management to discontinue the AOC-wide strategic planning effort. Subsequently, in 2001 AOC shifted to a scaled-back strategic planning approach that focused on tasks to be completed in a number of key priority areas: (1) develop a process and establish realistic goals and priorities, (2) improve employee support by, for example, addressing space and equipment needs and improving communication about where the organization is going, (3) improve safety, (4) improve project delivery, and (5) focus on quality assurance. In our April 2002 statement, we stated that AOC needed to refocus and integrate its strategic planning efforts to identify and implement mission-critical goals for key results. Consistent with the preliminary observations in our April 2002 statement, AOC renewed its organizationwide strategic planning process. AOC formed a task force of senior managers to develop a “straw” strategic plan that outlines AOC’s mission; vision; core values; and long-term, mission-critical goals for fiscal years 2003 through 2007. When completed, AOC’s strategic plan should provide the starting point and serve as a unifying framework for AOC’s various business unit and jurisdictional planning efforts. The plan will also position AOC to answer questions such as what fundamental results does AOC want to achieve, what are its long-term goals, and what strategies will it employ to achieve those goals. 
Successful organizations we studied ensure that their strategic planning fully considers the interests and expectations of their stakeholders. Among the stakeholders of AOC are the appropriations and oversight committees and individual members of the Congress and their staffs; the management and staff of the Supreme Court, the Library of Congress, and the Congressional Budget Office; AOC employees; and, of course, the American public. AOC strategic planning efforts have not yet involved such outreach. To date, AOC’s task force of senior managers has developed a straw 5-year strategic plan that outlines AOC’s mission, vision, core values, and high-level goals and objectives for the four strategic focus areas it has identified: strategic management and business initiatives, human capital, facilities management, and project management. Consistent with our constructive engagement with AOC, we have provided several best practice briefings to the agency’s leadership as requested. A senior GAO executive in GAO’s Office of External Liaison briefed the Architect of the Capitol and other AOC senior managers on October 8, 2002, on our continuing process to update and revise our strategic plan. The briefing emphasized the need for continual stakeholder involvement. As a result, according to AOC, it recently defined its key stakeholders and a methodology for obtaining their feedback on the strategic plan. In moving forward with its strategic planning efforts, it will be critical that AOC fully engage its stakeholders and obtain their buy-in to provide a strong foundation for any organizational or operating changes that may be needed to implement the plan. In contrast to previous strategic planning initiatives, AOC needs to move beyond a focus on actions to be completed quickly to a broader focus on the mission-critical, long-term goals needed to serve the Congress. 
Thus, stakeholder involvement will be especially important for AOC to help it ensure that its efforts and resources are targeted at the highest priorities. Just as important, involving stakeholders in strategic planning efforts can help create a basic understanding among the stakeholders of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. An effective communications strategy is a key success factor for organizations undergoing transformation. In a September 24, 2002, forum convened by the Comptroller General on mergers and transformation issues, there was consensus among the participants that communication is essential to organizational transformation. As we discussed in our April 2002 statement, for successful implementation of strategic planning and change management, AOC must develop a comprehensive communications strategy for its internal and external customers. The Architect of the Capitol agrees that improving communications is one of his top priorities. As AOC continues to develop its strategic plan, it should consider how it can build such a communications strategy to help to achieve the organization’s mission. It is also important for AOC to assess ways that it can measure the success of this strategy. AOC continues to strengthen its internal communications by broadening participation in a series of regular meetings among its senior managers for decision making and routine sharing of information. For example, AOC has expanded participation in its management council meetings (biweekly meetings of AOC’s senior managers to address agency business issues and priorities) to include jurisdictional superintendents and office directors. 
In our April 2002 statement, we noted that AOC could strengthen its internal communications by developing a communications strategy that would help AOC’s line employees understand the connection between what they do on a day-to-day basis and AOC’s goals and expectations, as well as seek employee feedback and develop goals for improvement. We further stated that one way of implementing such a strategy is to conduct routine employee feedback surveys and/or focus groups. In addition, we continue to believe that AOC could benefit from knowledge sharing to encourage and reward employees who share and implement best practices across the various jurisdictions, teams, and projects. The need for an organizationwide communications strategy is borne out by the results of the focus groups that we conducted with AOC employees and supervisors from June through July 2002. When we analyzed the results of the focus groups, several themes became apparent. One of the themes cited by focus group participants involved supervisory communications and employee relations—specifically, that communication from supervisors to employees is insufficient. AOC plans to follow up on our efforts by seeking employee feedback through focus groups and surveys. In a May 23, 2002, memorandum from the Architect to AOC’s employees announcing the focus groups we conducted, the Architect stated that AOC planned to gather the views of employees from the jurisdictions that we did not cover. Moreover, in its draft strategic plan, AOC noted that employee surveys are one strategy it plans to use to help achieve the human capital strategic goal of attracting, developing, and retaining diverse, satisfied, and highly motivated employees. AOC must continue to improve its external communications and outreach by (1) further developing congressional protocols, (2) improving its accountability reporting, and (3) continuing to measure customer satisfaction with its services organizationwide. 
In our April 2002 statement, we encouraged AOC to consider developing congressional protocols, which would help ensure that AOC deals with its congressional customers using clearly defined, consistently applied, and transparent policies and procedures. After working closely with the Congress and after careful pilot testing, we implemented congressional protocols in 1999. In response to our preliminary observations concerning the need for such protocols at AOC, on June 17, 2002, GAO’s Director of Congressional Relations and her staff briefed the Architect of the Capitol and AOC’s senior managers on lessons learned from GAO’s development of congressional protocols. They shared key lessons and success factors from our experiences in developing the protocols—that it is a time-consuming process that involves (1) the personal commitment and direction from the agency head, (2) senior management participation and buy-in, and (3) continuous outreach to and feedback from external stakeholders. As a result of our preliminary observations and our best practices briefing, AOC drafted an initial set of congressional protocols modeled after our congressional protocols. AOC noted that these protocols need to be finalized and distributed. In doing so, and consistent with the approach for AOC’s strategic plan, AOC needs to continually involve its stakeholders in developing these protocols. Although AOC is not required to comply with the 1993 Government Performance and Results Act (GPRA) because it is a legislative branch agency, we believe that AOC could adopt the reporting elements of GPRA to strengthen accountability and transparency by annually reporting program performance and financial information. For example, although GAO is a legislative branch agency, since fiscal year 1999, we have annually produced performance and accountability reports as well as our future fiscal year performance plan. 
Such results-oriented accountability reporting would help AOC communicate what it has accomplished, as well as its plans for continued progress to its external stakeholders. In tandem with AOC’s efforts to gather internal feedback from its employees, we noted in April 2002 that AOC’s communications strategy should also include tools for gauging customer satisfaction with its services. Customer feedback is an expectation for AOC’s senior managers and conducting client surveys is one proposed method in AOC’s draft strategic plan to achieve the strategic objective related to facilities management. In June 2002, AOC made a concerted effort to gather the views of some of its clients through a building services customer satisfaction survey for the Senate, House, Capitol building, and Library of Congress jurisdictions, which it plans to conduct annually. The Architect of the Capitol indicated to the survey participants that he will use the results of the survey to initiate service improvements based on the priorities they identify. AOC surveyed a total of 1,883 congressional staff members and received 275 responses. The results of the survey were shared with the jurisdictions’ superintendents. AOC plans to report the results to the congressional leadership and members of the Congress and to the Library of Congress. In response, the jurisdictional superintendents are developing “action plans” to address areas of concern that were raised in the surveys. Continued AOC efforts to routinely measure customer satisfaction AOC- wide with both its congressional customers as well as other customers, such as visitors to the Capitol Hill complex, will help AOC identify its service quality strengths, performance gaps, and improvement opportunities. Another key action AOC needs to take is developing annual performance goals that provide a connection between the long-term strategic goals in the strategic plan and the day-to-day activities of managers and staff members. 
Measuring performance allows an organization to track the progress it is making toward its goals, gives managers crucial information on which to base their organizational and management decisions, and creates powerful incentives to influence organizational and individual behavior. AOC’s draft strategic plan for 2002 through 2007 describes a number of strategic objectives and outcomes for each of its four focus areas. For example, under Facilities Management, AOC has as a strategic objective to “provide safe, healthy, secure, and clean facilities to our clients.” One of the outcomes described for this focus area is “satisfied visitors and occupants.” The draft plan also lists a performance goal methodology. In the case of Facilities Management, the methodology is “client surveys,” as we discussed above. According to the draft plan, AOC’s strategic plan is to be supplemented by more detailed functional plans that are developed along the same planning time line. These plans are to contain the tactical-level actions, performance targets, and milestone dates necessary to carry out agency-level strategies. The draft plan states that AOC will use both quantitative and qualitative performance goals and measures to demonstrate progress toward its strategic goals and objectives. As AOC moves forward in developing its performance goals and measures, it should consider the practices of leading organizations we have studied that were successful in measuring their performance. Such organizations generally applied two practices. First, they developed measures that were (1) tied to program goals and demonstrated the degree to which the desired results were achieved, (2) limited to the vital few that were considered essential to producing data for decision making, (3) responsive to multiple priorities, and (4) responsibility linked to establish accountability for results. 
Second, the agencies recognized the cost and effort involved in gathering and analyzing data and made sure that the data they did collect were sufficiently complete, accurate, and consistent to be useful in decision making. Developing measures that respond to multiple priorities is of particular importance for programs operating in dynamic environments where mission requirements must be carefully balanced. This is the case for AOC, where the role of protecting and preserving the historic facilities under its control may occasionally conflict with its role of providing maintenance and renovation services to occupants who use the facilities to conduct congressional business. For example, according to AOC officials, following elections, new members of the Congress may ask AOC to modify office suites containing historic architectural features. In those cases, AOC must balance the members’ needs for functional office design with its responsibility for protecting the architectural integrity of the rooms. Consequently, AOC, like other organizations, must weigh its mission requirements against its priorities. AOC could better gauge its success by first employing a balanced set of measures that encompasses its diverse responsibilities and requirements, such as maintaining historic facilities and satisfying customers, and then benchmarking its results both internally, across its jurisdictions, and against other leading organizations with comparable facility management operations. Once AOC has reached agreement with its stakeholders on its strategic plan, AOC should revisit both its senior executive and employee performance management systems to strengthen individual accountability to organizational goals and performance. AOC also has not yet aligned and cascaded its performance expectations with its mission-critical goals at all levels of the organization. 
As our September 2002 report on managing senior executive performance using balanced expectations noted, leading organizations use their performance management systems to achieve results, accelerate change, and facilitate communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. Thus, effective performance management systems can be (1) strategic tools for organizations to drive internal change and achieve external results and (2) ways to translate organizational priorities and goals into direct and specific commitments that senior executives will be expected to achieve during the year. As we have reported in the past, another critical success factor for creating a results-oriented culture is a performance management system that creates a “line of sight” showing how individual employees can contribute to overall organizational goals. In June 2002, AOC implemented a senior executive performance management system—informed by our human capital policies and flexibilities and structured around the Office of Personnel Management’s Executive Core Qualifications—based on six performance requirements: results-driven, leading change, leading people, equal employment opportunity, business acumen, and building coalitions and communications. The senior executive performance management system is based on a balanced measures approach—an approach to performance measurement that balances organizational results with customer, employee, and other perspectives. As a part of this system, AOC instructed its senior executives to incorporate the agency’s strategic goals and responsibilities into their performance requirements and individual commitments for subsequent evaluation by the Architect. The results-driven performance requirement for AOC’s senior executives provides the basis for results-oriented accountability. 
The senior executive performance management system, once aligned with the strategic goals and objectives in AOC’s strategic plan, will serve as an important means for helping AOC to achieve its desired organizational results. In June 2000, AOC implemented a performance management system—Performance Communication and Evaluation System—for its General Schedule (up to GS-15) and Wage Grade employees (non-bargaining-unit employees). According to the Director of HRMD, approximately 875 bargaining unit and trades employees—about 38 percent of AOC’s workforce—were not covered by these systems. As a next step, AOC should align its employee performance management system with its senior executive system to strengthen individual accountability to organizational goals and performance. For example, as we discuss later in the report, although the incentive to focus on safety has been built into the performance appraisal system for employees, it is not addressed in the senior executive performance evaluation system. While AOC supports this concept, AOC’s senior officials stated that they must balance the need to move forward in aligning these systems with the need to provide continuity in the employee performance management system currently in place. The establishment and integration of organizational competencies into performance management systems is another mechanism to create accountability for achieving mission-critical goals. Competencies, which define the skills or supporting behaviors that employees are expected to exhibit as they effectively carry out their work, can provide a fuller picture of an individual’s performance. Competencies can also help form the basis for an organization’s selection, promotion, training, performance management, and succession planning initiatives. 
Our August 2002 report on other countries’ performance management initiatives found that the United Kingdom, Australia, and New Zealand are using competencies in their public sector organizations to provide a fuller assessment of individual performance. GAO has also introduced a competency-based performance management system for analysts and specialists, driven by a best practice review of multidisciplinary professional service organizations in both the private and public sectors. AOC should consider developing core and technical competencies as the basis for its performance management systems. Agencywide core and technical competencies can serve as guidance for employees as they strive to meet organizational expectations. The core competencies should be derived from AOC’s strategic plan and workforce planning efforts and reflect its core values. All employees should be held accountable for achieving core competencies as AOC moves to transform its culture. As we reported in April 2002, AOC has added to its professional workforce by hiring new jurisdictional superintendents, deputy superintendents, budget and accounting officers, a Chief Financial Officer, a Director of Facilities Planning and Programming, and worker safety specialists. As AOC works toward developing a cadre of managerial and professional employees, the development of specific technical competencies can assist the agency in creating and developing a successful leadership and managerial team. AOC has made progress in establishing supervisory, management, and executive competencies. AOC’s Human Resources Management Division (HRMD) has also developed a competency model for its professional and administrative staff. 
HRMD intends to use this competency model to “reinforce its strategic focus … and outline the workforce requirements necessary to develop a highly competent cadre of human resources staff dedicated and committed to providing high-quality, timely and responsive human resources services to managers and employees of the AOC.” As AOC’s efforts move forward, it will identify opportunities to refine and/or develop technical competencies in other managerial and professional areas critical to achieving its mission, including project management, worker safety, financial management, and information technology. AOC can draw from best practices guidance and professional associations and certifications to assist it in developing these technical competencies. Some tools available to identify appropriate competencies are offered by the Joint Financial Management Improvement Program for financial management, and the Project Management Institute for project management. After AOC has established its core and technical competencies, it can use these competencies as the basis for the performance requirements of its performance management systems for both senior executives and employees. The combination of a competency-based performance management system linked to mission-critical goals could provide AOC with a world-class mechanism for holding its workforce accountable for achieving its mission. AOC does not currently collect and analyze workforce data in a comprehensive way that would allow it to determine its workforce needs and to measure its progress in achieving its human capital strategic goals and objectives. The ability to collect and analyze data will greatly enhance AOC’s ability to acquire, develop, and retain talent, while allowing it to effectively plan for the needs of its workforce. High-performing organizations use data to determine key performance objectives and goals that enable them to evaluate the success of their human capital approaches. 
Reliable data also heighten an agency’s ability to manage risk by allowing managers to spotlight areas for attention before crises develop and identify opportunities for enhancing agency results. Collecting and analyzing data are fundamental building blocks for measuring the effectiveness of human capital approaches in support of the mission and goals of an agency. AOC needs to develop a fact-based, comprehensive approach to the collection and analysis of accurate and reliable information across a range of human capital activities. AOC recognizes the need to comprehensively collect and analyze workforce data and has requested about $1 million in its fiscal year 2003 budget for an automated system to assist it in recruitment, classification, workforce management, and succession planning. Appropriate data sources and collection methods are necessary to measure progress in meeting AOC’s human capital goals and objectives. For example, in order for AOC to determine if it is meeting equal employment opportunity (EEO) and diversity requirements—one of its strategic objectives—it must first establish a reliable data gathering method. We found that AOC does not have comprehensive procedures in place to track its progress to assess whether it is achieving its goal of a diverse workforce. Based on reliable data, AOC can then monitor its progress in meeting EEO requirements and develop appropriate intervention strategies if it is not. AOC can benefit from strategically identifying its current and future workforce needs and then creating strategies to fill any gaps. AOC recognizes the need to conduct workforce planning; however, it has not yet initiated this effort. 
According to the principles embodied in our Model of Strategic Human Capital Management, effective organizations incorporate human capital critical success factors, such as integration and alignment, and data-driven human capital decisions as strategies for accomplishing their mission and programmatic goals and results. Strategic workforce planning and analysis is one such approach that can help AOC to effectively align its resources with agency needs. Workforce planning efforts linked to strategic program goals and objectives can help the organization to identify such needs as ensuring a diverse labor force, succession planning for scarce skill sets, and other competencies needed in the workforce. For example, in AOC’s draft strategic plan, human capital is one of the four strategic planning focus areas. The strategic goal associated with the human capital focus area is to attract, develop, and retain diverse, satisfied, and highly motivated employees with the skills, talents, and knowledge necessary to support the agency’s mission. AOC established several strategic objectives to achieve this goal. One of the objectives is to develop a human capital plan designed to acquire, develop, and retain a talented workforce while integrating and aligning human capital approaches, equal opportunity requirements, and organizational performance. 
Specifically, an effective strategic workforce planning effort will entail determining how many employees AOC needs to accomplish its work; assessing the skills and competencies of the employees currently available to do this work (develop an employee skills and competencies inventory); determining gaps in the number, skills, and competencies of the employees needed to do this work; developing a training and recruitment plan for filling the gap, including a focus on the diversity and EEO goals of the organization; creating a succession plan to address workforce gaps created by employees exiting the organization; and evaluating the contribution that the results of these strategic workforce planning efforts make to achieving mission-critical goals. AOC does not currently have workforce planning efforts under way, although it does recognize the need to strategically plan for its workforce and has requested funding for four positions in its fiscal year 2003 budget to create an organization and workforce management team within the Office of the Architect. The purpose of this proposed team is to conduct workforce planning and analysis. The team would work collaboratively with AOC's HRMD, Office of the Chief Financial Officer, and other agency managers to focus on skill mix, resource needs, and succession planning. AOC faces many challenges as it seeks to better serve the Congress. This report lays out a complex agenda for organizational transformation at AOC that includes developing the capacity to lead and execute change and becoming a more results-oriented, matrixed, client-focused, and proactive organization. AOC has indicated that it is committed to the long-term effort necessary to improve its service to the Congress and has already begun to make some improvements in areas such as strategic planning, client outreach, and accountability of senior management for achieving results. 
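The gap-analysis step in the workforce planning sequence above could be sketched as follows. This is a hypothetical illustration rather than AOC's actual process; the competency names and headcounts are invented for the example.

```python
# Illustrative gap analysis: compare the staffing an agency needs in each
# competency against its current skills inventory and report shortfalls.
# Competency names and headcounts are hypothetical.
def competency_gaps(needed, current):
    """Return competencies where current staffing falls short of need."""
    return {
        skill: required - current.get(skill, 0)
        for skill, required in needed.items()
        if current.get(skill, 0) < required
    }

needed = {"project management": 12, "worker safety": 5, "financial management": 8}
current = {"project management": 9, "worker safety": 5, "financial management": 4}
print(competency_gaps(needed, current))
# {'project management': 3, 'financial management': 4}
```

The resulting shortfalls would then feed the training, recruitment, and succession plans described above.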
To make lasting improvements, AOC must continue on this path by demonstrating top leadership commitment to long-term change; involving key congressional and other stakeholders in developing its strategic plan; using its strategic plan as the foundation for aligning activities, core processes, and resources to support mission-related outcomes; establishing a communications strategy to foster change and create shared expectations and build involvement; developing annual goals and a system for measuring performance; and strategically managing its human capital to drive transformation and to support the accomplishment of agency goals. AOC needs to improve its executive decision-making capacity and accountability in order to help (1) elevate attention on management issues and transformation, (2) integrate various key management and transformation efforts, and (3) institutionalize accountability for addressing management issues and leading transformation. One option for addressing the transformation issues that AOC faces is to create a COO or similar position that would be accountable for achieving change at AOC. Making such fundamental changes in AOC's culture will require a long-term, concerted effort. In developing a management improvement plan to address the recommendations in this report, it is essential that AOC work with key congressional and other stakeholders to establish action-oriented implementation goals over the long term, and a time line with milestone dates to track the organization's progress towards achieving those implementation goals. 
In order to adopt the elements of the management and accountability framework—strategic planning, organizational alignment, communications, performance measurement, and strategic human capital management—and build on efforts under way at AOC, we recommend that the Architect of the Capitol improve strategic planning and organizational alignment by involving key congressional and other external stakeholders in AOC's strategic planning efforts and in any organizational changes that may result from these efforts; develop a comprehensive strategy to improve internal and external communications by providing opportunities for routine employee input and feedback, completing the development of congressional protocols by involving clients, improving annual accountability reporting through annual performance planning and reporting, and continuing to regularly measure customer satisfaction AOC-wide; and strengthen performance measurement and strategic human capital management by developing annual goals and measuring performance, creating a "line of sight" by linking AOC's senior executive and employee performance management systems to mission-critical goals, establishing agencywide core and technical competencies and holding employees accountable for these competencies as a part of the performance management system, developing the capacity to collect and analyze workforce data, and identifying current and future workforce needs and developing strategies to fill gaps. In developing a management improvement plan to address the recommendations in this report, we also recommend that the Architect of the Capitol establish action-oriented implementation goals over the long term and a time line with milestone dates to track the organization's progress towards achieving those implementation goals. The Architect should work with key congressional and other stakeholders to develop this plan. 
The Congress should consider ways in which to elevate, integrate, and institutionalize accountability for addressing management issues and leading organizational transformation at AOC. One option would be to create a statutory COO or similar position for AOC to improve its executive decision-making process and accountability. To help ensure that AOC implements its management improvement plan, the Congress should consider requiring AOC to provide periodic status reports on the implementation of its plan, including progress made and milestones not met, and any adjustments to the plan in response to internal or external developments. In his comments on this chapter, the Architect agreed with our recommendations and discussed the current efforts AOC has under way in response, including the development of a plan to implement our recommendations. For example, AOC is currently conducting an agencywide strategic planning effort—with stakeholder involvement—focused on developing mission-critical goals and action plans for mission-critical programs, such as facilities management, project management, and human capital. AOC has also formed a team to develop a comprehensive communications strategy to improve its internal and external communications. To strengthen transparency and accountability, as we recommended, AOC plans to produce an annual performance plan that outlines the specific actions, milestones, and performance measures planned to achieve its goals for that year and an annual accountability report on progress achieved. In the area of strategic human capital management, AOC stated that it would implement our recommendations in a phased approach that will entail firmly establishing its overall strategy before aligning individual performance management programs to that strategy. 
AOC plans to explore the benefits of expanding the use of core and technical competencies agencywide, but wants first to ensure that the use of competencies is appropriate for all occupations and jurisdictions. The Architect’s comments are reprinted in appendix II. The effectiveness with which the Office of the Architect of the Capitol (AOC) can use the management reforms discussed in chapter 2—strategic planning, organizational alignment, performance management, improved internal and external communications, and strategic human capital management—to achieve organizational transformation will depend in part on its ability to focus on management improvement in its day-to-day operations. A key factor in helping an agency to better achieve its mission and program outcomes and identify and manage risks while leveraging opportunities is to implement appropriate internal control. Internal control is a major part of managing an organization. It comprises the plans, methods, and procedures used to meet missions, goals, and objectives and, in doing so, supports performance-based management. Internal control also serves as the first line of defense in safeguarding assets and preventing and detecting errors and fraud. In short, internal control, which is synonymous with management control, helps government program managers achieve desired results through effective stewardship of public resources. Effective internal control also helps in managing change to cope with shifting environments and evolving demands and priorities. As programs change and as agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal control to assure that the control activities being used are effective and updated when necessary. 
Other aspects of AOC’s management infrastructure will also require continued management attention to support its new focus on achieving reforms in mission-critical areas of facilities management, project management, strategic planning, and human capital management. AOC will need to further develop and consistently apply transparent human capital policies and procedures in the areas of leave, awards, and overtime and examine discrepancies in job classification and pay levels across the agency. AOC must continue improving its approach to budgeting and financial management to support effective and efficient program management. Finally, AOC will need to adopt an agencywide approach to information technology (IT) management to position itself to optimize the contribution of IT to agency mission performance. AOC has made a number of important and positive efforts to improve its internal control. For example, in response to our 1994 report that AOC’s personnel management system did not follow many generally accepted principles of modern personnel management, AOC developed and implemented basic personnel policies and procedures that are designed to meet the guidelines set forth by the Architect of the Capitol Human Resources Act and the Congressional Accountability Act of 1995 (CAA). More recently, AOC has been developing standard policies and procedures to address various worker safety hazards. In the area of financial management, AOC has contracted for the development of AOC-wide accounting policies and procedures. For information security, in March 2002, AOC completed a partial risk assessment of its systems environment focusing on systems controlled by its Office of Information Resource Management (OIRM), and used that assessment to develop a security plan to address the identified vulnerabilities. These efforts are helping AOC to construct a sound foundation on which to build a high-performing organization. 
However, Standards for Internal Control in the Federal Government reflects a broader approach to control that addresses, for example, how an agency demonstrates its commitment to competence, how it assures effective and efficient operations, how it communicates the information needed throughout the agency to achieve all its objectives, and how it monitors performance. As AOC moves forward in addressing the management reforms we discuss in this report, it should consider how adopting these standards for internal control could provide a strong foundation for institutionalizing the organizational transformation under way. Internal control should provide reasonable assurance that the objectives of the agency are being achieved in the following categories: effectiveness and efficiency of operations, including the use of the entity’s resources; reliability of financial reporting, including reports on budget execution and financial statements and other reports for internal and external use; and compliance with applicable laws and regulations. A subset of these objectives is the safeguarding of assets. Internal control should be designed to provide reasonable assurance regarding prevention of or prompt detection of unauthorized acquisition, use, or disposition of an agency’s assets. Internal control is not one event, but a series of actions and activities that occur throughout an entity’s operations and on an ongoing basis. Internal control should be recognized as an integral part of each system that management uses to regulate and guide its operations rather than as a separate system within an agency. In this sense, internal control is management control that is built into the entity as a part of its infrastructure to help managers run the entity and achieve their aims on an ongoing basis. People are what make internal control work. The responsibility for good internal control rests with all managers. 
Management sets the objectives, puts the control mechanisms and activities in place, and monitors and evaluates the control. However, all personnel in the organization play important roles in making it happen. Five standards provide a general framework for the minimal level of quality acceptable for internal control in government and provide the basis against which internal control is to be evaluated: Control environment. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. For example, as AOC implements its new strategic planning process, it will need to demonstrate a positive and supportive attitude toward performance-based management by using the plan as the basis for all its programmatic decisions. Risk assessment. Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. For example, as part of AOC’s ongoing strategic planning process, AOC needs to continually assess the risks to achieving its objectives, analyze the risks, and determine what actions should be taken. Control activities. Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. As AOC identifies areas for management improvements, it also needs to define the policies, procedures, techniques, and mechanisms it will use to enforce management’s directives. For example, as AOC works to improve its information systems acquisition management to standardize its acquisition processes, it will need to establish control activities to ensure the processes are applied consistently and correctly for each acquisition project. Information and communications. 
Information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. For example, as AOC develops new performance and financial information to support program management, the information needs to be communicated in a way that meets users' needs and time frames. Monitoring. Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. For example, as AOC develops new performance and financial information, it should ensure that this information is both useful to and used by program managers for purposes of managing program performance. AOC is working towards transforming its culture and instituting regularized personnel policies, procedures, and processes, but there are still areas for improvement. In addition to internal control standards, we have found that there are key practices that can assist agencies in effectively using human capital flexibilities. In broad terms, human capital flexibilities represent the policies and procedures that an agency has the authority to implement in managing its workforce to accomplish its mission and to achieve its goals. These practices include educating managers and employees on the availability and use of flexibilities, streamlining and improving administrative processes, and building transparency and accountability into the system. Comments from a majority of our focus group participants indicate that supervisors are not perceived to have applied awards, overtime, and leave policies consistently; that there was supervisory favoritism; and that grade and pay levels are not consistent across jurisdictions and shifts. 
AOC has been addressing these concerns by developing a comprehensive leave policy and a strategy for communicating this policy, reviewing perceived inequities in job classification, and issuing specific guidelines and procedures for its employee awards program. AOC should continue to develop consistent and transparent human capital policies and procedures and communicate them. AOC has various offices and an employee council engaged in improving employee relations. AOC's senior managers could benefit from comprehensively collecting and analyzing data from these groups to determine the agency's employee relations needs and to measure its progress in achieving its strategic human capital goals and objectives. AOC has recently established its Office of the Ombudsperson, but should realign the office's reporting relationship directly to the Architect to ensure that it adheres to professional standards of independence. Effective organizations establish clear and consistent human capital policies and procedures with clearly stated expectations for both employees and supervisors and ensure that there is accountability for following these procedures. According to internal control standards, such consistent procedures help to create a control environment that encourages employee trust in management. A majority of our focus group participants perceived that supervisors applied awards, overtime, and leave policies inconsistently and that there was supervisory favoritism. For example, some employees stated that supervisors determine on their own when an employee is entitled to sick or annual leave and are not consistent when allowing some employees to take time off from work. Others remarked that there were varying procedures for signing into work and grace periods for lateness were not consistently applied for every employee. 
Several employees commented that access to working overtime was uneven and felt as if only favored employees had the opportunity to work overtime. In addition, several employees believe that favoritism resulted in uneven and unfair distribution of work, and that hiring and promotions frequently are not based on qualifications and experience but on personal connections. AOC has been addressing employees' concerns by developing a comprehensive leave policy and a strategy for communicating this policy. According to AOC's Director of HRMD, AOC has drafted an agencywide comprehensive leave policy—which it expects to issue in November 2002—and is developing a strategy to communicate this policy internally. The issuance of a comprehensive agencywide leave policy is one way in which employees' perceptions of inconsistent treatment by supervisors could be diminished. The policy could also provide a mechanism to hold supervisors and senior managers accountable for its fair and consistent application. Inconsistencies in grade and pay levels across jurisdictions and shifts were another area of concern noted by a majority of the focus group participants. The perception expressed in focus groups was that employees in other AOC jurisdictions in similar positions and in other federal agencies were classified at higher grade levels, even though their job duties were similar. AOC's HRMD Director told us that the division is aware that many AOC employees are concerned about possible misclassification and has received many requests from employees to review job classifications. According to AOC's Employment and Classification Branch Chief, most of the employees who have raised concerns about how their jobs are classified have been upgraded. As a result, AOC is engaged in an ongoing initiative to review certain position descriptions that have not been updated for some time across jurisdictions and to reclassify them, if needed. 
Employee rewards and recognition programs are an important human capital flexibility, intended to provide appropriate motivation and recognition for excellence in job performance and contributions to an agency's goals. In our December 2002 report on the effective use of human capital flexibilities, we reported that agencies must develop clear and transparent guidelines for using flexibilities and then hold managers and supervisors accountable for their fair and effective use, and that agency managers and supervisors must be educated on the existence and use of flexibilities. The Architect's Awards Program, which is AOC's employee rewards and recognition program, is in its second year of operation. However, several implementation issues remain to be resolved. For example, a majority of the focus group participants felt that the program is not applied consistently across the jurisdictions and shifts for all employees. Some focus group participants also mentioned that they were promised awards by their supervisors for their good work on projects but never received them. Other views expressed by some members of the focus groups were that awards might be distributed, but only to certain members of a project team, even though everyone in the unit had worked on the same project, or that supervisors did not always want to fill out the paperwork needed to make an award. In March 2002, AOC issued a policy containing responsibilities and procedures for the administration of the employee rewards and recognition program. However, as borne out by our focus group results, supervisors may be applying this policy inconsistently. AOC can strengthen and gain support for this program by holding managers and supervisors accountable for the fair and effective use of its rewards and recognition program as a useful tool for motivating and rewarding employees. 
In April 2002, we stated that to improve labor-management relations, we would explore the relationships between AOC’s various offices engaged in addressing employee relations. Several AOC offices and one employee group provide employees with assistance in resolving disputes or in dealing with other employment-related issues. These offices not only work to resolve disputes, but are also in a position to alert management to systemic problems and thereby help correct organizationwide issues and develop strategies for preventing and managing conflict. The Equal Employment Opportunity and Conciliation Program Office was created to include an affirmative employment program for employees and applicants and procedures for monitoring progress by AOC in ensuring a diverse workforce. The office serves to promote a nondiscriminatory work environment and works to resolve employment concerns informally. AOC’s Office of the Ombudsperson, formerly called the Employee Advocate, was staffed in 2002 and provides advice and counsel to non- bargaining-unit employees concerning employment policies, employment practices, or other employment-related matters. The AOC Employee Advisory Council (EAC), created in 1995, has renewed its efforts to ensure its role of providing a voice for AOC employees on workplace and safety issues, and is another avenue for non-bargaining-unit employees to bring their concerns to management. The EAC consists of AOC employees, and its purpose is to help address AOC policy, procedures, work products and methods, and other issues that relate to the overall efficiency and safety of the agency, as well as the fair treatment of employees. It is not clear whether there is a coordinated approach to tracking agencywide patterns of employee relations issues among these offices and the EAC. 
If this information were to be collected and analyzed by AOC's senior managers, it could provide a useful source of information to alert management of the status of employee relations. The advantages of an agencywide tracking method would need to be balanced against the need to protect employee confidentiality. As discussed in chapter 2, AOC has established a strategic human capital goal and corresponding objectives related to acquiring, developing, and retaining a talented and diverse workforce. We believe that AOC senior managers could benefit from gathering and analyzing these data, in conjunction with results from the additional employee focus groups that AOC plans to conduct, to help determine how well it is meeting its human capital strategic goal and objectives. In assessing the functions of these employee relations groups, we also assessed the Ombudsperson position at AOC to determine whether it adhered to the standards of practice for ombudsmen established by professional organizations. Ombudsmen provide an informal option to deal pragmatically with conflicts and other organizational climate issues. In April 2001, we reported that ombudsmen are expected to conform to professional standards of practice that revolve around the core principles of independence, neutrality, and confidentiality. In our discussion with the AOC Ombudsperson, she stated that she was familiar with the standards for ombudsmen and that she provided services confidentially and neutrally. According to AOC officials, the AOC Ombudsperson reports to the Administrative Assistant to the Architect of the Capitol or his or her authorized designate, but not directly to the Architect. As we noted in our April 2001 report, the Ombudsman Association's Standards of Practice define independence as functioning independent of line management, with the ombudsman reporting to the highest authority in the organization. 
In addition, the American Bar Association's ombudsman standards for independence state that the ombudsman's office must be and appear to be free from interference in order to be credible and effective. If the Ombudsperson were to report directly to the Architect and not through another senior manager, the core principle of independence would be strengthened. AOC faces significant challenges in building sound budget and financial management functions into the culture of the organization. Accurate and reliable budget formulation and execution and reliable financial accounting and reporting are important basic functions of financial control and accountability and provide a basis for supporting good program management. In the past, AOC has lacked reliable budgets for both projects and operations and has lacked internal policies and procedures to effectively monitor budget execution. In addition, AOC has lacked accounting policies and procedures needed to properly account for and report financial information, especially in accounting for, controlling, and reporting assets, including inventory. Moreover, AOC has not prepared auditable financial statements. A Chief Financial Officer (CFO) position was established at AOC, which the Architect filled in January 2002, in response to direction from the Subcommittee on Legislative Branch, Senate Committee on Appropriations, that AOC begin essential financial management reforms. The new CFO is a member of the Architect's Senior Policy Committee and, in carrying out his role in establishing a foundation of financial control and accountability at AOC, he is responsible for the activities of the Budget Office, the Accounting Office, and the Financial Systems Office. Among his first actions, the new CFO assembled a financial management team with the experience needed to establish a strong foundation of financial control and accountability by filling key budget and accounting officer positions. 
As discussed in our executive guide on best practices in financial management, a solid foundation of control and accountability requires a system of checks and balances that provides reasonable assurance that an entity’s transactions are appropriately recorded and reported, its assets protected, its policies followed, and its resources used economically and efficiently for the purposes intended. The CFO, who has endorsed the executive guide as a road map for making improvements to financial management at AOC, has recognized the need for this foundation of financial control and accountability as well as the challenges his organization faces in establishing such checks and balances AOC-wide. Those challenges include developing and implementing effective budget formulation and execution policies and procedures that govern capital projects and operating activities AOC-wide, developing and implementing formal financial accounting and reporting policies and procedures and related operating procedures, developing and implementing internal controls and monitoring the reliability of financial information and safeguarding of assets, implementing and operating the new financial management system, and preparing auditable comprehensive entitywide financial statements. In response to these challenges, the CFO has set a goal for AOC to prepare auditable AOC-wide financial statements for the first time for fiscal year 2003 and has made measurable progress in this and other areas in establishing a sound foundation of control and accountability at AOC. 
For example, some of the financial management team's achievements to date include deploying phase two of the new accounting system AOC-wide, including continuing system support and periodic training; revising budget formulation guidance to include requirements for specific minimum detail needed to justify capital projects requested and support construction cost estimates; conducting an AOC-wide budget execution review to evaluate the effectiveness of AOC's budget execution; conducting an AOC-wide inventory to establish a basis for closing accounting records for fiscal year 2002 and establishing a beginning balance for fiscal year 2003; developing a basis for valuing and classifying certain AOC assets, including property and equipment; and contracting for the development of AOC-wide accounting policies and procedures needed to establish internal control and prepare first-time financial statements. A significant factor in the achievements to date is the experience the new financial team brings to AOC in carrying out the fundamentals of sound financial management and the fact that the initiatives fall under the direct control of the CFO. However, much work remains to be done on an AOC-wide basis. Going forward, the CFO faces challenges, including having program managers routinely provide critical project justification and cost information and obligation plans; establishing AOC-wide accounting and control procedures, such as controls over the receipt and use of inventory; and finding a way to interface financial information with the AOC Project Information Center system. Implementing these and other financial-control and accountability-related initiatives will require the buy-in and support of key non-financial managers and staff. As the finance team seeks to build a foundation of financial accounting and control into the organization's culture, top management must demonstrate a commitment to making and supporting the needed changes throughout the organization. 
As noted in our executive guide, leading organizations identified leadership as the most important factor in successfully making cultural changes. IT can be a valuable tool in achieving an organization’s mission objectives. Our research of leading private and public sector organizations shows that these organizations’ executives have embraced the central role of IT to mission performance. More specifically, these executives no longer regard IT as a separate support function, but rather view and treat it as an integral and enabling part of business operations. As such, they have adopted a corporate, or agencywide, approach to managing IT under the leadership and control of a senior executive, who operates as a full partner with the organization’s leadership team in charting the strategic direction and making informed IT investment decisions. Complementing a centralized leadership of IT management, leading organizations have also implemented certain institutional or agencywide management controls aimed at leveraging the vast potential of technology in achieving mission outcomes. These management controls include using a portfolio-based approach to IT investment decision making, using an enterprise architecture, or blueprint, to guide and constrain IT investments, following disciplined IT system acquisition and development management processes, and proactively managing the security of IT assets. AOC currently relies heavily on IT in achieving its mission objectives. As an example, AOC uses the Computer Aided Facilities Management system to request and fulfill work orders for maintenance of the Capitol and the surrounding grounds. In addition, it uses the Records Management system to archive architectural drawings pertaining to the U.S. Capitol, Library of Congress, Botanic Garden, and other buildings. According to AOC’s Chief Administrative Officer, the agency’s reliance on IT will increase in the future. 
Despite the importance and prevalence of IT at AOC, the agency’s current approach to managing IT is not consistent with leading practices, as is described in the following five sections. Until AOC embraces the central role of IT to mission performance and implements an agencywide and disciplined approach to IT management, it is not positioned to optimize the contribution of IT to agency mission performance. Our research of private and public sector organizations that effectively manage IT shows that these organizations have adopted an agencywide approach to managing IT under the leadership of a chief information officer or comparable senior executive, who has the responsibility and authority for managing IT across the agency. According to the research, these executives function as members of the leadership team and are instrumental in developing a shared vision for the role of IT in achieving major improvements in business processes and operations to effectively optimize mission performance. In this capacity, leading organizations also provide these individuals with the authority they need to carry out their diverse responsibilities by providing budget control and management support for IT programs and initiatives. Currently, AOC does not have a senior-level executive who is responsible and accountable for IT management and spending across the agency, and AOC does not centrally oversee IT, according to AOC’s OIRM Director. Rather, budget and acquisition authority is vested in each AOC organizational component that is acquiring a given IT asset. With such a decentralized approach to IT management and spending, AOC does not have an individual focused on how IT can best support the collective needs of the agency, and thus is not positioned to effectively leverage IT as an agencywide resource. If managed wisely, IT investments can vastly improve mission performance. If not, IT projects can be risky, costly, and unproductive investments. 
Our best practices guide, based on research of private and public sector organizations that effectively manage their IT investments, outlines a corporate, portfolio-based approach to IT investment decision making that includes processes, practices, and activities for continually and consistently selecting, controlling, and evaluating competing IT investment options in a way that promotes the greatest value to the strategic interest of the organization. The first major step to building a sound IT investment management process is to be able to measure the progress of existing IT projects to identify variances in cost, schedule, and performance expectations, and take corrective action, if appropriate, and to establish basic capabilities for selecting new IT proposals. To do this, the organization needs to establish and implement processes and practices for (1) operating an IT investment board responsible for selecting, controlling, and evaluating IT investments and that includes both senior IT and business representatives, (2) providing effective oversight for ongoing IT projects throughout all phases of their life cycle, (3) identifying, tracking, and managing IT resources, (4) ensuring that each IT project supports the organization’s business needs, and (5) establishing criteria for selecting new IT proposals. The second major step toward effective IT investment management requires that an organization continually assess proposed and ongoing projects as an integrated and competing set of investment options. That is, the organization should consider each new investment part of an integrated portfolio of investments that collectively contribute to mission goals and objectives. 
To do this, the organization needs to establish and implement processes and practices for (1) developing and implementing criteria to select investments that will best support the organization’s strategic goals, objectives, and mission, (2) using these criteria to consistently analyze and prioritize all IT investments, (3) ensuring that the optimal IT investment portfolio with manageable risks and returns is selected and funded, and (4) overseeing each IT investment within the portfolio to ensure that it achieves its cost, benefit, schedule, and risk expectations. AOC has not satisfied the components of either of these two major steps, and as a result does not currently have an agencywide, portfolio-based approach to IT investment management. For example, AOC has not developed the processes and established the key management structures, such as an investment review board, needed to manage and oversee IT investments. However, according to the OIRM Director, he has several activities under way to facilitate the agency’s movement to such an approach, should AOC choose to do so. These include developing an IT capital planning and investment guide that is to define key elements of a portfolio-based approach to IT investment management and acquiring an automated tool to facilitate its implementation, introducing new IT budget categories and collecting corresponding fiscal year 2004 budget information to track and control IT investments, and reassessing the role of its Information Technology Standards and Architecture Committee, including how and when the committee reviews projects, what projects are reviewed, and what information is provided to the committee. Because the OIRM Director could not provide us with drafts or more detailed information on these activities, characterizing them as under development, we could not determine the extent to which these activities address the basic tenets of effective IT management. 
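For illustration only, the criteria-based prioritization described above can be sketched as a simple weighted-scoring exercise. The criteria, weights, and candidate projects below are hypothetical values invented for this sketch; they are not AOC data, and neither our IT investment guide nor AOC prescribes these particular numbers:

```python
# Illustrative sketch of criteria-based IT investment scoring.
# All criteria, weights, and project names below are hypothetical.

def score_investment(ratings, weights):
    """Weighted score for one proposal; ratings and weights keyed by criterion."""
    return sum(ratings[criterion] * weight for criterion, weight in weights.items())

def prioritize(proposals, weights):
    """Rank proposals (name -> ratings dict) by descending weighted score."""
    scored = {name: score_investment(ratings, weights) for name, ratings in proposals.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical criteria: mission alignment, expected return, low risk (higher = less risky).
    weights = {"mission_alignment": 0.5, "expected_return": 0.3, "low_risk": 0.2}
    proposals = {
        "facilities_work_order_upgrade": {"mission_alignment": 9, "expected_return": 6, "low_risk": 7},
        "records_archive_expansion": {"mission_alignment": 7, "expected_return": 5, "low_risk": 9},
        "experimental_pilot": {"mission_alignment": 4, "expected_return": 8, "low_risk": 3},
    }
    for name, score in prioritize(proposals, weights):
        print(f"{name}: {score:.1f}")
```

In practice, an investment review board would define and approve the criteria and weights, apply them consistently to every proposal, and revisit the resulting portfolio as ongoing projects report cost, schedule, and performance data.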
However, these activities are currently limited because they are confined to OIRM, which is not positioned to implement effective IT investment management on its own. Achieving an agencywide, portfolio-based approach to IT investment management needs the full support and participation of AOC’s senior leadership. Until this occurs, AOC will continue to be limited in its ability to effectively leverage IT to achieve mission goals and objectives. Our experience with federal agencies has shown that attempting to modernize IT environments without an enterprise architecture to guide and constrain investments often results in systems that are duplicative, not well integrated, unnecessarily costly to maintain and interface, and ineffective in supporting mission goals. Managed properly, architectures can clarify and help optimize the interdependencies and interrelationships among related corporate operations and the underlying IT infrastructure and applications that support them. The development, implementation, and maintenance of architectures are recognized hallmarks of successful public and private organizations that effectively leveraged IT in meeting their mission goals. An enterprise architecture—as defined in federal guidance, and as practiced by leading public and private sector organizations—acts as a blueprint and defines, both in logical terms (including business functions and applications, work locations, information needs and users, and the interrelationships among these variables) and in technical terms (including IT hardware, software, data communications, and security) how the organization operates today, how it intends to operate tomorrow, and a road map for transitioning between the two states. This guidance also defines a set of recognized key practices (management structures and processes) for developing and implementing an enterprise architecture. 
Among other things, these practices include the following: The head of the enterprise should recognize that the enterprise architecture is a corporate asset for systematically managing institutional change by supporting and sponsoring the architecture effort and giving it a clear mandate in the form of an enterprise policy statement. Such support is crucial to gaining the commitment of all organizational components of the enterprise, all of which should participate in developing and implementing the enterprise architecture. The enterprise architecture effort should be directed and overseen by an executive body, empowered by the head of the enterprise, with members who represent all stakeholder organizations and have the authority to commit resources and to make and enforce decisions for their respective organizations. An individual who serves as the chief enterprise architect, and reports to either a chief information officer or comparable senior executive, should lead the enterprise architecture effort and manage it as a formal program. A formal program entails creating a program office, committing core staff, implementing a program management plan that details a work breakdown structure and schedule, allocating resources and tools, performing basic program management functions (e.g., risk management, change control, quality assurance, and configuration management), and tracking and reporting progress against measurable goals. The enterprise architecture should conform to a specified framework. AOC does not have an enterprise architecture or the management foundation needed to successfully develop one. Thus far, AOC’s architecture activities are confined to OIRM, and they consist of meeting with peer agencies, such as the U.S. Capitol Police, to learn about their architecture development experiences, and selecting a framework to use in developing the architecture. OIRM officials also told us that they are finalizing an approach for developing the architecture. 
AOC has much to do and accomplish before it will have either the means for developing an architecture or the architecture itself. Central to what remains to be done is AOC’s executive leadership providing a clear mandate for the architecture and for managing its development consistent with recognized best practices and federal guidance. To do less risks producing an incomplete architecture that is not used to effectively guide and direct business and technology change to optimize agencywide performance. Our experience with federal agencies has shown that the failure to implement rigorous and disciplined acquisition and development processes can lead to systems that do not perform as intended, are delivered late, and cost more than planned. The use of disciplined processes and controls based on well-defined and rigorously enforced policies, practices, and procedures for system acquisition and development can reduce that risk. Such processes for managing system acquisition/development are defined in various published models and guides, such as Carnegie Mellon University’s Software Engineering Institute’s Capability Maturity Model℠. Examples of key processes from this model include the following: Requirements management describes processes for establishing and maintaining a common and unambiguous definition of requirements among the acquisition team, the system users, and the software development contractor. Requirements management includes documenting policies and procedures for managing requirements, documenting and validating requirements, and establishing baselines and controlling changes to the requirements. Test management describes processes for ensuring that the software/system performs according to the requirements and that it fulfills its intended use when placed in its intended environment. Test management includes developing a test plan, executing the plan, documenting and reporting test results, and analyzing test results and taking corrective actions. 
Configuration management describes processes for establishing and maintaining the integrity of work products throughout the life cycle process. Configuration management includes developing a configuration management plan; identifying work products to be maintained and controlled; establishing a repository or configuration management system for tracking work products; and approving, tracking, and controlling changes to the products. Quality assurance describes processes for providing independent verification of the requirements and processes for developing and producing the software/system. Quality assurance includes developing a quality assurance plan, determining applicable processes and product standards to be followed, and conducting reviews to ensure that the product and process standards are followed. Risk management describes processes for identifying potential problems before they occur and adjusting the acquisition to mitigate the chances of the problems occurring. Risk management includes developing a project risk management plan; identifying and prioritizing potential problems; implementing risk mitigation strategies, as required; and tracking and reporting progress against the plans. Contract tracking and oversight describes processes for ensuring that the contractor performs according to the terms of the contract. Contract tracking and oversight includes developing a plan for tracking contractor activities, measuring contractor performance and conducting periodic reviews, and conducting internal reviews of tracking and oversight activities. OIRM has defined some of these key processes, but it has not defined others, and some that are defined are not complete. Moreover, the processes that have been defined have not been adopted and implemented agencywide. In 1995, OIRM developed its Information Systems Life Cycle Directive that defines policies and procedures for software development and acquisition. 
This directive fully addresses the tenets of two key process areas—requirements management and test management—and partly addresses the tenets of two other areas—quality assurance and configuration management. For example, for quality assurance, the directive includes the need to conduct quality assurance reviews to ensure that product and process standards are followed; however, it does not address the need to first identify the process and product standards to be followed or the development of a quality assurance plan. Similarly, for configuration management, the directive includes requirements for developing and executing a plan; identifying work products to be maintained and controlled; and tracking, controlling, and releasing work products and items. However, it does not include requirements for a repository or for a configuration management system that supports tracking and controlling changes to work products. Finally, the directive does not address two key process areas—risk management and contract tracking and oversight. The OIRM Director told us that OIRM plans to improve its directive and acquire tools to facilitate its implementation. These efforts, if properly implemented and adopted, could allow AOC to institutionalize disciplined processes for system development and acquisition management. Until AOC implements agencywide, disciplined processes for managing the development and acquisition of IT systems, it risks investing in systems that do not perform as intended, are delivered late, and cost more than planned. Effective information security management is critical to AOC’s ability to ensure the reliability, availability, and confidentiality of its information assets, and thus its ability to perform its mission. If effective information security practices are not in place, AOC’s data and systems are at risk of inadvertent or deliberate misuse, fraud, improper disclosure, or destruction—possibly without detection. 
Our research of public and private sector organizations recognized as having strong information security programs shows that their programs include (1) establishing a central focal point with appropriate resources, (2) continually assessing business risks, (3) implementing and maintaining policies and controls, (4) promoting awareness, and (5) monitoring and evaluating policy and control effectiveness. AOC has taken important steps to establish an effective information security program, but much remains to be done. In May 2001, the OIRM Director established and filled an IT security officer position. The officer’s responsibilities include planning and coordinating security risk assessments, developing IT security policies, conducting security training, and evaluating the effectiveness of IT security policies and controls. In March 2002, the Security Officer completed a partial risk assessment of AOC’s systems environment focusing on systems that are controlled by OIRM, and used that assessment to develop a security plan to address the identified vulnerabilities. The plan contains steps to develop user access and network administrator account policies, as well as a security awareness and training program. However, the Security Officer has since resigned and the position is vacant. Moreover, because the Security Officer was the only staff member dedicated to these tasks, the OIRM Director stated that AOC has yet to begin addressing the tasks outlined in the security plan. Currently, AOC is attempting to hire a new security officer and plans to hire an information systems security specialist. Until AOC addresses the elements of an effective security program, it will not be in a position to effectively safeguard its data and information assets. 
The effectiveness with which AOC can use the elements of the management and accountability framework—strategic planning, organizational alignment, improved internal and external communications, performance management, and strategic human capital management—to achieve organizational transformation will depend in part on its ability to focus on management improvement in its day-to-day operations. A key factor in helping AOC to better achieve its mission and program outcomes and identify and manage risks while leveraging opportunities is to implement and strengthen appropriate internal controls. As it transforms the agency, AOC will need to ensure that it adopts management controls by (1) further developing and consistently applying transparent human capital policies and procedures, (2) continuing to improve its approach to budgeting and financial management to support effective and efficient program management, and (3) adopting an agencywide approach to IT management to position itself to optimize the contribution of IT to agency mission performance. In order to continue to develop a management infrastructure and strengthen appropriate management controls, we recommend that the Architect of the Capitol take the following actions: Strengthen AOC’s human capital policies, procedures, and processes by continuing to develop and implement agencywide human capital policies and procedures, and holding management and employees accountable for following these policies and procedures; assessing ways in which AOC management could better gather and analyze data from the various employee relations offices and EAC while maintaining employee confidentiality; and establishing a direct reporting relationship between the Ombudsperson and the Architect, consistent with professional standards. 
Continue to improve AOC’s approach to financial management by developing strategies to institutionalize financial management practices that will support budgeting, financial, and program management at AOC. Such strategies could include developing performance goals and measures and associated roles aimed at increasing the accountability of non-financial managers and staff, such as jurisdictional superintendents, program managers, and other AOC staff—whose support is critical to the success of AOC’s financial management initiatives—and ensuring that these staff receive the training needed to effectively carry out their roles and responsibilities. Adopt an agencywide approach to IT management by doing the following: Establishing a chief information officer, or comparable senior executive, with the responsibility, authority, and adequate resources for managing IT across the agency, who is a full participant in AOC’s senior decision-making processes, and has clearly defined roles, responsibilities, and accountabilities. Developing and implementing IT investment management processes with the full support and participation of AOC’s senior leadership. Specifically, the Architect must develop a plan for developing and implementing the investment management processes, as appropriate, that are outlined in our IT investment guide. At a minimum, the plan should specify measurable tasks, goals, time frames, and resources required to develop and implement the processes. The Architect should focus first on the management processes associated with controlling existing IT projects and establishing the management structures to effectively implement an IT management process. Developing, implementing, and maintaining an enterprise architecture to guide and constrain IT projects throughout AOC. The Architect should implement the practices, as appropriate, as outlined in the Chief Information Officer Council’s architecture management guide. 
As a first step, the Architect should establish the management structure for developing, implementing, and maintaining an enterprise architecture by implementing the following actions: developing an agencywide policy statement providing a clear mandate for developing, implementing, and maintaining the architecture; establishing an executive body composed of stakeholders from AOC mission-critical program offices to guide the strategy for developing the enterprise architecture and ensure agency support and resources for it; and designating an individual who serves as a chief enterprise architect to develop policy and lead the development of the enterprise architecture, and manage it as a formal program. Requiring disciplined and rigorous processes for managing the development and acquisition of IT systems, and implementing the processes throughout AOC. Specifically, these processes should include the following: quality assurance processes, including developing a quality assurance plan and identifying applicable process and product standards that will be used in developing and assessing project processes and products; configuration management processes, including establishing a repository or configuration management system to maintain and control configuration management items; risk management processes, including developing a project risk management plan, identifying and prioritizing potential problems, implementing risk mitigation strategies, as required, and tracking and reporting progress against the plans; and contract tracking and oversight processes, including developing a plan for tracking contractor activities, measuring contractor performance and conducting periodic reviews, and conducting internal reviews of tracking and oversight activities. Establishing and implementing an information security program. 
Specifically, the Architect should establish an information security program by taking the following steps: designate a security officer and provide him or her with the authority and resources to implement an agencywide security program; develop and implement policy and guidance to perform risk assessments; use the results of the risk assessments to develop and implement appropriate controls; develop policies for security training and awareness and provide the associated training; and monitor and evaluate policy and control effectiveness. In his comments on this chapter, the Architect generally agreed with our recommendations and discussed the relevant efforts AOC has under way in the areas of human capital policies, financial management, and IT management. For example, the Architect stated that AOC has formed a team including representatives from all key offices and employee groups to explore the development of a confidential process to track employee relations issues agencywide. In the area of financial management, the Architect underscored a number of initiatives under way, including the piloting of financial management training for line managers and staff, and indicated that AOC’s implementation plan will include a strategy for incorporating financial management best practices throughout AOC. Finally, the Architect stated that IT is a key enabler of AOC’s strategy for organizational improvement and that OIRM will work closely with the Senior Policy Committee to establish an agencywide approach to IT management. The Architect cautioned that fully implementing the information technology framework that we laid out will take considerable time, but that AOC’s implementation plan will include a more specific approach to developing and implementing this framework. The Architect’s comments are reprinted in appendix II. 
In the preceding chapters, we discussed the need for the Office of the Architect of the Capitol (AOC) to put in place the management and accountability framework needed for organizational transformation—leadership, strategic planning, organizational alignment, communications, and performance measurement—and the management infrastructure of financial, information technology, and other controls that support the transformation. The management and accountability framework needed for transformation and the management infrastructure of financial, information technology, and other controls cut across AOC’s programs and influence its performance in all areas critical to achieving its mission. Improvements in these areas can also strengthen the performance of program areas of long-standing concern to AOC’s employees and congressional stakeholders—worker safety, project management, and recycling. In recent years, AOC has had among the highest worker injury rates in the federal government. Furthermore, AOC’s annual appropriations for capital projects have increased substantially in recent years, placing AOC at greater risk of project delays and cost overruns. Finally, high rates of contamination of recyclable materials continue to detract from accomplishing the environmental goals of AOC’s recycling programs. AOC has made recent progress in all these areas. However, significant opportunities exist to build on this progress and achieve lasting performance improvements. For example, the Architect has declared that safety is the agency’s number one priority and established a target for reducing injuries. Nonetheless, relating safety to other pressing priorities and developing a clear strategy for making safe work practices the cultural norm are still works in progress at AOC. 
Similarly, AOC has adopted industry best practices for project management, but implementation is uneven and hampered by weaknesses in leadership, performance and financial management, priority setting, communication, and strategic management of human capital. Finally, although AOC has recently made improvements to the House and Senate recycling programs, contamination of recycled materials remains high, and the goals for the overall program remain unclear. Worker safety at AOC has been the subject of congressional scrutiny for the past several years because AOC had higher injury and illness rates than many other federal agencies and substantially higher rates than the federal government as a whole, as seen in table 1. The Architect responded to these concerns by declaring safety the agency’s top priority and undertaking a number of initiatives that correspond to the components of an effective safety program, as identified by safety experts and federal safety agencies. These core components include management commitment; employee involvement; identification of problem jobs; analysis and development of controls for problem jobs; education and training; and medical management. Key among AOC’s activities is the planned development and implementation, by 2005, of about 43 specialized safety programs on topics ranging from handling asbestos to working safely in confined spaces. These programs are designed to help AOC comply with federal safety and health regulations. Fifteen of these specialized programs have been approved; none have yet been fully implemented across all of AOC’s jurisdictions. AOC’s efforts are commendable, and AOC employees who participated in our focus groups noted positive changes in worker safety. As a next step, AOC needs to integrate the safety goals in its draft Safety Master Plan with AOC’s strategic goals in its overall strategic plan, and to develop performance measures to assess its progress in achieving these goals. 
The Director of AOC’s Safety, Fire, and Environmental Programs, who oversees AOC’s workplace safety program, has acknowledged that the two strategic planning efforts must be further integrated. Also, AOC has established mechanisms to foster employee involvement, such as encouraging employees to report job-related injuries and hazards. Building on these efforts, AOC needs to establish a formal mechanism for reporting to ensure complete reporting of hazards. AOC’s approach to identifying, analyzing, and developing controls for problem jobs is inconsistent and does not ensure that all workplace hazards are being addressed. Moreover, AOC has provided a significant amount of training to its employees, but the training activities could be better linked to AOC’s safety goal of changing its workplace culture to increase staff awareness, commitment, and involvement in safety and health. Finally, AOC’s medical management activities could be better coordinated with the worker safety program, so that information about workplace injuries and illnesses could be more widely shared and used to better target prevention efforts. Safety experts and federal safety agencies agree that, to build an effective safety program, organizations must take a strategic approach to managing workplace safety and health. This objective is generally accomplished by establishing a safety program built upon a set of six core program components, which, together, help an organization lay out what it is trying to achieve, assess progress, and ensure that safety policies and procedures are appropriate and effective. The six core components of an effective safety and health program are (1) management commitment, (2) employee involvement, (3) identification of problem jobs, (4) analysis and development of controls for problem jobs, (5) education and training, and (6) medical management. Table 2 lists these components, along with a description of the key activities upon which each component is built. 
Our April 2002 statement assessed AOC’s efforts in implementing the first four components. Since that time, we have assessed AOC’s activities in the remaining two areas: education and training, and medical management. We also met with DuPont Safety Resources and the Department of Defense to discuss best practices in worker safety. AOC has undertaken a number of actions that demonstrate its commitment to worker safety. As a next step, it needs to develop safety program goals that are integrated with broader agency goals. In an effort to highlight the importance of worker safety, the Architect proclaimed safety to be the agency’s top priority in fiscal year 2001, and established the goal of reducing total injuries and illnesses by 10 percent each year through fiscal year 2005. As we reported in April 2002, AOC further demonstrated its commitment by devoting additional resources to safety, such as increasing staffing levels in its central safety office and assigning safety staff to seven of its eight jurisdictions. Additionally, AOC has consulted with the Department of Labor’s Occupational Safety and Health Administration (OSHA) on how to record illnesses and injuries and with the congressional Office of Compliance on how to comply with OSHA requirements. AOC has also contracted with DuPont Safety Resources to provide a baseline assessment of AOC’s safety activities and to provide AOC senior executives and safety specialists with best practices briefings on adopting a safety culture, including key components of an effective safety and health program. AOC has also contracted with the Department of Health and Human Services’ Public Health Service (PHS), which is developing AOC’s 43 specialized safety programs, providing safety training, and identifying hazards associated with AOC job tasks. 
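As an illustration of what the 10 percent annual reduction goal implies if it is met every year, the cumulative effect can be computed as follows. The compounding assumption and the fiscal year 2001 baseline are ours for illustration; the report states only the per-year target.

```python
# Cumulative effect of a 10 percent annual reduction in injuries and
# illnesses, compounded from a fiscal year 2001 baseline (illustrative
# assumption; the text states only the 10 percent per-year target).
baseline = 100.0
level = baseline
for year in range(2002, 2006):  # fiscal years 2002 through 2005
    level *= 0.90  # each year ends at 90 percent of the prior year
cumulative_reduction = (baseline - level) / baseline * 100
print(f"Cumulative reduction by fiscal year 2005: {cumulative_reduction:.1f}%")
```

On this assumption, four successive 10 percent reductions compound to roughly a one-third drop from the baseline.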
AOC is developing a 5-year Safety Master Plan that, when completed, is to be used as a road map to identify its safety goals and philosophy, establish priorities, assign responsibilities, and identify project and funding needs. AOC employees who participated in our focus groups also noted positive changes in communicating worker safety. Many participants felt that AOC takes safety-related incidents seriously and that there has been an increased emphasis on safety. To achieve a safer workplace, AOC needs to integrate the safety goals in its draft Safety Master Plan with the strategic goals in its draft Strategic Plan. The Director of AOC’s Safety Program has acknowledged that as a next step, the two strategic planning efforts must be integrated. Private sector best practices indicate that an organization needs safety goals that are consistent and integrated with other organizational goals. Safety goals should be well integrated into the organizational culture so that it becomes second nature for employees to perform all tasks safely, and so that there is little tolerance for unsafe work practices. AOC has not yet developed performance measures to assess progress in achieving these safety goals. AOC officials have indicated that the development and implementation of the 43 specialized safety programs is their primary focus, and they plan to implement all of these programs by fiscal year 2005. Although 15 of these programs have been written and approved by the Architect, the standard operating procedures that are needed to fully implement these programs in the jurisdictions have not been approved. AOC’s draft Safety Master Plan currently provides information about the development and expected approval dates for the remaining programs, but does not provide other milestones or performance measures for the full implementation of these programs in the jurisdictions, including the anticipated time frames for developing and approving the standard operating procedures. 
Identifying interim milestones would help AOC assess its progress in achieving its fiscal year 2005 completion target and underscore for AOC employees and external stakeholders the importance AOC places on worker safety. The only performance measure that AOC has developed for assessing the worker safety program is a 10 percent reduction in injuries. This measure was based on a general sense of how much of a reduction would be achievable overall and how high the goal should be to motivate improvements. As we reported in April 2002, AOC is measuring its progress in achieving this reduction using the number of claims for compensation for workplace injuries and illnesses under the Federal Workers’ Compensation Program. However, this measure provides an incomplete picture of the overall level of safety because the number of claims in any organization can be affected by factors not directly related to safety, such as poor morale among employees or a lack of knowledge about how or when to file a claim. Also, the use of these data as a measure of safety program performance is not directly comparable to key measures used in the private sector, which uses “OSHA recordables” to assess worker safety. We reported in April 2002 that AOC had begun to collect these data on a limited basis. Since that time, AOC has begun to develop a more standardized approach to collect and track OSHA recordables. AOC is also trying to formalize partnerships with the Office of Compliance and OSHA to provide technical assistance that could facilitate standardizing these data. Moreover, AOC employees at all levels need to be held accountable for achieving the safety goals. For example, the first goal in AOC’s draft Safety Master Plan—providing a safe and healthful environment through the identification and elimination of hazards—has as an objective to ensure that all facilities, processes, and equipment include safety considerations in their design, development, and implementation to eliminate hazards. 
Yet, at this stage, AOC has not fully linked employee performance with the achievement of these safety goals and objectives. For example, focus group participants repeatedly observed that time constraints to complete jobs and supervisory pressure adversely affect attention to safety. Although the incentive to focus on safety has been built into the performance appraisal system for employees, it is not addressed for senior managers and does not apply to employees who do not participate in AOC’s performance appraisal system. We also reported in April 2002 that AOC needed to clearly define roles, responsibilities, and authorities of safety personnel at the central and jurisdictional levels. According to the central and jurisdictional safety staffs, AOC has now clearly defined their respective roles and responsibilities. However, it is still unclear how they are being held accountable for achieving the safety program’s goals. The central safety office staff are responsible for the overall management of the 43 specialized programs, and they rely on the jurisdictional safety specialists to develop the specific procedures necessary for AOC to fully implement these programs. The jurisdictional safety specialists report to jurisdictional superintendents and not to the Director of Safety, Fire, and Environmental Programs, and they have other safety responsibilities and tasks, such as training and investigating accidents and injuries. Because jurisdictional safety specialists must focus on safety priorities as established by superintendents and line managers in their jurisdictions, they have limited time to spend on developing procedures to implement the specialized safety programs. AOC has a number of mechanisms to obtain employee involvement in its safety program and encourages employees to report injuries and hazards. AOC now needs to establish a formal reporting mechanism in order to provide assurance that these safety data are complete. 
AOC has established employee safety committees at both the jurisdictional and senior management levels. The jurisdictional committees, referred to as Jurisdictional Occupational Safety and Health committees, include frontline employees and jurisdictional specialists who perform a variety of activities ranging from training to accident investigations. The senior management committee, referred to as the Safety, Health, and Environmental Council, or SHEC, consists of superintendents and AOC safety staff. This committee meets quarterly and addresses various topics on an ad hoc basis. As we reported in April 2002, establishing these committees is a positive step toward achieving employee involvement. In its baseline assessment of AOC, DuPont Safety Resources cited these mechanisms as a strength of the agency’s worker safety program. Employee involvement also includes establishing procedures for employees to report job-related illnesses, injuries, incidents, and hazards and encouraging them to do so. In April 2002, the Architect issued a memorandum encouraging employees to report all injuries and illnesses, regardless of severity. Many of the focus group participants indicated that they generally felt comfortable reporting injuries, incidents, and hazards. However, there were participants in some focus groups who indicated that they were hesitant to report hazards because they were not sure how seriously their supervisors would treat these reports. Many participants commented that they did not feel protected from safety and health hazards. For example, some participants said that they were not adequately prepared to deal with hazardous substances. In that respect, policies and procedures for reporting accidents should also apply to hazards and other conditions that may lead to accidents. 
The recent implementation of a performance appraisal system that holds the frontline employees it covers accountable for observing and promptly reporting safety issues to supervisors is a very encouraging step. If effectively implemented, this appraisal system will also help ensure that employees will be encouraged to report hazards, that supervisors will take those reports seriously, and that senior managers will be accountable for acting on these reports. AOC has a number of procedures in place to identify the underlying hazards that make jobs dangerous and to develop remedies for those hazards. However, these efforts are inconsistent and do not ensure that corrective actions are taken to eliminate hazards and prevent future injuries and illnesses. A comprehensive, consistently implemented system is critical to providing AOC with the assurance that its efforts are risk based—targeted directly toward identifying and abating those factors leading to the most severe and frequent incidents, accidents, and hazards. We reported in April 2002 that AOC has provided some assurance that accidents are being investigated and hazards addressed by placing safety specialists in several jurisdictions. Yet, there is no consistent AOC-wide system for conducting investigations and follow-up to ensure that workers across the jurisdictions are receiving the same level of protection. In the absence of an AOC-wide system, we found that some of the jurisdictions have (1) developed their own specific procedures for conducting investigations, (2) involved different staff members in the investigations, and (3) developed their own forms to gather accident or incident data. However, there were a few focus group participants who questioned whether sufficient controls existed to ensure that supervisors acted on all reports, particularly those that are not documented. 
We found that only two of AOC’s eight jurisdictions have procedures for tracking hazard reports and the follow-up actions taken to address those reports, even when there has not been an accident. In the absence of consistent AOC-wide processes for conducting investigations, we found generally ad hoc or infrequent efforts to use existing information from either the internal workers’ compensation database or from other sources to look for common problem areas to identify potentially hazardous jobs. Because AOC has not yet established an agencywide procedure to ensure that all jurisdictions perform at least a basic level of investigation and data gathering, it does not have the means for assuring that actual and potential causes of accidents will be abated. DuPont Safety Resources also found that AOC could improve its investigation process, and in 1998, the Office of Compliance recommended that AOC develop a system to routinely investigate accidents or hazardous situations and to ensure that hazards are corrected. AOC has recognized the need to have better information on problem jobs and is beginning to make several improvements in this area. For example, AOC has contracted with PHS to conduct agencywide job hazard analyses. Eventually, this information on job hazards will be integrated with the agency’s Computer Aided Facility Management System, although AOC has not set a date for when this will be accomplished. Also, AOC has procured a data system—the Facility Management Assistant system—that it plans to use for recording and monitoring the results of inspections. According to AOC safety officials, this system should help safety personnel identify potential problem areas. However, this system is not scheduled for full implementation until later in fiscal year 2003. 
Finally, as a part of its long-term effort to develop its 43 specialized safety programs, AOC has included at least 2 programs, scheduled to be implemented by the end of fiscal year 2005, that will address “Mishap Prevention and Reporting” and “Hazard Abatement and Inspections,” but these programs have yet to be developed or approved. In the meantime, at the recommendation of DuPont Safety Resources, AOC has convened several work groups composed of safety and other relevant staff to help improve accident and near-miss reporting and investigations, which we hope will guide AOC’s efforts to develop an agencywide system for conducting investigations and follow-up. AOC has adopted a compliance-based approach to providing safety training to its employees. However, this type of training is not sufficient, in itself, to achieve AOC’s long-range goal of instilling safety as a basic organizational value. In fiscal year 2001 alone, AOC reported that it provided over 13,000 hours of formal training to its employees. Most of this training is driven by federal safety and health regulations, which provide the basis for AOC’s 43 specialized safety programs. This safety training, covering such topics as asbestos management, is offered by or through AOC’s HRMD. AOC safety specialists and supervisors have also provided informal training—such as general safety awareness talks—to frontline staff in the jurisdictions. These efforts were acknowledged in our focus groups, as almost all of the focus group participants reported receiving safety training in the last 12 months. In addition to helping AOC achieve compliance, training should support AOC’s safety goal of changing workplace culture to increase staff awareness, commitment, and involvement in safety and health. 
Comments from DuPont Safety Resources’ representatives and some AOC safety specialists suggest that in order to change the safety culture, AOC could target its safety awareness training so that it better motivates employees at all levels to incorporate safety into all aspects of their work. Many focus group participants reported that they did not understand how some of the training provided was pertinent to their work. Once AOC has gathered the safety data it needs to help it assess the areas of highest risk for hazards, injuries, accidents, and illnesses, AOC’s safety training could also be targeted to address these high-risk areas. A comprehensive approach to evaluate the effectiveness of training includes assessments of changes in employee behaviors and how the training influences organizational results. While AOC performs quality control assessments for each course offered, it has not evaluated the overall effectiveness of its training activities to determine if they are helping AOC achieve a safer workplace and improving the safety culture. In this regard, as noted above, the majority of the formal training provided is required by federal safety and health regulations. Although AOC routinely obtains feedback from employees and subject matter experts on the quality of individual courses, it makes little effort to evaluate whether these courses are changing AOC employees’ work habits, so it is not clear whether this training is helping to change the workplace culture. AOC safety and HRMD staffs have not yet established a systematic process to identify training needs for individual employees to help ensure the safety program’s success. Instead, jurisdictional safety specialists, working with HRMD, are developing this training on an ad hoc basis. For example, according to the House jurisdiction safety specialist, supervisors needed additional skills to fully understand their role in the safety program. 
The House jurisdiction worked through HRMD and the National Safety Council of Maryland to deliver this type of training to supervisors in the House. Also, the procedures and responsibilities for monitoring training requirements for the safety program are not well defined. Currently, the HRMD staff, the central safety office staff, jurisdictional safety specialists, and frontline supervisors share responsibilities for monitoring safety training. HRMD maintains a central record of AOC-sponsored training courses and employees’ training attendance but does not identify when employees need training. As a result, jurisdictional safety specialists and frontline supervisors must determine when employees need required training and ensure that they receive such training. For example, jurisdictional safety specialists are tracking this information themselves using individual systems, thus leading to inconsistencies across jurisdictions and potentially duplicative record-keeping activities. AOC’s draft Safety Master Plan refers to a “tickler” that, once developed, is to be included in the central training system and will identify training needs for individual employees. This tool, in addition to a system that inventories employees’ certifications and licenses, should be valuable in helping AOC employees stay abreast of their safety training needs and requirements. AOC’s medical management activities are carried out by several offices with no central coordination, so valuable information about workplace injuries and illnesses is not routinely shared or best used to target prevention efforts. Overall, AOC’s medical management activities are aimed at reducing the incidence and severity of work-related injuries and illnesses and controlling workers’ compensation costs, which have changed little over the last several years. (See figure 2.) 
AOC has partnered with the congressional Office of the Attending Physician (OAP) to conduct OSHA-mandated medical examinations for AOC employees exposed to hazardous substances, while HRMD has developed a return-to-work program that offers modified-duty assignments to enable recovering employees to return to work as soon as practical. HRMD also provides active outreach to AOC employees to keep them informed about their rights and duties with respect to the federal workers’ compensation program. In addition, HRMD follows up on reports of program abuses through private investigations and ongoing contact with the Department of Labor’s Office of Workers’ Compensation Programs. Although these activities generally support AOC’s safety program, we have observed a lack of clarity regarding the roles of the many offices involved in these efforts. Medical management activities typically involve a number of separate entities, including human resources staff, health care providers, occupational health and safety experts, employees, and managers. To be effective, these activities require a high level of coordination among these entities. However, the lack of clarity at AOC has led to a limited exchange of important information that could be used to improve the safety program’s performance. In particular, the role of OAP could be more clearly defined and expanded, in accordance with the 1998 Memorandum of Understanding between AOC and OAP. OAP provides primary care and emergency, environmental, and occupational health services in direct support of members of the Congress, their staffs, pages, visiting dignitaries, and tourists. As specified in the Memorandum of Understanding, OAP conducts OSHA-mandated medical examinations for AOC employees exposed to hazardous substances, provides first aid for many AOC employees, and approves modified-duty assignments for recovering AOC employees. 
However, the Memorandum of Understanding allows a broader role for OAP in providing medical expertise, which could potentially include providing valuable data on the hazards causing injuries and illnesses at AOC, providing trend information on the results of medical examinations, and helping AOC standardize reporting procedures. According to the Director of AOC’s Safety, Fire, and Environmental Programs, as many as 30 percent of AOC’s reported injuries are probably not serious enough to warrant medical treatment. However, it is difficult to determine the severity of reported injuries without better injury data, underscoring the need for standardized reporting procedures. OAP could be instrumental in helping AOC develop these procedures. AOC central safety staff and HRMD could coordinate more to facilitate the exchange of information to further control workers’ compensation costs. In particular, HRMD staff uses injury data primarily for processing workers’ compensation claims, but the central safety office does not systematically or routinely analyze these data to better understand and address the causes of injuries and illnesses. Also, superintendents do not routinely receive data on the costs associated with injuries in each jurisdiction, so they are not fully aware of them. Having these data would help AOC hold these managers accountable for reducing these costs. Furthermore, although HRMD encourages supervisor involvement in identifying and overseeing modified-duty assignments that would enable AOC to engage injured workers in productive work to reduce injury costs, some jurisdictional staff we spoke with generally do not feel it is their responsibility to do so. One way to ensure that information is fully disclosed and analyzed is to provide a regular forum, such as a work group of superintendents, HRMD staff, OAP staff, and safety specialists, to discuss new and ongoing claims. 
This strategy has been adopted by the Department of Defense (DOD) and has proved to be useful in managing workers’ compensation claims and costs, according to DOD officials who specialize in this area. By focusing management attention on workers’ compensation claims and costs, AOC may provide a clearer incentive for staff at all levels to be more actively involved in modified-duty assignments and in other safety activities. AOC has taken significant steps toward implementing the necessary components of an effective worker safety and health program, and the level of effort it has devoted to worker safety is unquestionable. However, achieving a safer workplace at AOC will depend in part on AOC’s ability to integrate the safety goals in its draft Safety Master Plan with the strategic goals in its draft Strategic Plan to bring about long-term cultural change so that there is little tolerance for unsafe work practices. AOC’s potential to realize success is greater if it develops safety goals and measures that are fully integrated with AOC’s other agencywide goals; this is the best way to ensure that management and employees are clear about where safety stands in relation to the many other work priorities AOC faces every day. For example, in order to ensure that AOC achieves its fiscal year 2005 completion target for the 43 specialized safety programs, we believe that identifying interim milestones and measures would help AOC assess its progress in achieving its target. AOC could also benefit from having clearly defined and documented policies and procedures for reporting hazards, much like those that exist for injuries and illnesses, for this is the best way to ensure that AOC fully understands problem areas. There is also merit to having consistent procedures for conducting investigations and follow-up, so AOC will be assured that potential hazards are being addressed consistently in all jurisdictions. 
Regarding its safety training and medical management activities, AOC has made initial efforts to incorporate the knowledge and skills of various offices to help the safety program. Nonetheless, there are untapped resources within AOC that could be better utilized to help the safety program achieve its goals. For example, training that is more directly linked to AOC’s goal of adopting a safety culture, as well as more effective assessments of that training, will help AOC achieve its goals more efficiently. Also, AOC could benefit from a clearer definition of responsibilities for tracking and recording training that is received. AOC could also make better use of OAP’s resources. There are a number of additional functions OAP can provide for AOC that we believe are consistent with the current Memorandum of Understanding, such as providing valuable data on the hazards causing injuries and illnesses at AOC. We also believe that a senior-management group that routinely discusses workers’ compensation claims and costs will help highlight these issues to all managers, and ultimately make managers more accountable for reducing these costs. By taking advantage of these opportunities, AOC could ensure that these medical management activities are better linked to the goals of the safety program and the overall mission of the agency. 
To enhance AOC’s ongoing efforts to establish a strategy for the worker safety program by establishing safety program goals that are fully integrated with AOC’s agencywide goals, we recommend that the Architect of the Capitol identify performance measures for safety goals and objectives, including measures for how AOC will implement the 43 specialized safety programs and how superintendents and employees will be held accountable for achieving results; establish clearly defined and documented policies and procedures for reporting hazards similar to those that apply to injury and illness reporting; establish a consistent, AOC-wide system for conducting investigations and follow-up; establish a safety training curriculum that fully supports all of the goals of the safety program and further evaluate the effectiveness of the training provided; assign clear responsibility for tracking and recording training received by AOC employees, including maintaining an inventory of employees’ certifications and licenses; clarify and explore the possibility of expanding the role of OAP in helping AOC meet its safety goals, consistent with the broad responsibilities laid out in the 1998 Memorandum of Understanding between AOC and OAP; and establish a senior management work group that will routinely discuss workers’ compensation cases and costs, and develop strategies to reduce these injuries and costs. AOC is responsible for the maintenance, operation, preservation, and development of the buildings and grounds primarily located within the Capitol Hill complex. The historic nature and high-profile use of many of these buildings create a complex environment in which to carry out this mission. As a part of that mission, AOC is responsible for making all necessary capital improvements within the complex, including major renovations and new construction. 
Over the next few years, four high-profile capital projects are expected to cost over a half billion dollars: the $265 million Capitol Visitors’ Center project, the $122.3 million Supreme Court Modernization project, the $81.8 million West Refrigeration Plant Expansion project, and a combined $72 million for the House and Senate Perimeter Security projects initiated following the events of September 11, 2001. The magnitude of AOC’s recent projects and the recent growth in annual appropriations highlight the importance of managing this large portfolio of projects according to leading industry practices. As shown in figure 3, AOC’s annual appropriations for capital projects have increased over the last 10 years from $23.6 million in fiscal year 1994 to $190.3 million in fiscal year 2003. AOC’s capital appropriations peaked in fiscal year 2001 at $279 million due to a $244 million emergency supplemental appropriation following the terrorist attacks—over a 700 percent increase above the original capital appropriation. The growth in capital appropriations is most evident in the last 5 years: the average capital appropriation from fiscal years 1999 through 2003 was $186.5 million, while the average capital appropriation over the previous 5 years was $35.6 million. AOC’s Office of the Chief of Design and Construction (OCODC) is responsible for the planning, design, and construction of capital projects vital to achieving the agency’s mission. The office also provides technical assistance to AOC jurisdictions as they handle their day-to-day operations. The office is divided into separate divisions that provide the direct and indirect services that are required throughout a project’s life cycle: (1) Architecture Division, (2) Engineering Division, (3) Construction Management Division, and (4) Technical Support Division. The office also has a Planning and Programming Division, which is not currently staffed, and there is a proposal for a separate Project Management Division. 
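The fiscal year 2001 appropriation figures cited above can be sanity-checked with simple arithmetic. The "original" appropriation below is inferred by subtracting the supplemental from the peak; that subtraction is our assumption for illustration, not a figure stated in the text.

```python
# Sanity check of the fiscal year 2001 capital appropriation figures
# (dollar amounts in millions, taken from the text).
peak_fy2001 = 279.0   # total fiscal year 2001 capital appropriation
supplemental = 244.0  # emergency supplemental after September 11, 2001

# Inferred original appropriation (peak minus supplemental) -- an
# illustrative assumption, not a figure stated in the text.
original_fy2001 = peak_fy2001 - supplemental
increase_pct = supplemental / original_fy2001 * 100
print(f"Inferred original appropriation: ${original_fy2001:.0f} million")
print(f"Supplemental relative to the original: {increase_pct:.0f} percent")
```

On these assumptions, the supplemental works out to roughly a 700 percent increase over the original appropriation, consistent with the figure quoted above.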
As of May 2002, OCODC had a total of 128 full-time equivalents, excluding Davis-Bacon workers assigned to the Construction Branch. Responsibility for the management of individual projects, including schedule, budget, scope, and quality, primarily falls to architects and engineers who are assigned as project managers. However, AOC jurisdiction staff can also be assigned as project managers for capital projects within their jurisdictions. As of July 2002, AOC had 83 individuals—58 from the Office of Design and Construction and 25 from AOC jurisdictions—listed as project managers in some capacity. The majority, however, are not dedicated solely to the task of project management. AOC supplements its staff by contracting for many of the design, construction, and construction management services. AOC divides capital projects into four categories: small capital projects—those valued at less than $250,000 and estimated to take an average of 1 year to complete; medium capital projects—those valued from $250,000 to $5 million and estimated to take an average of 3 years to complete; large capital projects—those valued at more than $5 million and estimated to take an average of 5 years to complete; and large capital projects with construction managers—those valued at more than $20 million and estimated to take an average of more than 5 years to complete. As of June 2002, AOC’s workload consisted of 30 small capital projects, 94 medium capital projects, 12 large capital projects, and 4 large capital projects with construction managers. This does not include hundreds of other projects, such as floor plan redesigns, sketches, and jurisdiction-funded projects that are a core part of OCODC responsibilities. AOC recognizes that a disciplined project management process can help it complete capital projects on schedule, on budget, within scope, and of the highest quality. 
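The four project-size categories described above can be expressed as a simple threshold rule. The function below is an illustrative sketch using the dollar values from the text; it is not a tool AOC actually uses, and the function name is our own.

```python
def categorize_capital_project(value_dollars, has_construction_manager=False):
    """Classify a capital project by estimated value, using the dollar
    thresholds described in the text (illustrative helper only)."""
    if value_dollars < 250_000:
        return "small"    # average of 1 year to complete
    if value_dollars <= 5_000_000:
        return "medium"   # average of 3 years to complete
    if value_dollars > 20_000_000 and has_construction_manager:
        return "large with construction manager"  # more than 5 years
    return "large"        # average of 5 years to complete

# Hypothetical example values, not specific AOC projects.
print(categorize_capital_project(100_000))
print(categorize_capital_project(2_500_000))
print(categorize_capital_project(30_000_000, has_construction_manager=True))
```

Note that a project over $20 million without a dedicated construction manager still falls in the plain "large" category under this rule.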
In 1999, AOC initiated several reviews of its project planning and delivery processes by independent consultant firms. The goal of the reviews was to streamline the agency’s processes and staff organization based upon “best practices” drawn from AOC and industry, as well as to address a management concern that a lack of continuity in project management resulted in a loss of effectiveness and efficiency in overall project delivery. As a result of these initiatives, AOC sought to create a consistent process to which all projects and project managers adhere; create a system where project managers are dedicated to individual projects from “cradle-to-grave”—that is, from a project’s initiation to its completion; and increase the use of consultants to reduce the burden on in-house staff. According to AOC, the best practices process took a year to develop and a year to implement. The policies and procedures are codified in various manuals produced and updated by AOC since 1999, two of which have recently been finalized. Since then, AOC has continued to review its best practices initiatives. For example, the consulting firm CenterLine Associates is assessing how the best practice standards and procedures have been applied across five capital projects. The effort is expected to identify improvements that can be incorporated into AOC standards, policies, and procedures, as well as identify areas to be covered in future project-delivery training sessions. AOC is also implementing a formal process for planning and budgeting for its capital projects, adapted from the DOD’s military construction budgeting process, that is intended to clearly define requirements and priorities, and requires that requests be reviewed, validated, and approved before submission to the Congress. This formal process augments the policy requiring a 100 percent complete design before AOC requests construction funds. 
As AOC moves forward with its project management initiatives and consistent with the strategic management framework discussed in chapter 2, AOC needs to ensure that it has the overall infrastructure in place to effectively implement and take full advantage of the best practices that are designed to improve project planning, design, and construction management: master planning for the Capitol Hill complex, a transparent process to prioritize projects, a strategy and tools to communicate, outcome-oriented goals and performance measures, proper alignment of staff and resources, and strategic human capital management. Without these elements, AOC and the Congress have no assurance that the project management initiatives are being employed to their fullest potential. Consequently, AOC cannot be assured that the capital projects it is managing can be completed on schedule, on budget, and within scope and are of high quality and meet the needs of their customers. As with other critical management issues, the sustained commitment of top leadership will be vital to the success of AOC’s project management initiatives. On an ongoing basis, AOC leadership must set the clear expectation that staff adhere to the established best practice policies and procedures and then hold project management staff and contractors accountable for meeting this expectation. However, several project managers stated that they rely more heavily on their own experiences than on the specific policies and procedures laid out in the project manager manuals. One AOC official with project management responsibilities specifically noted, with respect to the best practices, that he did not know “if any of it was required,” but that if something was required he would be doing it. AOC officials responsible for overall project management can also show leadership by initiating a shift in the way AOC supervises its projects from solely focusing on crisis management to more active oversight.
Several senior OCODC officials noted that their principal supervisory role is to resolve problems faced by project management staff. Interaction between project managers and senior management then is often limited to times when they “kick problems upstairs.” This reactive approach leaves open the possibility that other risks or opportunities exist that are not being addressed. We are not suggesting, however, that AOC supervisors engage in micromanagement of the project managers’ day-to-day activities. Rather, more active supervision would help ensure that project managers are held accountable for following best practices, achieving measurable results or outcomes, meeting the needs of clients, and communicating routinely with project stakeholders—both internal and external. AOC’s best practice initiatives are intended to begin with a planning process that incorporates four sets of plans: a 20-year master plan, a 10-year facility assessment, a 5-year capital spending plan, and a 1-year jurisdiction plan. All capital projects are supposed to be consistent with those planning efforts, except for projects requested by individual members of the Congress outside of the normal budget cycle. However, AOC does not yet have a master plan or a facility assessment plan, nor does AOC have formalized capital spending or jurisdiction plans. In July 2001, at the direction of the Senate Committee on Appropriations, AOC contracted with the National Academy of Sciences (National Academy) to hold a planning workshop to determine the scope of a Capitol Hill complex Master Plan. Based on the results of the workshop, which was held September 23 through 24, 2002, AOC will develop a request for proposal for the master plan. While these initial efforts are positive steps, the overall effort has been slow to take shape given that the workshop took place over a year after the Senate’s directive.
Moreover, as stated in chapter 2, AOC needs to tie its various long-term planning initiatives, including the master planning effort, to the agencywide strategic planning effort and obtain stakeholders’ input throughout the process. This message was reinforced by participants in the National Academy’s workshop, who explained that the master plan must be guided by a vision statement for the Capitol Hill complex, which is developed with stakeholder input and consistent with AOC’s strategic plan. A key component of a master plan is building condition assessments (BCA), which are systematic evaluations of an organization’s capital assets. Such assessments will help AOC to “evaluate deferred maintenance and funding requirements; plan a deferred maintenance reduction program; compare conditions between facilities; establish baselines for setting goals and tracking progress; provide accurate and supportable information for planning and justifying budgets; facilitate the establishment of funding priorities; and develop budget and funding analyses and strategies.” A number of AOC executives agreed that BCAs are a necessary first step to a comprehensive preventive maintenance program. However, according to AOC officials, AOC has never completed formal condition assessments of the facilities it is responsible for maintaining. Senior AOC and jurisdiction executives also stated that preventive maintenance of AOC’s assets has never been a major focus. According to AOC officials, AOC’s recent pilot effort to conduct an assessment of the Capitol Building was unsuccessful due to miscommunication of expectations between the agency and the contractor performing the assessment. Without BCAs, the agency has no assurance that it has fully documented the Capitol Hill complex’s preventive maintenance needs and cannot develop an overall plan with which to address those needs. 
As a result, AOC is unable to assure the Congress that the facilities in the Capitol Hill complex will be effectively and efficiently maintained and preserved consistent with the historic and high-profile nature of those facilities. AOC recently formed a condition assessment team with representatives from each of the larger jurisdictions to develop a detailed statement of work that specifies exactly what is required of a BCA contractor. When conducted, the BCAs must be carried out consistently across all jurisdictions to help ensure that all assets are evaluated in the same manner and that AOC-wide priorities can be set and trade-offs made. The project manager focus group participants also pointed out that the BCAs will require substantial involvement from employees of many of the jurisdictional shops who will be asked to provide technical information, logistical support, and other forms of assistance to the assessment teams. Therefore, AOC must also plan for and set aside resources required by AOC jurisdictions for the effort. According to the National Research Council, using a risk-based approach, the initial assessments should focus on life, health, and safety issues and on critical building system components needed to operate effectively. AOC also needs staff dedicated to ensuring that the master plan and building condition assessments are successfully completed. AOC officials told us they had hired a new Director of Facilities Planning and Programming in early December 2002. AOC officials stated that they are in the process of hiring an assistant planner and an assistant programmer. Because these individuals and the office would be the champions for the master planning and building condition assessment efforts, it is important that AOC fully staff the office with qualified individuals. An agencywide strategic plan and a complexwide master plan will help AOC determine priorities and then communicate them both internally to employees and externally to clients. 
In the absence of a strategic plan and a master plan to help determine overall priorities, AOC does not have a transparent process to prioritize its current projects. In the near term, a transparent process that incorporates stakeholder input would allow AOC to prioritize projects in a well-documented manner. In the long term, AOC would be able to integrate the guidance of the strategic plan and master plan within a transparent priority-setting process. AOC assigns a priority designation for each of the projects in its appropriations request—1-A, 1-B, 1-C, and 2-A, 2-B, and so on through 3-C at the lowest end of the priority scale. These priorities are further categorized as Life Safety, Americans with Disabilities Act, Security, Cyclical Maintenance, Improvement, and Technology-Management Systems. According to an OCODC official, each jurisdiction prioritizes capital projects and safety programs on a building-by-building basis for the coming fiscal year. Priorities are determined based on the subjective decisions made by jurisdiction officials and not on predefined criteria. The priorities are then converted into the 1-A, 1-B, etc., priority designations by the Budget Office staff. However, based on our review, it is not clear that project managers use this prioritization scheme to guide their day-to-day activities. The only practical, day-to-day prioritization of projects we found being used was a “hot projects” list. Projects were placed on the list by a group of senior OCODC officials who based their decisions on two undefined, subjective criteria: (1) time sensitivity and (2) high dollar volume. According to one AOC official, however, the process for placing priority projects onto the list is neither formal nor consistently applied. In fact, the official stated that the current hot project list needs to be updated to reflect fiscal year 2003 projects.
We also found a general consensus among AOC officials and project managers that prioritization of projects is a major weakness at the agency. Many lamented that AOC is unable to manage client requests for projects effectively. More specifically, AOC lacks a process that can communicate, both internally and externally, the trade-offs in prioritizing one project over another or how individual projects fit within a broader AOC framework. The confusion about overall agency priorities has also led to confusion about what individual priorities should be. Upon establishing priorities, AOC must then incorporate the communication of priorities and progress of projects within an agencywide communications strategy. Internally, that means AOC needs to communicate its priorities to staff and provide details on how related projects are linked to one another. However, we found that AOC lacks the project management tools necessary to assist in doing these tasks. For example, officials responsible for overall project management use the Project Information Center (PIC) system to prioritize work and ascertain the progress of individual projects. However, PIC is not capable of producing a unified document that shows schedules of active projects, their interrelationships, and required staffing. Without a resource-loaded master project planning document, it is difficult to determine the effect of priority changes and to quantify project manager staffing requirements. AOC also needs to communicate the agency’s overall priorities to its clients and report progress on projects of importance to clients. The strategic and master planning efforts and BCAs discussed above will assist AOC in determining its project priorities. As discussed in chapter 2, an effort to establish congressional protocols could also help the agency determine how those priorities should be communicated, as well as how individual project priorities will be reported. 
AOC has made strides in communicating with its clients on the progress of projects. For example, AOC has developed a web site that includes a Capitol Hill complex map of several ongoing projects. However, opportunities exist for additional progress in how AOC communicates with clients and reports progress to the Congress. Also, at the direction of the Senate Committee on Appropriations, AOC has begun issuing quarterly capital project reports on the status of all ongoing capital projects. However, the reports we reviewed described the status of all ongoing capital projects without highlighting those projects that were behind schedule, over budget, or otherwise of interest to clients. AOC needs to begin to incorporate stakeholder feedback to better structure this reporting mechanism. For example, in our April 2002 statement, we discussed the possibility of using a “reportable events” approach to accountability reporting that is based on predefined, risk-based events that would trigger a report to the Congress and prompt immediate attention. However, the information reported is only as good as the information entered into the PIC system, which is the source of all project-related information. We have found that the data produced by the system and reported out by AOC are questionable because project managers do not consistently update the information in PIC. For example, a majority of the participants in the project manager focus group said they failed to consistently put information into the system because they viewed PIC as an administrative burden that provided no direct benefit to their own day-to-day activities. Additionally, our case studies showed that the project managers do not always keep PIC completely updated. For example, the Senate Recording Studio project had a current working estimate listed in PIC that was nearly double the amount appropriated for the project.
Although we were told that the estimate was outdated, the information had not been updated in the PIC system. AOC officials recognize the inadequate data entry into PIC. In response, an OCODC official has recently met with all of AOC’s project managers to reinforce the importance of keeping PIC updated and to instruct them on how and what needs to be entered. While this is a positive step designed to improve the documentation of project information, AOC would benefit from more routine, systematic reviews of PIC data to uncover pervasive problems and their root causes. As we discussed in chapter 2, AOC needs to work with its stakeholders to determine its long-term strategic goals for project management and develop annual performance goals that provide a connection between long-term goals and the day-to-day activities of its managers and their staff. This effort will enable AOC to track its progress, provide critical information for decision making, and create incentives for individual behavior by providing a basis for individual accountability. In its draft strategic plan, AOC has identified facilities management and project management as two “focus areas,” and defined strategic goals for each focus area. However, AOC has not yet clearly defined the outcome-oriented goals and performance measures in each focus area. For example, as an outcome within the project management focus area, AOC lists “Projects and related services are executed and delivered on time and on budget.” To further clarify its goals, AOC could define terms such as “projects and related services” and establish quantitative performance measures for outcomes such as “on time” and “on budget.” Because AOC lacks specific measures, it is unclear whether AOC will be able to assess its current performance baseline, or how AOC will seek to improve.
For example, it is unclear to internal and external AOC stakeholders if AOC’s goal is to improve on-time delivery by a percentage point, or if it is to achieve some undefined standard. As AOC moves forward and establishes goals and measures for project management, it will be in a better position to consider how to balance competing needs, such as client satisfaction and quality against the need to meet deadlines and stay within budgets. In June 2002, AOC officials responsible for overall project management identified several changes that were needed to improve the delivery of capital projects. Primarily, AOC recognized that the current “soft matrix” approach of assigning mostly architects and engineers as project managers who are assisted by task leaders from various sub-disciplines was ineffective because, according to an AOC report on a proposed staff realignment within the Office of the Assistant Architect, there was no “clear objective, no supervisory authority that can exercise accountability over the Project Managers, and no clear lines of communication.” AOC officials responsible for project management proposed to senior AOC executives the creation of a new and independent Project Management Division, under strong leadership, to “improve accountability, enforce organizational discipline, focus on client service needs, and tailor the skills of existing staff to necessary tasks.” The proposed staff realignment, which is in its early stages of implementation, is a good step within the framework of implementing best practices, particularly the concept of dedicated, cradle-to-grave project managers. However, AOC must ensure that this and other interim steps ultimately support the agency in meeting mission-critical goals and objectives as it develops the agencywide strategic plan.
Moving forward with this realignment will require AOC to determine which individuals have the skills to be dedicated project managers, as well as to identify the specific projects they should manage. Officials within OCODC recognize that not all of the architects and engineers who are currently assigned as project managers have the requisite skills for the job. With qualified staff, however, the realignment will ultimately address accountability issues by clarifying roles and responsibilities and creating true cradle-to-grave project management staff. Many of the project managers in our focus group stated that they are currently being asked to wear “too many hats,” which often distracts them from their primary duty to manage projects, and wanted AOC to move more quickly to a dedicated project management staff environment. We also observed that the initially slow progress of the Relocation of the Senate Recording Studio project and the Coal Handling Modernization project improved once dedicated project managers were assigned. However, a missing component of the realignment proposal is the role of supervisors in the new project management division. AOC has not yet defined who will supervise the project managers, the number of supervisors that will be needed, or the approach they will take with respect to supervision. AOC needs to evaluate all of these issues and integrate the role of day-to-day supervisors into the new Project Management Division. As discussed in chapter 2, strategic human capital management can transform an agency into a results-oriented organization by aligning employee performance with goals and by providing tools to better plan its workforce needs. AOC has taken initial steps to address the strategic workforce analysis criteria set forth in chapter 2, by identifying its project management workforce needs in its staff realignment proposal.
That plan detailed tactical approaches to reassigning current project management staff and determining where additional staff would be placed within a restructured OCODC. Consistent with the strategic human capital challenges it faces in other areas, AOC has opportunities to strengthen its efforts for its project management workforce as well. Developing a set of core technical competencies for project management and implementing a training and development program for those competencies are two areas requiring particular attention. AOC has not developed project-management-specific technical competencies that define what project managers will be held accountable for. Defined competencies are important for ensuring that the right people are employed in the right positions and that they are routinely held accountable for their work. As a basis for developing these competencies, AOC can refer to standards developed by leading professional organizations. For example, the Project Management Institute (PMI) has published A Guide to the Project Management Body of Knowledge (PMBOK) that organizes the components of project management into nine knowledge areas: project integration management, project scope management, project time management, project cost management, project quality management, project human resources management, project communications management, project risk management, and project procurement management. Other entities have successfully used these knowledge areas as the basis for developing technical competencies. For example, the Australian government uses PMBOK as the basis for its National Competency Standards for Project Management. As a next step, AOC could identify and implement training programs that are linked to the core and technical competencies required of project managers. Doing so is an essential component of building an effective and professional project management staff.
To date, it is unclear whether AOC’s training fully supports the implementation of best practices throughout the agency. For example, some project manager focus group participants noted that they were not given initial orientation that familiarized them with AOC, including services provided by other offices, or that familiarized them with their ultimate client—the Congress. And if AOC is to effectively implement best practices, newly hired project managers must be trained in the policies and procedures and all project managers must receive ongoing best practice training as policies and procedures are revised. We found that neither of the project managers for the case studies we reviewed was provided best practice training when first hired, and one was not provided a copy of the project manager manual. AOC officials also stated that they have not yet provided updated best practices training sessions for project managers, although they said they plan to use the ongoing best practices assessments mentioned above to tailor such training. Finally, AOC should require professional development certification and training, although this may not need to be provided internally by AOC. PMI administers a globally accepted and recognized professional certification program for project managers, which requires a specific level of education and experience, adherence to a code of professional conduct, successful completion of an examination, and ongoing continuing education requirements. As AOC moves forward with its project management initiatives, several elements are critical to the thorough implementation of best practices that are designed to improve project planning, design, and construction management. Project management could be improved by demonstrating top leadership commitment to change, planning, establishing outcome-oriented goals, and strategically managing human capital to achieve those goals.
A Capitol Hill complex-wide master planning effort, including building condition assessments, will help AOC establish long-term priorities. Similarly, a transparent process to prioritize agency capital projects will help AOC clarify its short-term (1 to 5 years) focus. As a part of a broader communication strategy, effective reporting mechanisms will help AOC convey these long- and short-term priorities, as well as detail the progress of projects to stakeholders. Clearly defining project-management-related measures will also help AOC achieve mission-critical strategic and annual performance goals. Finally, the alignment of project management staff and resources in accordance with best practices policies and procedures will help institutionalize those practices and help AOC meet mission-critical goals. Without these elements, AOC and the Congress have no assurance that the project management initiatives are being employed to their fullest potential. Consequently, AOC cannot be assured that the capital projects it is managing can be completed on schedule, on budget, and within scope and are of high quality and meet the needs of their customers. To improve project management—project planning, design, and construction—at AOC, we recommend that the Architect of the Capitol develop a Capitol Hill complex master plan and complete condition assessments of all buildings and facilities under the jurisdiction of AOC; develop a process for assigning project priorities that is based on clearly defined, well-documented, consistently applied, and transparent criteria; develop tools to effectively communicate priorities and progress of projects, as a part of a broader communication strategy; define project-management-related performance measures to achieve mission-critical strategic and annual performance goals; and align project management staff and resources with AOC’s mission-critical goals.
Programs that separate and collect recyclable materials from the waste stream produce numerous benefits. It is estimated that recycling 1 ton of paper saves 17 mature trees, 3.3 cubic yards of landfill space, 7,000 gallons of water, 380 gallons of oil, 4,100 kilowatt hours of energy, and 60 pounds of air pollutants. Recently, AOC has taken several steps to improve the effectiveness of its office recycling programs; however, it could increase the benefits derived from its recycling program by taking a more strategic approach. Such an approach would include revisiting and clarifying recycling mission and goals as part of an AOC planned environmental strategy, measuring and monitoring performance against goals to gauge and improve program effectiveness, and reexamining the roles and responsibilities of the recycling program staff to ensure accountability for achieving recycling goals. We provide observations on how AOC could improve recycling results by organizationally replicating its own and others’ best practices. AOC is responsible for implementing recycling programs for much of the Capitol Hill complex. Consistent with the preliminary observations in our April 2002 statement, AOC, both centrally and at the jurisdiction level, has taken recent steps to improve the overall effectiveness of its recycling programs. Some of the steps include adopting a consultant’s recommendation to simplify the Senate’s recycling program to improve participation and increase effectiveness, developing a draft set of performance indicators and starting to collect data on them, increasing recycling promotion and education efforts, surveying recycling clients in the House to determine if the program is meeting their needs, and sharing information on recycling promotion and education strategies between the House and Senate recycling program managers. Office recycling programs can have a variety of environmental and financial benefits.
A typical goal is reducing to the extent possible the amount of solid waste sent to landfills. Another typical goal is generating as much revenue as possible from the sale of the recyclable materials collected. A key to achieving either goal is making the recycling program as easy as possible for employees to use. Generally, the less sorting, decision making, and walking required by individual participants, the more successful the program will be. Although the two goals of waste reduction and revenue generation are not mutually exclusive, the relative importance placed on these goals generally affects the design of the recycling program implemented. Specifically, a recycling program with the goal of generating revenue, commonly referred to as a source separation program, is more complicated, expensive, and difficult to implement than a program designed for waste reduction. This is because separating a greater variety of recyclable materials at the source requires more resources for educating clients and the recycling staff, collecting the recyclable materials, and monitoring for compliance. The complexity of source separation, unfortunately, also increases the likelihood of contamination of the recyclable materials collected (potentially recyclable materials are mixed together with other categories of recyclables or wet waste), reducing their value and increasing the volume of waste sent to landfills. Given the complexity and potential performance problems with a source separation program, an organization needs to analyze the costs and benefits of such a program compared to other, simpler options to determine whether such a program will be cost-effective. High levels of contamination have prevented the House and Senate recycling programs from substantially achieving either waste reduction or revenue generation. 
AOC’s recycling contractor does not pay for high-grade (e.g., white copy) paper with greater than 5 percent contamination or mixed-grade (e.g., glossy or colored) paper with greater than 10 percent contamination. From fiscal years 2001 through 2002, AOC did improve its recycling results. According to General Services Administration (GSA) data, the rate of contamination of recyclable paper products collected dropped from 70 percent in fiscal year 2001 to 55 percent in fiscal year 2002 in the House jurisdiction and from 60 percent to 37 percent in the Senate jurisdiction. However, although AOC avoided the cost of disposing of the waste, the contaminated materials generated no revenue. The recycling contractor may sort and recycle some of this contaminated waste, but some potentially recyclable materials may be too contaminated and will ultimately go to a landfill. During fiscal year 2002, the Senate jurisdiction implemented a consultant’s recommendation to change from a source separation to a simpler combined-paper recycling program. According to the consultant’s report, simplifying the program by reducing the amount of source separation required could both increase revenue and decrease the contamination levels. In contrast, the House jurisdiction continues to operate a more complex source-separation program. Similar to the conclusions made in the review of the recycling program operations for the Senate program, a recently completed consultant study of the House program made the point that a mixed-paper program is easier to administer and usually leads to increased participation, decreased contamination, and less collection time. However, the consultant’s report did not recommend making any changes to the House’s program at this time because it found the existing program to be “user-friendly” and accepted. 
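The contractor's payment rule described above reduces to two contamination thresholds. A minimal sketch of that check (the names and data layout are illustrative assumptions, not AOC's or the contractor's actual accounting logic):

```python
# Payment rule from the report: the contractor does not pay for high-grade
# paper over 5 percent contamination or mixed-grade paper over 10 percent.
# Function and variable names are assumptions for illustration.
CONTAMINATION_LIMITS = {"high-grade": 0.05, "mixed-grade": 0.10}

def load_is_payable(grade: str, contaminated_tons: float, total_tons: float) -> bool:
    """True if a collected load's contamination rate is within the payment limit."""
    rate = contaminated_tons / total_tons
    return rate <= CONTAMINATION_LIMITS[grade]
```

At the 55 percent contamination rate GSA reported for the House in fiscal year 2002, for example, a typical load would generate no revenue even though AOC avoided the disposal cost.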
Nonetheless, given the high rates of contamination in the House recycling program, AOC needs to closely monitor contamination to determine if a simpler program design is warranted. AOC’s goals for its recycling programs are unclear. AOC has not documented a mission or goals for its recycling programs. We found various references—albeit indirect and inconsistent—to AOC recycling goals. For example, a 1999 audit by the AOC Inspector General indicated that AOC is pursuing the goal of waste reduction. A similar goal is indicated in the position description of the AOC Resource Conservation Program Manager. In contrast, the position descriptions for the House and Senate recycling program managers state that these managers are responsible for, among other things, increasing the financial returns of their programs. If AOC’s goal is to generate as much revenue as possible through a source separation program, then based on the high rate of contamination it will need to design a program that is much more aggressive in terms of the education, training, and equipment it provides to participants and the collection staff. However, if the goal is reducing the volume of waste sent to landfills, then AOC should implement a simpler program, requiring as little separation as possible to increase participation and compliance as was done in the Senate. In addition, AOC has made some effort to expand its recycling program to other facilities within the Capitol Hill complex, such as the Botanic Garden, the Page Dormitory, and—in response to our recent suggestion—the Capitol Power Plant. Furthermore, according to AOC officials, AOC recycles fluorescent lamps, batteries, scrap metal, and some computer equipment and has required its contractors to recycle their construction debris. However, it has no formal plans to expand its recycling programs to include other types of recyclable materials, such as waste from its own landscaping or construction activities.
Incorporating these materials into its overall recycling program could improve AOC’s performance in reducing waste sent to landfills. However, AOC management stated that adequate resources are not presently available to carry out such expanded recycling programs, although AOC has requested funding for an additional position in fiscal year 2003 to assist the recycling program manager, allowing for further expansion of the recycling program. AOC recycling program staff recently discussed their view that the mission of the recycling programs ought to be primarily reducing waste sent to landfills rather than maximizing recycling revenues. AOC management stated that it would be important to obtain input from congressional stakeholders before making any changes to the mission or goals of the program. Furthermore, clarifying the goals of the program is something AOC management would address only as part of the long-term environmental management plan for the Capitol Hill complex that it plans to undertake after completing its Safety Master Plan. Consistent with the communication strategy we outline in this report, AOC will need to seek input from its stakeholders to determine the most appropriate mission and goals for its recycling program or programs. Whether the resulting program is Capitol Hill complex-wide or is tailored to meet the specific requirements of the House or Senate, AOC needs to clarify whether the primary focus of the recycling program is to reduce the total amount of waste sent to landfills, to generate a desired level of revenue, or both. As discussed in our April 2002 statement, to support the accomplishment of AOC’s recycling mission and goals, a performance measurement system should (1) show the degree to which the desired results were achieved, (2) be limited to the vital few measures needed for decision making, (3) be responsive to multiple priorities, and (4) establish accountability for results.
Also, as part of its responsibility for handling waste from government facilities, including recyclable materials, GSA has developed a guide that describes a number of steps an agency can take to measure and monitor recycling efforts; these steps could be useful to AOC in developing its system. The steps are listed in table 3. In response to the Senate Committee on Appropriations’ requirement for quarterly updates on the recycling program in the Senate, AOC developed a performance measurement system that it is using to monitor both the Senate and the House recycling programs. Initially, the indicators on which AOC collected data included, among other things, tonnage recycled, revenue generated from the sale of recyclables, market prices for various recycled materials, customer satisfaction, education of participating offices, results of desk-side container inspections, status of equipping offices with recycling containers, rate of office participation, and training of recycling collection staffs. Consistent with the preliminary observations in our April 2002 statement, AOC significantly reduced the number of indicators it is collecting and reporting to two: total tonnage collected by type of material and total tonnage contaminated. This more focused approach to measuring the effectiveness of its program is noteworthy. As AOC revisits its program mission, goals, and design, it will have opportunities to reexamine and refine its performance measurement efforts to ensure that it has the right set of performance measures to support program monitoring and decision making. Because AOC has not established recycling program goals, however, its measures cannot be linked to a desired level of performance, and AOC cannot demonstrate the extent to which desired performance is being achieved. For example, AOC seeks to decrease contamination rates for recyclable materials collected, but it does not state a goal for a desired level of contamination against which to measure progress.
As shown in table 3, steps 2 and 3, AOC should determine how much waste the Capitol Hill complex generates overall and analyze how much of that waste could be recycled. AOC officials have told us that they plan to conduct such an analysis as part of AOC’s future long-term environmental management plan and to use the information to form the basis of AOC’s overall waste reduction goals. Furthermore, AOC should develop its performance measurement system with input from recycling program staff members to ensure that the data gathered will be sufficiently complete, accurate, and consistent to be useful in decision making. As AOC clarifies its goals and performance measures for its recycling program, it will likely identify opportunities to create a balanced set of measures that respond to multiple priorities, such as increasing customer satisfaction while also achieving recycling performance goals. Consistent with our preliminary observations, AOC recycling program staff have begun surveying their clients to obtain feedback on their satisfaction with the program. This performance information could be a useful addition to the set of measures AOC is currently collecting and monitoring. After establishing its mission and goals and building a performance measurement system, the next key step for AOC is to put performance data to work. As shown in table 3, steps 4 through 8 and step 10 provide guidance on ways to monitor and evaluate program performance. AOC has proposed a quarterly monitoring system. Such monitoring of performance against goals will enable AOC program managers to identify where performance is lagging, investigate potential causes, and identify actions designed to improve performance. The roles and responsibilities of AOC’s recycling program staff members have evolved in recent years without the guidance of a clearly defined mission and goals.
In revisiting its recycling program mission and goals, AOC should also reexamine the roles and responsibilities of its program staff members to ensure that they are performing the right jobs with the necessary authority. AOC recently changed the responsibilities of its recycling program management positions to incorporate a greater focus on program planning and evaluation. However, according to these staff members, much of their time is spent on day-to-day program implementation activities, leaving little time to fulfill their expanded roles. The AOC Resource Conservation Manager, originally responsible for only the AOC hazardous waste program, currently is responsible for planning and developing policies and programs for an AOC-wide approach to waste management, analyzing waste removal programs, developing and presenting briefing and training materials on agency recycling efforts, and serving as the administrator and technical representative for the recycling collection contract. However, according to the Resource Conservation Manager, about half of her effort is devoted to hazardous waste management activities. She has little time and no staff to carry out the broad, agencywide planning and evaluation activities required by the position. In fiscal year 2001, AOC replaced its recycling coordinator position with a Recycling Program Manager position in the House and Senate jurisdictions. These positions are responsible for working with other Capitol Hill complex recycling specialists to carry out agencywide recycling; planning and developing recycling policies and programs; reviewing program effectiveness and monitoring implementation (e.g., compliance inspections); and analyzing the financial returns of waste recycling contracts.
However, the House Recycling Program Manager told us that her focus has been primarily on implementation activities, such as providing recycling equipment to offices, which has limited the time available for other responsibilities, such as program monitoring and evaluation. According to this manager, though, the recent hiring of an assistant to focus on operations will allow her to devote more time to recycling program management activities. As previously stated, AOC needs to provide a results-oriented basis for individual accountability. With respect to recycling, AOC has neither established clear goals nor assigned accountability for achieving results. Because program implementation occurs in the House and Senate jurisdictions, AOC needs to incorporate its desired recycling goals into its performance management system and cascade those goals down through the jurisdictions to the individuals responsible for program implementation. Overlapping responsibilities for planning, education, monitoring, and evaluation between the Resource Conservation Manager and the jurisdiction recycling program managers raise questions about the appropriate number of staff members and mix of responsibilities needed to carry out AOC’s recycling programs at the central and jurisdictional levels. For example, the jurisdiction recycling managers focus primarily on the implementation of the recycling program, including equipping offices, educating participants, and collecting recyclable materials. Furthermore, the AOC Resource Conservation Manager has little time and no staff to carry out broad management and oversight responsibilities. As a result, little capacity exists to carry out the planning, development, monitoring, and evaluation of AOC’s recycling programs on an AOC-wide basis.
Although the Architect of the Capitol has managed an office recycling program in the House and Senate jurisdictions for more than a decade, high levels of contamination in the materials collected have prevented it from fully realizing either the environmental or the financial benefits that it could have achieved. Adopting a more strategic approach to recycling—clarifying AOC’s recycling mission and goals to assess whether it has the right program design, organization, and implementation strategies in place to achieve desired results; measuring and monitoring performance against goals; and reexamining the roles and responsibilities of the recycling program staff to ensure accountability for achieving recycling goals—could improve the environmental results of the program. AOC officials have indicated that the recycling program will be included in an overall environmental master plan that AOC will develop in 2003. We agree with this approach and believe that developing a clear mission statement for the recycling program and using that statement as a basis for establishing reasonable performance goals, developing a set of performance measures, and aligning the organization to hold managers accountable for results would help AOC further improve its recycling program results. In order to adopt a strategic approach to recycling, we recommend that the Architect of the Capitol take the following actions: Develop a clear mission and goals for AOC’s recycling program, with input from key congressional stakeholders, as part of its proposed environmental master plan. AOC may want to establish reasonable goals based on the total waste stream—information it plans to obtain as part of its long-term environmental management plan—that could potentially be recycled. Develop a performance measurement, monitoring, and evaluation system that supports accomplishing AOC’s recycling mission and goals.
Examine the roles and responsibilities of AOC’s recycling program staff to ensure that they are performing the right jobs with the necessary authority, and hold the staff accountable for achieving program and agency results through AOC’s performance management system. In his comments on this chapter, the Architect generally agreed with our recommendations and discussed the relevant efforts AOC has under way in the areas of worker safety, project management, and recycling. In the area of worker safety, in addition to initial efforts to target areas that have the potential for danger to life and health, the Architect stated that AOC is in the process of developing program policies for incident reporting and investigation, inspection, and hazard abatement and control. AOC disagreed with our statement that its 5-year Safety Management Plan was drafted independent of the broader strategic planning effort. Although we believe this statement was true at the time of our review, AOC has subsequently made efforts to improve the alignment between its draft strategic and worker safety plans. Therefore, we deleted this statement. The Architect stated that AOC’s implementation plan will focus on strategic, long-term planning, training, and continuous improvement in worker safety. The Architect stated that AOC plans to address our recommendations in the area of project management as another focus of its implementation plan. Current initiatives include developing and scheduling training for project managers; conducting condition assessments of the Senate, House, and Capitol buildings this fiscal year, and of other Capitol Hill complex buildings in subsequent fiscal years; and developing a 5-year capital improvement plan and the scope of work for a 20-year master plan of the Capitol Hill complex. In the area of recycling, the Architect stated that AOC is committed to defining clear goals for its recycling program and will establish a dedicated environmental function.
AOC’s implementation plan will discuss its approach to establishing program goals, integrating environmental concerns into AOC’s overall strategy, and ensuring that measures reflect goals and are linked to performance of key activities. The Architect’s comments are reprinted in appendix II.
The Office of the Architect of the Capitol (AOC) plays an important role in supporting the effective functioning of the Congress and its neighboring institutions. With a budget of $426 million, AOC is responsible for the maintenance, renovation, and new construction of all buildings and grounds within the Capitol Hill complex. GAO was mandated by the Legislative Branch Appropriations Act, 2002, to conduct a comprehensive management review of AOC's operations to help identify improvements in strategic planning, organizational alignment, and strategic human capital management that would help AOC better achieve its mission and address long-standing program issues. To address these objectives, GAO reviewed AOC's legislative authority and internal documents, interviewed key AOC officials and senior managers, and conducted employee focus groups. AOC is an agency working to transform itself and has planned management improvement efforts, such as a new strategic planning process, to help it make this transition. GAO found that without a management and accountability framework, AOC might have difficulty leading and executing its organizational transformation. Leading organizations undergoing transformation efforts draw from the following management and accountability components: (1) demonstrating top leadership commitment to organizational transformation; (2) involving key stakeholders in developing an organizationwide strategic plan; (3) using the strategic plan as the foundation for aligning activities, core processes, and resources to support mission-related outcomes; (4) establishing a communications strategy to foster transformation and create shared expectations; (5) developing annual goals and a system for measuring performance; and (6) managing human capital and information technology strategically to drive transformation and to support the accomplishment of agency goals.
To support its transformation initiatives and to cope with shifting environments and evolving demands and priorities, AOC also should continue to develop its management infrastructure and controls. Establishing this management and accountability framework and further developing its management infrastructure and controls can also help AOC improve performance in program areas of long-standing concern to AOC's employees and congressional stakeholders--worker safety, project management, and recycling.
Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector when the agency determines it is cost-effective. However, in the past, both the private and public sectors expressed concern about the fairness with which these sourcing decisions were made. In response, Congress in 2000 mandated a study of government sourcing conducted by the Commercial Activities Panel and chaired by the Comptroller General. In April 2002, the panel released its report with recommendations that stressed the importance of linking sourcing policy with agency missions, promoting sourcing decisions that provide value to the taxpayer regardless of the service provider selected, and ensuring greater accountability for performance. For example, the panel found that federal sourcing policy should: support agency missions, goals, and objectives; be consistent with human-capital practices designed to attract, motivate, retain, and reward a high-performing federal workforce; recognize that inherently governmental functions and certain others should be performed by federal workers; avoid arbitrary full-time equivalent or other arbitrary numerical goals; and provide for accountability in all sourcing decisions. Government contracting has more than doubled to reach over $500 billion annually since the panel issued its report. This increased reliance on contractors to perform agency missions increases the risk that government decisions can be influenced by contractor employees, which can result in a loss of control and accountability. Agencies buy services that range from basic operational support, such as custodial and landscaping, to more complex professional and management support services, which may closely support inherently governmental functions. Such services include acquisition support, budget preparation, and intelligence services. 
Our work at DOD and the Department of Homeland Security (DHS) has found that it is now commonplace for agencies to use contractors to perform activities historically performed by government employees. Inherently governmental functions require discretion in applying government authority or value judgments in making decisions for the government, and as such they should be performed by government employees, not private contractors. The closer contractor services come to supporting inherently governmental functions, the greater the risk to the government’s control over and accountability for decisions that may be based, in part, on contractor work. In part to address the increased reliance on contractors, the Fiscal Year 2008 National Defense Authorization Act required DOD to develop and implement insourcing guidelines. In April 2008, DOD issued its initial insourcing guidelines, and on May 28, 2009, DOD issued implementing guidance for the insourcing of contracted services. The guidance is designed to assist DOD components as they develop and execute plans to decrease funding for contractor support and increase funding for new civilian manpower authorizations. Similarly, the Omnibus Appropriations Act of 2009 required the heads of executive branch agencies to devise and implement insourcing guidelines and procedures. The guidelines and procedures were to ensure that “consideration” was given to using, on a regular basis, federal employees to perform new functions and functions that are performed by contractors but could be performed by federal employees. In July 2009, OMB issued guidance for agencies to begin the process of developing and implementing policies, practices, and tools for managing the multisector workforce. This guidance included insourcing criteria intended to provide the civilian agencies with a framework for consistent and sound application of insourcing guidance, in accordance with statutory requirements.
The criteria consisted of four sections: (1) general management responsibilities; (2) general consideration of federal employee performance; (3) special consideration of federal employee performance; and (4) restriction on the use of public-private competition. Each criterion addresses different aspects of the mandate for insourcing guidelines and procedures and describes circumstances and factors agencies should consider when identifying opportunities for insourcing. (See app. I for a more detailed description of OMB’s insourcing criteria.) Additionally, the guidance, as part of a planning pilot, requires each agency to conduct a multisector human-capital analysis of an organization, program, project, or activity where there are concerns about reliance on contractors and to report on the pilot by May 1, 2010. In response to the mandate in the 2009 Omnibus Appropriations Act, we reviewed the status of civilian agencies’ efforts to develop and implement insourcing guidance. We reported in October 2009 that none of the nine civilian agencies we met with between July and October 2009 had met the statutory deadline to produce insourcing guidance. One agency had issued preliminary guidelines, and two others had drafted but not issued their guidelines as of our review; most of the agencies’ efforts, however, were still in the early stages. For example, two of the nine agencies reviewed at the time had designated the offices responsible for leading the effort to develop the guidelines and were in the process of deciding what approach they would take. In contrast, two other agencies had drafted guidelines, with one waiting on management approval to issue them and the other planning to finalize its guidelines once OMB issued additional guidance regarding outsourcing and inherently governmental functions. Agency officials cited a number of reasons why they did not meet the statutory deadline and had not issued final insourcing guidelines.
The reasons included, but were not limited to, the following:

- Wanting to ensure their guidelines were consistent with OMB’s guidance, issued in July 2009, which caused them to delay finalizing or drafting their guidelines.

- Waiting for additional OMB guidance and clarification regarding outsourcing and inherently governmental functions. Several officials stated that they anticipated this guidance would have a significant effect on their development and implementation of insourcing guidelines. Similarly, OMB indicated when it provided the insourcing criteria in July 2009 that it expected to refine the criteria as it developed guidance on when outsourcing is and is not appropriate.

- Intending to use the results, best practices, and lessons learned from the multisector workforce planning pilots to better inform their insourcing guidelines and procedures. For example, one agency told us it planned to use its experience with its planning pilot as the basis for its final guidelines, while another planned to issue initial guidelines to be used during the pilot and then revise the guidelines as appropriate based on the experiences during the pilot.

- Stressing that developing effective insourcing guidelines is complex and involves many agency functions, including human capital, acquisition, and finance and budget, all of which require a great deal of coordination and take time. Officials added that their ability to focus on the development of the guidelines has been constrained by their capacity to deal with multiple management initiatives in addition to their regular core duties.

Although OMB and agencies have yet to issue insourcing guidance, OMB reported in December 2009 that 24 agencies had launched planning pilots to address the use of contractors in one or more of their organizations. Agencies were due to report the results of their pilots to OMB by May 1, 2010.
Following the initiative of the March 2009 Presidential memo on government contracting and in response to a congressional mandate, OMB’s Office of Federal Procurement Policy issued a public notice on March 31, 2010, that provides proposed policy for determining when work must be performed by, or reserved for, federal employees. The proposal provides the following guidance to executive branch agencies:

- Adopts the statutory definition in the Federal Activities Inventory Reform (FAIR) Act of 1998 as a single, governmentwide definition of inherently governmental functions. This definition classifies an activity as inherently governmental when it is so intimately related to the public interest that it must be performed by federal employees. Such activities include determining budget priorities and awarding and administering contracts, which are reserved exclusively for federal employees.

- Retains the illustrative list of examples of functions “closely associated with inherently governmental functions” from the Federal Acquisition Regulation, such as preparing budgets and developing agency regulations, and provides guidance to help agencies decide whether to use contractors to perform these functions. In contrast to inherently governmental functions, agencies may determine whether contractor performance of these functions is appropriate. The proposed policy lays out the responsibilities agencies must perform, such as ensuring sufficient government capacity for oversight during the contract award and administration process, if they decide to use a contractor for these services.

- Introduces the category of “critical functions,” defined as functions whose importance to the agency’s mission and operation requires that at least a portion of the function be reserved for federal employees to ensure the agency has sufficient internal capability to effectively perform and maintain control.

- Outlines a number of new management determinations and actions that federal agencies should employ to avoid allowing contractor performance of inherently governmental functions, including developing agency procedures, providing training, and designating senior officials responsible for implementation of the proposed policy.

Comments from agencies and the public on the proposed policy are due to OMB by June 1, 2010. Agency efforts to effectively insource certain functions now performed by contractors will in large part depend on their ability to assess their human-capital and mission requirements and to develop and execute plans to fulfill those requirements so they have a workforce that possesses the necessary education, knowledge, skills, and competencies to accomplish their mission. We and others have shown that successful public and private organizations use strategic management approaches to prepare their workforces to meet present and future mission requirements. Strategic human-capital management—which includes workforce planning—helps ensure that agencies have the talent and skill mix they need to address their current and emerging human-capital and other challenges, such as long-term fiscal constraints and changing demographics. A strategic human-capital plan helps agency managers and stakeholders systematically consider what is to be done, how it will be done, and how to gauge progress and results. Our prior work has identified workforce planning challenges that can affect an agency’s ability to obtain the right mix of federal employees and contractor personnel. Strategic workforce planning is an iterative, systematic process that addresses two critical needs: (1) aligning an organization’s human-capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining an organization’s workforce to achieve programmatic goals.
These strategies should include contractor as well as federal personnel and should be linked to the knowledge, skills, and abilities agencies need. As agencies develop workforce strategies, they also need to consider the extent to which contractors should be used and the appropriate mix of contractor and federal personnel. With the increased reliance on contractors, there has been increased concern about the ability of agencies to ensure sufficient numbers of staff to perform functions that should be performed only by government employees. Strategic workforce planning can position federal agencies to meet such workforce challenges. However, our prior work has found that the increased reliance on contractors to perform the work of government is in part attributable to difficulties in hiring for certain hard-to-staff positions and in training and retaining government employees. For example, we have previously reported that federal agencies have relied increasingly on contractors to support the acquisition function because the capacity and the capability of the federal government’s acquisition workforce to oversee and manage contracts have not kept pace with increased spending for increasingly complex purchases. This pattern can also be found in other functions, such as information technology and intelligence activities. Importantly, federal agencies also face competition in hiring and retaining government employees, as contractors can offer higher salaries in some cases. In 2001, we first identified strategic human-capital management as a high-risk area because of the federal government’s long-standing lack of a consistent approach to human-capital management. In 2010, while agencies and Congress have taken steps to address the federal government’s human-capital shortfalls, strategic human-capital management remains a high-risk area because of the continuing need for a governmentwide framework to advance human-capital reform.
We have reported that federal agencies have used varying approaches to develop their strategic workforce plans, depending on their particular circumstances. For example, an agency with a future workload that could rise or fall sharply may focus on identifying skills to manage a combined workforce of federal employees and contractors. We and the Office of Personnel Management (OPM) have identified the following six leading principles that agencies should incorporate in their workforce planning efforts:

- Align workforce planning with strategic planning and budget formulation;
- Involve managers, employees, and other stakeholders in planning;
- Identify critical occupations, skills, and competencies and analyze workforce gaps;
- Develop strategies to address workforce gaps;
- Build capability to support workforce strategies; and
- Monitor and evaluate progress.

Furthermore, as our 2009 review of civilian agency insourcing efforts identified, agencies face other operational and administrative challenges with respect to implementing guidance to facilitate the conversion of contractor personnel to government positions, including the following: Infrastructure. The complex nature of insourcing and the many functional parts of an agency involved in the hiring process require managers to share responsibility and coordinate activities. The various functions involved in an agency’s insourcing efforts—such as human capital, acquisition, and finance and budget—must be identified, as well as the roles each will play. Culture. Insourcing represents a major shift in the focus and culture of the multisector workforce. Established processes and procedures are geared toward outsourcing, and shifting to insourcing and a “total workforce” approach—one that considers both contractors and federal employees—will take time and requires flexibility to meet the needs of an agency within an ever-changing environment. Data.
Agencies face difficulties in gathering and analyzing certain types of service contracting data needed for making insourcing decisions. For example, information on the type of service contracts and the number of contractor-equivalent personnel may not be readily available, even though some officials indicated that such information may be needed to review contracted-out services and make insourcing decisions. The lack of reliable data on contractors has been a recurrent theme in our work over the past several years. For example, we have reported that agencies faced challenges in developing workforce inventories under the FAIR Act of 1998, especially as they relate to the classification of positions as inherently governmental or commercial. Our work on the acquisition workforces at DHS and DOD reported that the departments lacked sufficient data to fully assess total acquisition workforce needs, including the use of contractors. And, more recently, our review of DOD service contractor inventories for fiscal year 2008 found that each of the military departments used different approaches and data sources to compile their inventory data and, as a result, DOD data on service contracts are inconsistent and incomplete. Resources. Limited budgets and resources may constrain insourcing efforts. For example, if, after applying its guidelines, an agency determines that a function should be insourced and additional government employees need to be hired, the agency must ensure the funds are available to pay for them. Agency implementation of insourcing efforts could be facilitated by tools that we identified in prior work. These tools allow agencies to capture information, make strategic decisions, and implement those decisions for their multisector workforce. They include inventories, business case analysis, and human-capital flexibilities. Inventories.
The inventories that federal agencies are required to develop under congressional mandate will be used to inform a variety of workforce decisions. For example, at DOD, the inventories are to contain a number of different elements for service contracts, including information on the functions and missions performed by the contractor, the funding source for the contract, and the number of contractor full-time equivalents working under the contract. Once compiled, the inventories may be used to inform a variety of workforce decisions, including how various agency functions should be sourced.

Business Case Analysis. A balanced analytical approach, used by some agencies when deciding to outsource functions, could facilitate agency decisions in determining whether insourcing a particular function has the potential to achieve mission requirements. Such an analysis may consider questions such as the following:

o How critical is the function’s role in relationship to the agency’s mission?
o What is the risk to program integrity and control of sensitive information if the function is not insourced?
o What is the long-term trend of demand for the function; is there periodic fluctuation in demand for the function (i.e., stability of demand)?
o What is the current state of technology used by the function, and what is the likelihood of the agency being able to acquire and sustain the technology if the function is brought in-house?
o What are the number and skill level of staff needed to perform the function?
o What is the ability of the agency to recruit a workforce with the appropriate skills to continue to provide services the contractor currently provides?
o What is the likelihood of contractor staff in the function applying to work for the agency?
o What is the estimated cost to maintain an acceptable level of performance if the function is brought in-house?

Human Capital Flexibilities.
Once agencies determine which functions they want to have provided by federal employees, taking advantage of the variety of human capital flexibilities is crucial to making improvements in agencies’ efforts to recruit, hire, and manage their workforces. For example, monetary recruitment and retention incentives and special hiring authorities provide agencies with flexibility in helping them manage their human capital strategically to fulfill insourcing needs. OMB’s criteria for insourcing decisions provide a basis for agencies in establishing their insourcing plans and can be used to facilitate balancing the mix of federal employees and contractors to better assure government control over critical functions. However, the implementation of agency plans and the individual sourcing decisions that federal agencies make will determine the ultimate success of this effort. Making use of the full range of information and human capital tools available to implement these plans will be important to assuring effective government control of critical functions, mitigating risks, and providing value to the taxpayer. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or the other members of the subcommittee may have at this time. For further information regarding this testimony, please contact John Needham at (202) 512-4841 or needhamjk1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this product. Staff making key contributions to this statement were Amelia Shachoy, Assistant Director; Brendan Culley; Noah Bleicher; Erin Carson; Lauren Heft; and John Krump.
Federal agencies face a complicated set of decisions in finding the right mix of government and contractor personnel to conduct their missions. While contractors, when properly used, can play an important role in helping agencies accomplish their missions, GAO has found that agencies face challenges with increased reliance on contractors to perform core agency missions. Congress and the executive branch also have expressed concern as to whether federal agencies have become over-reliant on contractors and have appropriately outsourced services. A March 2009 Presidential memorandum tasked the Office of Management and Budget (OMB) to take several actions in response to this concern. Based on GAO's prior work, this statement discusses (1) civilian agencies' development and implementation of guidelines to consider whether contracted functions should be brought in-house, a process known as insourcing; (2) OMB's proposed policy on work reserved for federal employees; (3) challenges agencies face in managing the federal workforce; and (4) key tools available for insourcing and related efforts. GAO reviewed the status of civilian agencies' efforts to develop and implement insourcing guidance and reported in October 2009 that none of the nine civilian agencies with whom we met had met the statutory deadline to produce insourcing guidance. Primarily, they were waiting for additional OMB guidance to ensure their own guidance was consistent with it, and to use the results, best practices, and lessons learned from their multisector workforce pilots to better inform their insourcing guidelines. Since the time of our review, OMB reported in December 2009 that 24 agencies had launched pilots to address overuse of contractors in one or more of their organizations. Agencies were due to report the results of their pilots to OMB by May 1, 2010.
In response to a congressional mandate, OMB recently issued a public notice that provides proposed policy for determining when work must be performed by federal employees. Comments on the policy are due from federal agencies and the public by June 1, 2010. The proposed policy provides the following guidance to executive branch agencies: it adopts a single, governmentwide definition of inherently governmental functions in accordance with the definition in the Federal Activities Inventory Reform Act of 1998, which classifies an activity as inherently governmental when it is so intimately related to the public interest that it must be performed by federal employees; it provides guidance for determining functions "closely associated with inherently governmental functions"; and it introduces the category of "critical functions," defined as work that must be reserved for federal employees in order to ensure the agency has the internal capability to maintain control of its missions and operations. Agency efforts to effectively insource functions performed by contractors will in large part depend on the ability to assess mission and human capital requirements and develop and execute plans to fulfill those requirements so agencies have a workforce that possesses the necessary knowledge, skills, and competencies to accomplish their mission. Furthermore, GAO's 2009 review of civilian agency insourcing efforts identified operational and administrative challenges agencies face with respect to implementing the conversion of contractor personnel to government positions. For example, agencies face difficulties in gathering and analyzing certain types of service contracting data needed for making insourcing decisions.
Agency implementation of insourcing efforts could be facilitated by tools that GAO has previously identified, including: (1) Inventories to identify inherently governmental functions; (2) Business case analysis to facilitate agency decisions in determining whether insourcing a particular function has potential to achieve mission requirements; and (3) Human-capital flexibilities to efficiently fill positions that should be brought in-house.
The Military Selective Service Act requires virtually all male U.S. citizens worldwide and all other males residing in the United States ages 18 through 25 to register with the Selective Service System within 30 days of turning 18 years of age under procedures established by a presidential proclamation and other rules and regulations. The Selective Service System currently budgets for 130 full-time civilian positions and 175 part-time Reserve Force Officers in its national headquarters in Arlington, Virginia; its Data Management Center, in Chicago, Illinois; and its three regional headquarters, located in Chicago, Illinois; Smyrna, Georgia; and Denver, Colorado. In 2011, the Selective Service System’s Data Management Center added 2.2 million records to its database and sent a series of letters to males reminding them of their obligation to register. According to Selective Service System officials, in calendar year 2010, their database contained approximately 16.4 million names, and the estimated registration compliance rate was 92 percent. The Selective Service System also carries out other peacetime activities such as conducting public registration awareness and outreach, responding to public inquiries about registration requirements, and providing training and support to volunteer local board members, state directors, and Reserve Force Officers. The Military Selective Service Act does not currently authorize use of a draft for the induction of persons into the armed forces. Congress and the President would be required to enact a law authorizing a draft, were they to deem it necessary to supplement the existing force with additional military manpower. In the event of a draft, the Selective Service System would be tasked with conducting a lottery and sending induction notices to selected males to supply the personnel requested by the Secretary of Defense.
A network of over 11,000 local, district, and national board volunteers, who are now managed by the Selective Service System, would be activated to review and process claims for exemption, deferment, or postponement of service. Selected males would be directed to report to Military Entrance Processing Stations, managed by DOD, to determine whether they are qualified for military service, and then sent to military training centers. In addition to drafting inductees, the Selective Service System would be responsible for providing options and managing the program for alternative civilian service to conscientious objectors and would also be required to induct health care specialists if necessary. The Selective Service System’s time frames for mobilizing inductees are based on DOD’s recommendations, developed in accordance with its manpower requirements as defined in 1994; therefore, whether these time frames are appropriate for helping DOD meet any current manpower needs in excess of the all-volunteer force is unclear. Even though DOD has not used the draft since 1973, DOD officials told us that the Selective Service System provides a low-cost insurance policy in case a draft is ever necessary and a structure and organization that would help ensure the equity and credibility of a draft should one be authorized and implemented. The Selective Service System also offers capabilities that are hard to quantify in terms of dollars, including its structure of unpaid volunteers who could be activated as soon as a draft is implemented and its no-cost agreements with civilian organizations that have agreed to supply jobs to conscientious objectors. Selective Service System officials expressed concern that, as currently resourced, they cannot meet DOD’s requirements to deliver inductees without jeopardizing the fairness and equity of the draft. However, that requirement was based on the national security environment that existed in 1994.
The lack of an updated requirement from DOD presents challenges to policymakers for determining whether the Selective Service System is properly resourced or necessary. DOD developed its manpower requirements for the Selective Service System in 1994 and has not reexamined these requirements in the context of recent military operations and changes in the security environment and national security strategy. In a 1994 memorandum to the Director of the Selective Service System, the Assistant Secretary of Defense for Force Management stated that DOD expected that its active and reserve forces would be sufficient for most conceivable scenarios involving two Major Regional Conflicts, citing two then-current documents, the 1993 Report on the Bottom-Up Review and the 1994 A National Security Strategy of Engagement and Enlargement. Because of this expectation, DOD recommended extending the time it would require the Selective Service System to provide the first inductees from 13 days to 193 days after mobilization (13 days plus 6 months) and to provide 100,000 inductees from 30 days to 210 days after mobilization (30 days plus 6 months). The Selective Service System considers this requirement to be its most recent and official requirement from DOD. The memorandum also stated that DOD’s position was that an all-male draft remained valid and legal and that medical personnel continued to be the only skilled group that would be required in conceivable contingency scenarios. Specifically, the document states that DOD’s Health Care Personnel Delivery System calls for the rapid postmobilization registration of up to 3.5 million health care personnel in more than 60 specialties. DOD also stated in its memorandum that the time for the Selective Service System to conduct a mass registration of medical personnel could be extended by 6 months, from 13 days to 193 days, with induction orders to follow 3 weeks later. 
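The extended deadlines in the 1994 memorandum follow from adding a 6-month delay to the original requirements; a quick check of that arithmetic (assuming the 6 months is counted as 180 days, which the report's figures imply, since 13 + 180 = 193 and 30 + 180 = 210):

```python
# Verify the 1994 deadline-extension arithmetic from the memorandum.
# Assumption: "6 months" is counted as 180 days, as implied by the figures.
SIX_MONTHS_DAYS = 180

# Original deadlines, in days after mobilization.
original_deadlines = {"first inductees": 13, "100,000 inductees": 30}

# Each deadline was extended by the 6-month delay.
extended = {k: v + SIX_MONTHS_DAYS for k, v in original_deadlines.items()}

print(extended["first inductees"])    # 193 days after mobilization
print(extended["100,000 inductees"])  # 210 days after mobilization
```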
DOD relies on its national defense strategy and the Quadrennial Defense Review to identify its priority mission areas and determine its overall force structure needs. The national defense strategy provides the foundation and strategic framework for the department’s Quadrennial Defense Review, which is performed every 4 years. During this review, DOD is required to define a national defense strategy and the force structure and other elements necessary to successfully execute the range of missions identified in that national defense strategy. Changes in the security environment require the department and the services to reassess their force structure requirements, including how many and what types of units are necessary to carry out the national defense strategy. For example, as DOD stated in its January 2012 strategic guidance, even when U.S. forces are committed to a large-scale operation in one region, they will need to be capable of denying the objectives of—or imposing unacceptable costs on—an opportunistic aggressor in a second region. Specifically, the United States will need to be prepared for an increasingly complex set of challenges in South Asia, the Middle East, and the Asia-Pacific region. In prior work, we have emphasized the importance of agencies taking actions to ensure that their missions are current and that their organizations are structured to meet those missions. We have also reported that many agencies find themselves encumbered with structures and processes rooted in the past and designed to meet the demands of earlier times. Further, we have stated that high-performing organizations stay alert to emerging mission demands and remain open to reevaluating their human capital practices to meet emerging agency needs. Changes in the security environment and defense strategy represent junctures at which DOD can systematically reevaluate service personnel levels to determine whether they are consistent with strategic objectives.
While DOD officials stated that the 1994 manpower requirement may still be valid, without an updated assessment of requirements for the Selective Service System, policymakers cannot be certain whether the resources to support the Selective Service System are necessary to meet DOD’s manpower needs, whether the Selective Service System is prepared to supply the skills most critical to DOD in the 21st century, or whether the Selective Service System is necessary at all. In a letter to GAO dated April 16, 2012, the Deputy Assistant Secretary for Military Personnel Policy stated that determining the military necessity for the Selective Service System and its registration of young men is a complex issue that requires significant examination not possible during the period of GAO’s review. However, DOD does recognize that such an examination is prudent. The Deputy Assistant Secretary noted that, while the military necessity of the Selective Service System in the 21st century has yet to be determined, the department recognizes that there are benefits to the continuation of the Selective Service System. According to official spokespersons for the Selective Service System, the agency is not currently resourced to meet DOD’s requirement for it to deliver the first inductees in 193 days and 100,000 inductees in 210 days, without jeopardizing the fairness and equity of the draft. However, DOD officials believe that the Selective Service System provides a low-cost insurance policy in case a draft is ever necessary. The Selective Service System also provides benefits that would help to ensure a draft was fair and equitable. Specifically, Selective Service System officials stated that since fiscal year 1997, the agency has undergone various cuts and attained efficiencies in an attempt to meet DOD’s manpower requirements. 
Selective Service System officials said that, due to reductions in the number of personnel available to set up area offices across the country, the agency now estimates it could not deliver the first inductees until 285 days after mobilization. In fiscal year 1997, the Selective Service System’s budget was $22.9 million (in then-year dollars), or $31.5 million in fiscal year 2013 dollars. Since then, the agency’s annual budget has declined steadily in constant dollars, and its requested budget for fiscal year 2013 was $24.4 million. Once a man reaches his 26th birthday, his name is dropped from the Selective Service System’s list of possible draftees. The Data Management Center processes approximately 712,000 transactions each year, including manual registrations, registrant file updates, compliance additions and updates, post office returns, and miscellaneous forms. The Data Management Center also serves as the agency’s national call center, which the public contacts to verify the registrations needed to be eligible for benefits and programs linked to registration, such as student loans and government jobs. In addition, the Selective Service System undertakes general national outreach and public awareness initiatives to publicize the requirement for males to register. These efforts have included convention exhibits, public service announcements, high school publicity kits, focus group studies, and outreach meetings. The Selective Service System also conducts outreach visits to areas of low registration compliance. In addition to registration, the Selective Service System structure helps to ensure that a draft would be fair and equitable. For example, it maintains a structure that could be activated as soon as a draft is implemented to conduct nationwide local review boards to determine draftees’ eligibility for deferments. The Selective Service System’s three regional offices are responsible for maintaining this board structure and making sure that personnel are trained to perform their assigned tasks.
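The then-year to constant-dollar figures above imply a cumulative inflation factor of roughly 1.38 between fiscal years 1997 and 2013. A minimal sketch of that conversion, using only the figures given in the report (the deflator itself is an assumption derived from those figures, not a value stated in the report):

```python
# Implied FY1997 -> FY2013 deflator from the report's own figures:
# $22.9M in then-year dollars equals $31.5M in FY2013 dollars.
budget_fy1997_then_year = 22.9  # $ millions, then-year dollars
budget_fy1997_constant = 31.5   # $ millions, FY2013 dollars

deflator = budget_fy1997_constant / budget_fy1997_then_year
print(round(deflator, 3))  # ~1.376, i.e., roughly 38% cumulative inflation

# The FY2013 request of $24.4M is already in FY2013 dollars, so the
# real-terms decline since FY1997 is the difference in constant dollars.
requested_fy2013 = 24.4
decline = budget_fy1997_constant - requested_fy2013
print(round(decline, 1))  # ~7.1 ($ millions, FY2013 dollars)
```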
Each state and territory has a part-time state director who is compensated for an average of up to 12 duty days per year. In 2011, the Selective Service System also relied on 175 Reserve Force Officers from all branches of the military services. These part-time officers perform peacetime and preparedness tasks, such as training civilian board members, and function as field contacts for state and local agencies and the public. The largest component of the Selective Service System’s workforce is approximately 11,000 uncompensated men and women. According to Selective Service System officials, these men and women are selected to be representatives of the geographic area in which they reside and are trained to serve as volunteer local, district, and national appeal board members. If a draft were to occur, these trained volunteers would decide the classification status of men seeking exceptions or deferments based on conscientious objection, hardship to dependents, their status as ministers or ministerial students, or any other reason. Selective Service System officials believe that having local board members representative of the geographic areas in which they reside helps to ensure that these board members would make fair and equitable decisions. If a draft occurred, the Selective Service System is also required to manage a 2-year program of alternative civilian service for conscientious objectors. The Selective Service System maintains no-cost agreements with civilian organizations that, in the event of a draft, have agreed to supply jobs to conscientious objectors who oppose any form of military service, even in a noncombat capacity. 
To be prepared to implement an alternative service program for registrants classified as conscientious objectors, the Selective Service System conducts outreach to various civilian employers, such as the Methuselah Foundation and the Mennonite Mission Network, to arrange memoranda of agreement for these organizations to be prepared to offer alternative service to up to 30,000 conscientious objectors should a draft be necessary. Restructuring or disestablishing the Selective Service System would require consideration of various fiscal and national security implications, some of which may be difficult to quantify. We reviewed estimated costs and savings for two alternatives to the current structure of the Selective Service System: (1) placing it in a deep standby mode where active registration is maintained and (2) disestablishing the agency. In addition to the potential costs and savings of these alternatives, other factors, with both tangible and intangible costs and benefits, may need to be considered if either alternative were pursued. We identified factors that may affect costs and various considerations and limitations that may affect whether another agency or database could perform the functions of the Selective Service System while maintaining the capability to perform a fair and equitable draft. Officials from the Selective Service System provided details on the personnel and resources required for each of the alternatives we reviewed, as well as their estimated cost savings (see table 1). The Selective Service System estimates were based on the assumption that either alternative would be fully implemented in fiscal year 2013, and officials based their estimates on their fiscal year 2013 requested budget. Most of the estimated cost savings result from reductions in the numbers of civilian and Reserve Force Officer personnel for the two alternatives we examined. 
As shown in table 1, if the Selective Service System were placed in a deep standby mode and maintained its registration program and database, Selective Service System officials estimated that the first-year cost savings would be approximately $4.8 million, with subsequent annual savings of approximately $6.6 million. Selective Service System officials estimated that costs for closing the regional offices, severance pay, and other termination costs would be $1.8 million. The Selective Service System estimates it would require a budget of $17.8 million and 93 full-time civilian personnel at the national headquarters and Data Management Center to continue inputting and processing registrations, maintain registration awareness and compliance, and facilitate plans to reconstitute the agency if needed. The estimates assume that the Selective Service System would reduce its civilian workforce by 37 positions, would no longer employ Reserve Force Officers or state directors, and would reduce its physical infrastructure costs by closing its three regional offices. According to Selective Service System officials, disestablishing the agency would produce first-year cost savings of approximately $17.9 million and subsequent annual savings of $24.4 million. This scenario assumes that all full-time civilians, Reserve Force Officers, and state directors would be terminated or dismissed, and the agency headquarters, three regional headquarters, and data management center would be closed. Selective Service System officials estimated that costs for closing the agency and terminating employees and contracts would total approximately $6.5 million in the first year. In both of the alternatives presented in table 1, the 11,000 civilian volunteer board members would be dismissed, eliminating the volunteer board infrastructure currently in place to review claims for deferring or postponing military service.
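The savings estimates above are internally consistent with the agency's $24.4 million fiscal year 2013 budget request: in each alternative, the annual savings equal the budget given up, and the first-year savings equal the annual savings less the one-time closing costs. A quick reconciliation of the report's figures (all in $ millions):

```python
# Reconcile the Selective Service System's estimated savings for the two
# alternatives against its $24.4M FY2013 requested budget.
current_budget = 24.4  # $ millions, FY2013 request

# Deep standby: reduced budget of $17.8M plus $1.8M one-time closing costs.
standby_budget = 17.8
standby_closing_costs = 1.8
standby_annual_savings = current_budget - standby_budget              # ~6.6
standby_first_year = standby_annual_savings - standby_closing_costs   # ~4.8

# Disestablishment: no remaining budget; $6.5M one-time closing costs.
disestablish_closing_costs = 6.5
disestablish_annual_savings = current_budget                          # 24.4
disestablish_first_year = current_budget - disestablish_closing_costs # ~17.9

print(round(standby_annual_savings, 1))   # 6.6
print(round(standby_first_year, 1))       # 4.8
print(round(disestablish_first_year, 1))  # 17.9
```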
Selective Service System officials also identified the estimated time and potential resources required to reestablish the agency to its current operations should either of these options be pursued. Selective Service System officials estimated that if the agency were in a deep standby mode or disestablished, it would cost approximately $6.6 million and $28 million, respectively, to restore the agency to its current operating capacity. Officials estimated that if the agency were put in a deep standby mode with registration, it would take approximately 18 months to rehire and train essential civilian and Reserve Force Officer personnel, reestablish regional offices, and appoint state directors and civilian volunteer board members. If the agency were disestablished, officials estimated it would take an additional 6 months—or a total of approximately 2 years—to perform mass registrations, reconstitute the Data Management Center and regional offices, build the necessary information technology infrastructure, and rehire and train personnel. Selective Service System officials also provided estimates for the time and resources required to perform a draft from its current operations if the agency were in deep standby or disestablished. According to Selective Service System officials, they have no previous experience transitioning from disestablishment or a standby mode to draft operations. While their estimates are loosely based on the agency’s mobilization plans, officials noted that their plans have not recently been updated and do not reflect their current staffing or budget. To perform a draft from its current operating status, Selective Service System officials said that they would require approximately $465 million to hire the full-time civilian personnel necessary to populate the field structure by staffing area and alternative service offices and district and local boards.
If either deep standby or disestablishment were pursued and a draft became necessary, Selective Service System officials said they would need funds in addition to the $465 million it would currently require to perform a draft. Selective Service System officials estimated that if the agency were in a standby mode or disestablished, they would require approximately 830 days and 920 days, respectively, to provide DOD with inductees. In addition to the potential costs and savings for each option, officials from the Selective Service System and DOD identified other factors that would need to be considered if the agency were disestablished or placed in a deep standby mode. Officials reaffirmed several benefits that they stated had been previously identified in a 1994 National Security Council recommendation to maintain the Selective Service System and the registration program. For example, DOD and Selective Service System officials said that the presence of a registration system and the Selective Service System demonstrates a feeling of resolve on the part of the United States to potential adversaries. Officials also stated that, as fewer citizens have direct contact with military service, registering with the Selective Service System may be the only link some young men will have to military service and the all-volunteer force. Selective Service System officials noted that the Selective Service System and registration requirement provide a hedge against unforeseen threats. Officials from DOD also cited some secondary recruiting benefits they receive from the Selective Service System. DOD relies on the Selective Service System to mail out recruiting pamphlets in conjunction with the registration materials the agency routinely sends to new registrants. 
DOD officials told us that using the Selective Service System to mail these materials costs approximately $370,000 a year, which is significantly less than the department would spend on postage to mail the recruiting materials separately and which results in approximately 60,000 recruiting leads a year. In addition, DOD officials said that DOD relies heavily on the Selective Service System’s database to help populate its recruiting and marketing database at no cost to the department. Other costs and considerations may need to be evaluated as well. A number of federal and state programs require registration as a prerequisite, such as state drivers’ licenses and identification cards, federal student aid programs, U.S. citizenship, federally sponsored job training, and government employment. Selective Service System officials said there could be costs to remove language from forms and program materials stating that registering with the Selective Service System is a prerequisite to qualifying for these programs. Furthermore, Selective Service System officials said that agreements with civilian agencies to provide alternative civilian service for conscientious objectors would be terminated if registration were discontinued or the agency were disestablished, and reinstituting these agreements in the event of a draft would take time. Terminating the Selective Service System would also require amending the Military Selective Service Act and potentially other laws involving the Selective Service System. Selective Service System and DOD officials identified factors that should be considered if the functions of the Selective Service System were to be performed by another federal or state agency or with another database. We were unable to identify specific costs associated with these options because, according to officials from DOD and the Selective Service System, there is no database that is comparable to or as complete as the Selective Service System’s database. 
However, officials did identify several factors and limitations that could affect the costs and feasibility of having the Selective Service System’s functions performed by another entity. Officials from the Selective Service System identified several databases and agencies that currently help populate their registration database. For example, Selective Service System officials said they have agreements with the Social Security Administration and the American Association of Motor Vehicles to supply names of 18- through 25-year-olds who have registered social security numbers or who apply for drivers’ licenses, at a cost of $14,200 and $42,177 a year, respectively. Selective Service System officials also said they rely on the U.S. Census Bureau to provide a breakdown of the total number of men aged 18 through 25 by state and county, which the Selective Service System uses to determine its overall registration compliance rate. Selective Service System officials agreed that other agencies’ databases, like those of the Social Security Administration and the American Association of Motor Vehicles, could be used or combined to populate a registration database but noted that a draft using these systems might not be fair and equitable because these databases would target certain portions of the pool of possible inductees but not others. For example, if a draft were performed using only names in the Social Security Administration’s database, immigrant men residing in the United States who do not have social security numbers would not have the same likelihood of being drafted as male U.S. citizens would. Selective Service System officials also stated that there could be costs associated with combining other databases to achieve the compliance rate of the Selective Service System’s database. 
The Selective Service System database represents 92 percent of the eligible population, and Selective Service System officials said they rely on a number of sources to maintain a high registration compliance rate and have established a process that gives everyone an equal chance of being selected. The Selective Service System therefore believes it can perform a fair and equitable draft of the population and said that other databases, unless similarly combined, could not replicate the completeness of the Selective Service System database. DOD and Selective Service System officials also expressed concern with having another federal agency perform the Selective Service System’s functions. Selective Service System officials said that any transfer of their responsibilities to DOD or another federal agency would raise independence concerns with respect to ensuring that a draft would be fair and equitable. According to Selective Service System officials, the independence of the agency helps to ensure that conscientious objector and pacifist communities will comply with registration requirements because the public trusts that the registration and induction process is performed fairly. DOD officials said that a significant evaluation would need to be performed to determine the costs and feasibility of the department taking on the Selective Service System’s tasks and that they are unable to identify the potential costs for the department to assume the responsibilities of the Selective Service System. DOD officials were able to provide the approximate costs to maintain the department’s recruiting and marketing database, but they emphasized that this database would be inappropriate to use as a replacement for the Selective Service System’s database because the Joint Advertising Market Research and Studies office relies on third-party data to populate its database, which is used strictly for the purpose of performing recruiting and market research. 
Officials from DOD’s Joint Advertising Market Research and Studies office indicated that their office currently spends approximately $2.8 million a year to operate and maintain their database of recruiting and marketing names and that it would cost an additional $3 million to replace the names it receives from the Selective Service System free of charge, more than doubling DOD’s operating costs for this database. In addition, DOD and Selective Service System officials stated that they are uncertain whether any savings would be realized by transferring the Selective Service System’s function to DOD or any other federal agency. Officials said the same number of personnel and resources would likely be required, and according to Selective Service System officials, there could be additional costs involved in having another agency learn how to recreate the components of the Selective Service System. While the Selective Service System states that it is not resourced to provide first inductees within 193 days of mobilization and 100,000 inductees within 210 days, DOD has not reevaluated this requirement since 1994. Since that time, the security environment and the national security strategy have changed significantly. Without an updated assessment by DOD of its specific requirements for the Selective Service System, it is unclear whether DOD would need 100,000 inductees in 210 days or even whether draftees would play any role in a military mobilization. Further, while DOD officials believe that the Selective Service System provides a low-cost insurance policy and benefits DOD in other ways—some that are hard to quantify—determining the value of these benefits is ultimately a policy decision for Congress, as is the determination of the cost and benefit trade-offs of the various alternatives to reducing the agency or transferring its functions. 
A reevaluation of the department’s manpower needs for the Selective Service System in light of current national security plans would better position Congress to make an informed decision about the necessity of the Selective Service System or any other alternatives that might substitute for it. To help ensure that DOD and Congress have visibility over the necessity of the Selective Service System to meeting DOD’s needs, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following two actions: (1) evaluate DOD’s requirements for the Selective Service System in light of recent strategic guidance and report the results of this evaluation to Congress and (2) establish a process of periodically reevaluating DOD’s requirements for the Selective Service System in light of changing threats, operating environments, and strategic guidance. In commenting on a draft of this report, DOD agreed with our recommendations and noted its plans for implementation. Specifically, DOD concurred with our first recommendation—to evaluate DOD’s requirements for the Selective Service System to reflect recent strategic guidance and report the results of its evaluation to Congress. The department stated that the Office of the Under Secretary of Defense for Personnel and Readiness, in coordination with the Joint Staff and the services, will perform an analysis of DOD’s manpower requirements for the Selective Service System, with an anticipated completion date of December 1, 2012. DOD also concurred with our second recommendation—to establish a process to periodically reevaluate DOD’s requirements for the Selective Service System in light of changing threats, operating environments, and strategic guidance. The department stated that it will establish a process to review the mission and requirements for the Selective Service System during its reevaluation of its current requirements for the Selective Service System. 
DOD’s comments are reprinted in appendix II. We also provided a draft of this report to the Selective Service System for comment. In its written comments, the Selective Service System noted its support of DOD’s views of the Selective Service System. Specifically, it cited the Secretary of Defense’s 2011 testimony in support of maintaining registration as a mechanism to ensure the department is prepared for an unexpected event. The Selective Service System’s comments are reprinted in appendix III. The Selective Service System also provided technical comments, which we incorporated as appropriate. We also provided the Office of Management and Budget a draft, but we did not receive any comments. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; and the Director of the Selective Service. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Major contributors to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has evaluated the necessity of the Selective Service System to meeting DOD’s future manpower requirements in excess of the all-volunteer force, we analyzed documentation and information obtained from interviews with relevant officials from the Office of the Under Secretary of Defense for Personnel and Readiness, Office of Management and Budget, and Selective Service System. To determine DOD’s manpower requirements, we reviewed DOD guidance and documents, including guidance on wartime manpower mobilization procedures and mobilization requirements. 
We also analyzed Selective Service System annual reports and budget justification documents, as well as input provided by the Selective Service System to the Office of Management and Budget. We reviewed relevant legislation establishing the Selective Service System and registration requirements in title 50 of the United States Code. We obtained DOD and Selective Service System officials’ perspectives on the role of the Selective Service System, as well as the Selective Service’s ability to meet its current need for inductees as defined by DOD’s manpower mobilization requirements. To obtain criteria for how frequently agencies should reevaluate their missions, we consulted our body of work on this subject. To review the fiscal and national security considerations of various alternatives to the Selective Service System, we obtained cost estimates from Selective Service System officials for two scenarios involving reducing or eliminating the Selective Service System: (1) disestablishing the Selective Service System and (2) placing the agency in a standby mode while having it continue to register potential draftees. We interviewed Selective Service System officials to identify their assumptions and sources for calculating the costs to implement these two scenarios. To assess the reliability of their cost estimates, we gathered and analyzed the agency’s budget documents to verify their calculations and assumptions and provided updates to the estimates for the Selective Service System to review. To assess the reliability of computer-processed data used to estimate costs, we interviewed Selective Service System officials and obtained documentation from the Department of the Interior to confirm the data and internal controls used in the system. We determined that the data were sufficiently reliable for the purposes of this audit. 
We also interviewed DOD and Selective Service System officials to identify and describe federal or state agencies or comparable databases that could replace the Selective Service System’s registration database. We obtained DOD and Selective Service System officials’ perspectives about the considerations and potential limitations involved in using another agency or database, as well as factors that could affect the cost and feasibility of another agency or database being used to perform the functions of the Selective Service System. We also reviewed GAO’s previous reports on the Selective Service System. We conducted this performance audit from February to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Brenda S. Farrell, (202) 512-3604 or farrellB@gao.gov. In addition to the contact above, Margaret Best, Assistant Director, Melissa Blanco, Greg Marchand, Charles Perdue, Meghan Perez, Bev Schladt, and Erik Wilkins-McKee made key contributions to this report.
The Selective Service System is an independent agency in the executive branch. Its responsibilities include maintaining a database that will enable it to provide manpower to DOD in a national emergency, managing a program for conscientious objectors to satisfy their obligations through a program of civilian service, and ensuring the capability to register and induct medical personnel if directed to do so. Section 597 of the National Defense Authorization Act for Fiscal Year 2012 (Pub. L. No. 112-81) requires that GAO assess the military necessity of the Selective Service System and examine alternatives to its current structure. Specifically, GAO (1) determined the extent to which DOD has evaluated the necessity of the Selective Service System to meeting DOD’s future manpower requirements beyond the all-volunteer force and (2) reviewed the fiscal and national security considerations of various alternatives to the Selective Service System. GAO reviewed legislation, analyzed relevant documents, verified cost data provided by the Selective Service System, and interviewed DOD, Office of Management and Budget, and Selective Service System officials. The Department of Defense (DOD) has not recently evaluated the necessity of the Selective Service System to meeting DOD’s future manpower requirements for carrying out the defense strategy or reexamined time frames for inducting personnel in the event of a draft. DOD officials told GAO that the Selective Service System provides a low-cost insurance policy in case a draft is ever necessary. The Selective Service System maintains a structure that would help ensure the equity and credibility of a draft. For example, the Selective Service System manages the registration of males aged 18 through 25 and maintains no-cost agreements with organizations that would offer alternative service to conscientious objectors. 
The Selective Service System also has unpaid volunteers who could be activated as soon as a draft is enacted to review claims for deferment. However, DOD has not used the draft since 1973, and because of its reliance and emphasis on the all-volunteer force, DOD has not reevaluated requirements for the Selective Service System since 1994, although significant changes to the national security environment have occurred since that time. Periodically reevaluating an agency’s requirements is critical to helping ensure that resources are appropriately matched to requirements that represent today’s environment. Selective Service System officials expressed concern that, as currently resourced, they cannot meet DOD’s requirements to deliver inductees without jeopardizing the fairness and equity of the draft. However, the lack of an updated requirement from DOD presents challenges to policymakers for determining whether the Selective Service System is properly resourced or necessary. Restructuring or disestablishing the Selective Service System would require consideration of various fiscal and national security implications. GAO reviewed data on costs and savings associated with maintaining the Selective Service System’s current operations, operating in a deep standby mode with active registration, and disestablishing the Selective Service System altogether. If Congress disestablishes the Selective Service System it would need to amend the Military Selective Service Act and potentially other laws involving the Selective Service System. There are also limitations that would need to be considered if Selective Service System functions were transferred to another agency. Selective Service System officials said that while other databases could be used for a registration database, these databases might not lead to a fair and equitable draft because they would not be as complete and would therefore put some portions of the population at a higher risk of being drafted than others. 
GAO recommends that DOD (1) evaluate its requirements for the Selective Service System in light of recent strategic guidance and (2) establish a process of periodically reevaluating these requirements. In written comments on a draft of this report, DOD agreed with the recommendations.
Under a variety of statutes, federal employees, including postal workers, can file a complaint alleging unlawful employment discrimination. Each discrimination complaint contains two key elements that provide information about the nature of the conflict. The first of these two elements is the “basis” of the allegation under federal antidiscrimination law. An employee can allege discrimination on any of seven bases—race, color, national origin, sex, religion, age, and disability. In addition, federal employees can claim an eighth basis—reprisal—if they believe that they have been retaliated against for having filed a complaint, participated in an investigation of a complaint, or opposed a prohibited personnel practice. Depending upon the employee’s situation, he or she can claim more than one basis when filing an EEO complaint. The second of the two elements that help define the nature of the conflict in a discrimination complaint is the “issue”—that is, the specific condition or event that is the subject of the complaint. Issues that employees can file complaints about include nonsexual and sexual harassment, nonselection for promotion, performance evaluations, duties that are assigned to them, and disciplinary actions (e.g., demotion, reprimand, suspension, and termination). (See app. I for a listing of categories of issues). As is true with respect to bases for complaints, an employee can raise multiple issues in a single complaint. Agencies are required by regulations (29 C.F.R. 1614.602) and the EEOC Federal Sector Complaint Processing Manual, Equal Employment Opportunity Management Directive (EEO MD)-110 to report annually to EEOC data about the bases and issues cited in complaints, along with other complaint-related statistics. EEOC compiles the data from the agencies for publication in the annual Federal Sector Report on EEO Complaints Processing and Appeals. 
According to the Management Directive, “The analyses of the data collected enable the EEOC to assist in refining the efficiency and effectiveness of the Federal EEO process.” This objective conforms with one of the goals contained in EEOC’s Annual Performance Plans for fiscal years 1999 and 2000. Likewise, as indicators of the nature and extent of workplace conflict, these data could be important to EEOC as it carries out its broader mission, which, as stated in the agency’s Strategic Plan, “is to promote equal opportunity in employment by enforcing the federal civil rights employment laws through administrative and judicial actions, and education and technical assistance.” In assessing why the data collected and reported by EEOC were not helpful in answering fundamental questions about the nature and extent of conflict in the federal workplace, we examined several sources. We reviewed instructions for EEOC Form 462, Annual Federal Equal Employment Opportunity Statistical Report of Discrimination Complaints, the form that agencies use to report complaint basis and issue data to EEOC, particularly part IV of the form, Summary of Bases and Issues in Complaints Filed (see app. I for a copy of part IV of EEOC Form 462.) We examined statistics on complaint bases and issues published in EEOC’s Federal Sector Report on EEO Complaints Processing and Appeals for fiscal years 1991 to 1997. Because postal workers accounted for about half of the discrimination complaints federal workers filed in fiscal year 1997, we obtained and analyzed forms 462 covering fiscal years 1991 to 1997 that the Postal Service submitted to EEOC in order to compare statistics for the postal workforce with the nonpostal workforce. In addition, the Postal Service provided us additional data on bases and issues generated by its complaint information system. We did not examine forms 462 for nonpostal agencies as we did for the Postal Service. 
Although Form 462 data that each agency submits show the number of times the different issues were raised in each basis category, EEOC does not aggregate these data from all agencies to prepare a consolidated Form 462 (part IV). At our request, EEOC prepared a consolidated Form 462 (part IV). Because EEOC does not routinely compile data this way, we requested this information only for fiscal year 1997. EEOC provided data for all federal agencies and, by subtracting Postal Service data, also provided data for nonpostal agencies. Further, we spoke with officials at EEOC and the Postal Service and representatives of the Council of Federal EEO and Civil Rights Executives. These officials provided observations about trends in the bases for and issues cited in complaints. Their comments, they said, were based on their experiences, rather than on specific studies. In addition, Council members from the Departments of Treasury and the Army provided information on how their respective agencies report complaint basis and issue data. Finally, we reviewed sections of EEOC’s Strategic Plan and its Annual Performance Plans for fiscal years 1999 and 2000 pertaining to the agency’s federal sector operations. We requested comments on a draft of this report from the Chairwoman, EEOC, and the Postmaster General. Their comments are discussed near the end of this report. We did our work from October 1998 through March 1999 in accordance with generally accepted government auditing standards. EEOC does not collect relevant data in a way that would help answer some fundamental questions about the nature and extent of workplace conflict alleged in federal employees’ discrimination complaints. Among the kinds of questions that cannot be answered are: How many individuals filed complaints? In how many complaints was each of the bases for discrimination alleged? 
What were the most frequently cited issues in employees’ discrimination complaints and in how many complaints was each of the issues cited? Answers to such questions would help decisionmakers and program managers understand the extent to which different categories of employees are filing complaints and the conditions or events that are causing them to allege discrimination. One fundamental question that cannot be answered is the number of individual employees who have filed complaints. EEOC does not collect data on the number of employees who file complaints, nor on how often individual employees file complaints. These numbers would be crucial to an analysis of the extent to which the increase in the number of complaints in the 1990s (see p. 1) was due to individuals filing first-time complaints or included individuals who had filed other complaints in the past. Without data on the number of complainants and the frequency of their complaints, decisionmakers do not have a clear picture of the nature and extent of alleged discrimination in the workplace and the actions that may be necessary to deal with these allegations. For example, a number of factors indicate that the increase in the number of discrimination complaints does not necessarily signify an equivalent increase in the actual number of individuals filing complaints. First, an undetermined number of federal employees have filed multiple complaints. According to EEOC and Postal Service officials and representatives of the Council of EEO and Civil Rights Executives, while they could not readily provide figures, it has been their experience that a small number of employees—often referred to as “repeat filers”—account for a disproportionate share of complaints. 
Additionally, an EEOC workgroup that reviewed the federal employee discrimination complaint process reported that the number of cases in the system was “swollen” by employees filing “spin-off complaints”—new complaints challenging the processing of existing complaints. Further, the workgroup found that the number of complaints was “unnecessarily multiplied” by agencies fragmenting some claims involving a number of different allegations by the same employee into separate complaints rather than consolidating these claims into one complaint. In addition, there has been an increase in the number of complaints alleging reprisal, which, for the most part, involve claims of retaliation by employees who have previously participated in the complaint process. Questions about the prevalence of bases and issues in the universe of complaints are not answerable because of the manner in which EEOC collects these data. Accurate answers to such questions are necessary to help decisionmakers and program managers discern trends in workplace conflicts, understand the sources of conflict, and plan corrective actions. These data could give managers a clearer picture of the extent to which particular groups of employees may feel aggrieved and the conditions or events that trigger their complaints. For example, managers would be able to better discern trends in the numbers of black employees alleging racial discrimination and the issues they have raised most frequently. EEOC prescribes a format for agencies to report complaint bases and issues data (see app. I). The form is a matrix that, according to EEOC instructions, requires agencies to associate the basis or bases of an individual complaint with the issue or issues raised in that complaint. However, there are problems in counting bases and issues this way. Complaints with two or more bases and/or issues can result in the same basis and/or issue being counted more than once. 
For example, suppose an employee specifies that race, sex, age, and disability discrimination were the bases for his or her complaint, while nonselection for promotion, a poor performance evaluation, and an assignment to noncareer-enhancing duties were the issues. In preparing the report to EEOC, the agency would record each of the three issues in the columns corresponding with each of the four bases. Table 1 illustrates how this complaint would fit into the preparation of the overall report to EEOC. The table is a matrix with excerpts of similar rows and columns that appear on the form submitted to EEOC (see app. I). To determine the number of times each basis is alleged, EEOC instructs agencies to add the number of times each issue was recorded in each column of the matrix. In this illustration, the agency would count each basis three times—once for each of the three issues recorded in each of the columns. To determine the number of times each issue is alleged, EEOC instructs agencies to add each row of the matrix. In this illustration, the agency would count each issue four times—once for each of the four bases under which they were recorded. Overall, the agency would report that 12 bases and 12 issues were alleged in this single hypothetical complaint rather than the 4 bases and 3 issues actually cited. EEOC uses these data from agencies to compile the number of times each basis and each issue was alleged governmentwide, which it publishes in the annual Federal Sector Report on EEO Complaints Processing and Appeals. The figure reported for the number of times that a particular basis was alleged, however, represents the sum of the number of times that the various issues were recorded in the column under that basis, not the actual number of complaints in which that basis was alleged. 
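The overcounting inherent in the matrix method described above can be illustrated with a short sketch. This is purely illustrative: the basis and issue labels come from the hypothetical complaint in the example, and the tallying simply mimics the row-and-column sums that EEOC’s instructions prescribe.

```python
# Illustrative sketch of the Form 462 (part IV) counting method described
# in the report. A single complaint citing 4 bases and 3 issues is recorded
# once per basis-issue pair, so the prescribed row and column sums overcount.

complaint = {
    "bases": ["race", "sex", "age", "disability"],
    "issues": ["nonselection", "performance evaluation", "assigned duties"],
}

# Build the matrix: each issue is recorded in the column of every basis alleged.
matrix = {(b, i): 1 for b in complaint["bases"] for i in complaint["issues"]}

# Per EEOC instructions, each basis tally is the sum of its column...
basis_counts = {b: sum(v for (bb, _), v in matrix.items() if bb == b)
                for b in complaint["bases"]}
# ...and each issue tally is the sum of its row.
issue_counts = {i: sum(v for (_, ii), v in matrix.items() if ii == i)
                for i in complaint["issues"]}

print(sum(basis_counts.values()))  # 12 bases reported, though only 4 were cited
print(sum(issue_counts.values()))  # 12 issues reported, though only 3 were cited
```

As the sketch shows, the reported tallies grow with the product of bases and issues in a complaint, which is why the published figures cannot be read as the number of complaints in which a basis or issue was actually alleged.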
Similarly, the figure reported for the number of times that a particular issue was cited represents the sum of the number of times the issue was recorded under each of the bases, not the actual number of complaints in which that issue was cited. EEOC does not know the extent to which bases and issues may be counted more than once for the same complaint. EEOC’s Complaints Adjudication Division Director said that while the reporting procedures result in overreporting of the number of times the different bases and issues were alleged, he believes that the data provide a “fair approximation” of bases and issues included in complaints. He agreed, however, that recording data in a way that would establish the number of times the different bases and issues are cited in the universe of complaints would make sense. The way EEOC collects basis and issue data does, however, yield some insight into the importance of the different issues to the different categories of complainants. The form that each agency is to complete shows the issues raised under each basis and the number of times that a particular issue was raised. With these data, an agency manager can determine, for example, the issues that female employees alleging sex discrimination complained about and the number of times each of those issues was raised. The one essential statistic that is missing, however, is the actual number of complaints made by women alleging sex discrimination. Further, while EEOC collects information showing the extent to which specific issues are associated with specific bases at each agency, it does not aggregate this information for all federal agencies. The discrimination complaint data that EEOC has collected and reported are of questionable reliability because (1) agencies did not always report data consistently, completely, or accurately and (2) EEOC did not have procedures that ensured the data were reliable. 
Federal agencies take varying approaches to reporting data on complaint bases and issues to EEOC. We reviewed the Postal Service’s data submissions to EEOC, as well as the process to prepare these submissions, and found that the agency did not follow EEOC’s instructions to associate the issue or issues raised in each complaint with the basis or bases involved. For each complaint, regardless of the number of issues raised by the employee, the Postal Service identified and reported only one “primary” issue. In commenting on a draft of this report, the Postal Service’s Manager, EEO Compliance and Appeals, said that the Postal Service adopted this approach to give the data more focus by identifying the primary issues driving postal workers’ discrimination complaints. We did not review reports and reporting practices among nonpostal agencies for consistency and attention to completeness and accuracy. However, we spoke with officials from two large nonpostal agencies who indicated that they followed EEOC instructions, which, as discussed above, can result in an overcounting of bases and issues. EEOC’s Complaints Adjudication Division Director said that agencies might be using different approaches to reporting the data. However, he said that he did not know the extent to which such variation may exist because EEOC had not examined how agencies complete their reports. The issue of incomplete or inaccurate reporting of data was evident in our analysis of the data that the Postal Service reported to EEOC for fiscal years 1992 and 1995 through 1997. We analyzed Postal Service statistics because postal workers accounted for about half of the discrimination complaints filed by federal employees in fiscal year 1997. In addition to not completely reporting all issues raised in complaints, we found that the Postal Service’s statistical reports to EEOC for fiscal years 1996 and 1997 did not include data for certain categories of issues. 
Further, we found certain underreporting of bases for complaints and issues by the Postal Service in fiscal year 1995. Postal Service officials also told us that complaint statistics were incomplete for fiscal year 1992. Another, especially significant, reporting error we identified involved the number of race-based complaints. As a result of a computer programming error, the number of complaints reported by the Postal Service to contain allegations by white postal workers of discrimination based on race was overstated in fiscal years 1996 and 1997 by about 500 percent. After we brought these errors to the attention of Postal Service officials, they provided corrected data to us and EEOC for all errors except those relating to the fiscal year 1992 data. Postal Service officials said that because EEO-related staff had been reassigned during restructuring of the Postal Service that began in fiscal year 1992, not all complaints were properly accounted for that year. The officials also said that the computer program used to generate reports to EEOC had been modified to correct the fault in the way race-based complaints are to be counted. Errors in data reported to or by EEOC were a recurring problem in our work identifying trends in federal sector EEO complaints. In addition to the Postal Service data errors, during our prior work, we found errors for nonpostal agencies’ data. EEOC does not audit or verify the data it receives from agencies and publishes in the annual Federal Sector Report on EEO Complaints Processing and Appeals because of time considerations and staff limitations, according to the Complaints Adjudication Division Director. He said, however, that EEOC staff review agencies’ data to identify figures that appear unusual or inconsistent with other data reported. As we observed, this procedure did not ensure the reliability of the data EEOC collected and put in print. 
For example, in preparing the aggregated figures that it published in its federal sector report for fiscal year 1996, EEOC used the Postal Service’s vastly overstated data on racial discrimination complaints by white employees, thereby skewing the portrayal of discrimination complaint trends governmentwide. Data about the bases for complaints and the issues giving rise to them can be valuable in gauging conflict in the federal workplace. However, EEOC does not collect or report relevant agency data in a way that would help answer fundamental questions about the number of complainants and the prevalence of bases and issues in the universe of complaints. In addition, some of the data collected and reported by EEOC have lacked the necessary reliability because agencies did not report their data consistently, completely, or accurately, and because EEOC did not have procedures that ensured the data were reliable. Consequently, the data do not provide a sound basis for decisionmakers, program managers, and EEOC to understand the nature and extent of workplace conflict, develop strategies to deal with conflict, and measure the results of interventions. To help ensure that relevant and reliable data are available to decisionmakers and program managers, we recommend that the Chairwoman, EEOC, take steps to enable EEOC to collect and publish data on complaint bases and issues in a manner that would allow fundamental questions about the number of complainants and the prevalence of bases and issues in the universe of complaints to be answered, and develop procedures to help ensure that agencies report data consistently, completely, and accurately. We received comments on a draft of this report from EEOC and the Postal Service. In its written comments (see app. II), EEOC agreed that the data collected from federal agencies could be more comprehensive and accurate. 
EEOC said that it would expedite its efforts to revise the instructions for data collection and that it would address the concerns we raised in this report. EEOC further stated that, given the required review and approval processes, including allowing time for federal agencies to comment, it would take about 8 months to issue the changes and an additional 12 months for the agencies to report complaint data to EEOC in accordance with the new instructions. Under EEOC’s timetable, it will be several years before EEOC’s annual federal sector reports reflect the results of the agency’s efforts to revise instructions for data collection and to promote more comprehensive and reliable reporting. EEOC’s revised instructions would be issued at the beginning of fiscal year 2000, and the first complete fiscal year to which the instructions would apply would be fiscal year 2001. Agencies’ statistical reports for fiscal year 2001 would not be submitted to EEOC until fiscal year 2002 for later publication in the Federal Sector Report on EEO Complaints Processing and Appeals. EEOC did not indicate, however, when the first federal sector report containing these data would be published. EEOC also said that it would take action to address our concerns about data consistency, completeness, and accuracy. To deal with problems in the reliability of the data collected from agencies, EEOC said that it would urge agencies to give higher priority to the accuracy of their data. EEOC said it will ask agencies to certify the reliability of the data they provide and to explain how they ensure the quality of their data. In addition, EEOC said that if additional resources it has requested for fiscal year 2000 become available, it would be able to conduct on-site reviews to assess the reliability of agency data, more closely examine the nature of workplace disputes, and work with agencies to improve their EEO programs. 
We believe that the actions proposed by EEOC are generally responsive to our recommendation and would add some measure of reliability to the data it collects and reports. By urging agencies to give higher priority to data reliability, EEOC would be reiterating its current policy, as stated in Management Directive 110, that “Every effort should be made to ensure accurate recordkeeping and reporting of federal EEO data and that all data submissions are fully responsive and in compliance with information requests.” By proposing that agencies certify the reliability of their data and explain how they ensure data quality, EEOC would provide a mechanism for holding agencies more accountable for producing reliable and accurate data and, if agencies comply, would have some basis to assess the extent to which an agency’s processes ensure the data’s reliability and accuracy. An assessment of agencies’ quality control procedures and consideration of discrepancies contained in previous data submissions, among other factors, would enable EEOC to select agencies for any future on-site reviews based on the estimated risk of agencies submitting unreliable data. On April 9, 1999, the Postal Service’s Manager, EEO Compliance and Appeals, provided oral comments on a draft of this report. He said that the report, in general, accurately describes the data shortcomings and opens the door for dialogue on how data could be collected in a manner that would better serve decisionmakers. He agreed with the recommendation that data be collected on the number of complainants. In addition, he suggested that data be collected on the number of repeat filers. The official said it has been his experience that between 60 and 70 individuals account for every 100 complaints in a fiscal year. He also suggested that EEOC collect data about the race and sex of complainants along basis and issue lines. He further suggested that similar data be collected for individuals seeking counseling. 
The official said that the Postal Service’s complaint information system is capable of producing this kind of information because it tracks individuals by their Social Security number. For example, he said that his office has been able to provide Postal Service management with complaint data for each of the Service’s 85 districts in order to identify the extent of workplace conflicts at the different locations and the primary issues driving the conflicts. He said, however, that the issues listed on EEOC Form 462 (see app. I) need to be revised to make them more relevant to the agencies reporting to EEOC. He suggested that EEOC convene a working group of federal agency representatives to deal with this and other data issues. We believe the Postal Service official’s suggestion that EEOC develop a working group of federal agency representatives to participate in revising data collection requirements would allow stakeholders to be active partners in the development of data collection requirements that affect them. Although we did not identify all of the data that would be useful to decisionmakers and program managers, a working group would provide a forum for developing a consensus on data needs. It might be appropriate to include congressional stakeholders in any working group because of their oversight and policymaking responsibilities. It should be noted that other agencies that deal with redress and human capital issues—the Office of Personnel Management and the Merit Systems Protection Board—have working groups or panels to assist them in carrying out their missions. The Postal Service official also said it would be helpful if EEOC revised its system of collecting data to facilitate more timely collection and publication of federal sector EEO complaint data. He noted that the federal sector reports are published nearly 2 years after the fiscal year’s end. More timely data, he said, would make data more useful to decisionmakers. 
We agree that more timely data are more likely to be useful to decisionmakers. Although timeliness is not an issue we reviewed, we did observe what appeared to be lengthy periods before data were made available. For example, EEOC published the fiscal year 1997 Federal Sector Report on EEO Complaints Processing and Appeals on April 27, 1999, 18 months after the end of fiscal year 1997. The working group proposed by the Postal Service official could be a forum for further exploring this issue. As agreed with your offices, we plan no further distribution of this report until 30 days after its issuance, unless you publicly release its contents earlier. We will then send copies of this report to Senators Daniel K. Akaka, Thad Cochran, Joseph I. Lieberman, and Fred Thompson; and Representatives Robert E. Andrews, John A. Boehner, Dan Burton, William L. Clay, Chaka Fattah, William F. Goodling, Steny H. Hoyer, Jim Kolbe, John M. McHugh, David Obey, Harold Rogers, Joe Scarborough, Jose E. Serrano, Henry A. Waxman, and C. W. Bill Young in their capacities as Chair or Ranking Minority Members of Senate and House Committees and Subcommittees. We will also send copies to The Honorable Ida L. Castro, Chairwoman, EEOC; The Honorable William J. Henderson, Postmaster General; The Honorable Janice R. Lachance, Director, Office of Personnel Management; The Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will make copies of this report available to others on request. Major contributors to this report are listed in appendix III. Please contact me on (202) 512-8676 if you or your staff have any questions concerning this report. Stephen E. Altman, Assistant Director, Federal Management and Workforce Issues Anthony P. Lofaro, Evaluator-in-Charge Gary V. Lawson, Senior Evaluator Sharon T. Hogan, Evaluator The first copy of each GAO report and testimony is free. Additional copies are $2 each. 
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the nature and extent of workplace conflicts that underlie the rising number of discrimination cases, focusing on: (1) the statutory bases (e.g., race, sex, or disability discrimination) under which employees filed complaints; (2) the kinds of issues (e.g., nonselection for promotion, harassment) that were cited in these complaints; and (3) why the data collected and reported by the Equal Employment Opportunity Commission (EEOC) were not helpful in answering the questions raised. GAO noted that: (1) relevant and reliable data about the bases for federal employee discrimination complaints and the specific issues giving rise to these complaints would help decisionmakers and program managers understand the nature and extent of conflict in the federal workplace; (2) these data could also be used to help plan corrective actions and measure the results of interventions; (3) however, EEOC does not collect and report data about bases and issues in a way that would help answer some fundamental questions about the nature and extent of workplace conflicts, such as: (a) how many individuals filed complaints; (b) in how many complaints each of the bases for discrimination was alleged; and (c) the most frequently cited issues in employees' discrimination complaints and in how many complaints was each of the issues cited; (4) moreover, the reliability of the data that EEOC collects from agencies and reports is questionable; (5) GAO found that agencies reported basis and issue data to EEOC in an inconsistent manner; (6) GAO also found that agencies did not report to EEOC some of the data it requested and reported some other data incorrectly; and (7) in addition, because EEOC did not have procedures that ensured the reliability of the data it collected from agencies, it published some unreliable data in its annual Federal Sector Report on Equal Employment Opportunity Complaints Processing and Appeals.
The Corporation for National and Community Service was created to help meet community needs in education, the environment, and public safety and to expand educational opportunity by rewarding individuals who participate in national service. The Corporation is part of USA Freedom Corps, a White House initiative to foster a culture of citizenship, service, and responsibility and help all Americans answer the President’s call to service. The Corporation receives appropriations to fund program operations and the National Service Trust. The Corporation makes grants from its program appropriations to help grant recipients carry out national service programs. AmeriCorps is one of three national service programs the Corporation oversees. Most of the grant funding from the Corporation for AmeriCorps programs goes to state service commissions, which award subgrants to nonprofit groups and agencies that enroll the AmeriCorps participants. Participants in the AmeriCorps program can receive a stipend as well as health benefits and childcare coverage. For example, about one-half of AmeriCorps participants received an annual living allowance of $9,300 and health benefits. Participants who successfully complete a required term of service earn an education award that can be used to pay for undergraduate or graduate school or to pay back qualified student loans. In exchange for a term of service, full-time AmeriCorps participants earned an education award of $4,725 in program year 2002. Participants have up to 7 years from the date of completion of service to use the education award. AmeriCorps also enrolls participants on a part-time basis and as “education awards only” participants. Part-time participants who serve 900 or fewer hours annually earn education awards proportional to those earned by full-time participants. 
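The proportional-award rule described above can be expressed as simple arithmetic. The sketch below uses the $4,725 full-time award cited in this testimony; the 1,700-hour full-time service term is an assumption included for illustration only, since the testimony does not state the hour requirements.

```python
# Illustrative sketch of the proportional education award rule.
# The $4,725 full-time award is from the testimony; the 1,700-hour
# full-time term is an assumption for illustration only.
FULL_TIME_AWARD = 4725.00
FULL_TIME_HOURS = 1700  # assumed full-time service term

def education_award(hours_served: int) -> float:
    """Return the education award earned for a completed term of service."""
    if hours_served >= FULL_TIME_HOURS:
        return FULL_TIME_AWARD
    # Part-time awards are proportional to the full-time award.
    return round(FULL_TIME_AWARD * hours_served / FULL_TIME_HOURS, 2)

print(education_award(1700))  # full-time participant
print(education_award(900))   # part-time participant (900 hours or fewer)
```

Under this assumed proration, a 900-hour part-time participant would earn a little over half of the full-time award.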
Under the “education awards only” program, AmeriCorps does not pay the participant a living allowance or other benefits, but provides grant funding for administrative purposes only, about $400 per full-time participant annually. However, each participant receives an education award equivalent to that earned by a paid AmeriCorps participant. The number of AmeriCorps participants increased by nearly 20,000 from 1998 to 2001. The program year 2002 data indicate the number of positions awarded will decrease by about 8,000. (See figure 1.) In November 2002, the Corporation suspended enrollments in AmeriCorps because total enrollments were potentially higher than the Corporation had expected. No new funds had been requested for or appropriated to the Trust for fiscal year 2002, and under the continuing resolution at the start of fiscal year 2003, no new funds would be deposited into the Trust until the Corporation’s fiscal year 2003 appropriations were enacted. The Corporation concluded that if its grantees and subgrantees were to fully enroll new participants up to the maximum number of enrollments the Corporation had approved in its grants, the Trust would not have a sufficient amount to provide the education awards to those participants. Enrollments in AmeriCorps were frozen from November 2002 through March 2003. Three factors contributed to the Corporation’s need to suspend enrollments in AmeriCorps. Although the Corporation specified the maximum number of new participants in the grants it awarded, the Corporation did not recognize its obligation to fund participant education awards until it actually paid the benefits. Had the Corporation properly tracked and recorded its obligations in the Trust at the time of grant award, when it approved new enrollments, it likely would not have needed to suspend enrollments. 
In addition, there was little, if any, communication among the AmeriCorps program office, the grants management office, and the Trust about the number of positions that the Trust could support. Furthermore, because the Corporation allowed grantees various flexibilities and did not require them to provide timely enrollment information, the Corporation and AmeriCorps managers could not be certain about the number of participants. The Corporation did not appropriately record or track its obligations for education awards to program participants. Generally, an agency incurs an obligation for the amount of the grant award with the execution of a grant agreement. The Corporation enters into grant agreements with state service commissions in which it specifies the budget and project period of the award, the total number of positions approved, the total amount awarded for program costs for the approved positions, and the terms of acceptance. The award for the program costs is used to pay participants’ stipends and health and child care coverage. The Corporation incurs an obligation for these program costs at the time of grant award. While the costs of education awards for the new participants are not specified in the grants, in the grant agreements the Corporation commits to funding education awards for all of the qualified positions initially approved in a grant if the subgrantee enrolls all of the participants before the Corporation modifies the terms or conditions of the grant. In other words, upon award of the grant, the Corporation, at a minimum, has accepted a “legal duty … which could mature into a legal liability by virtue of actions on the part of the other party beyond the control of the United States.” However, the Corporation has concluded that it is not necessary to obligate funds until an individual actually enrolls in AmeriCorps. Therefore, the Corporation recorded education award obligations on an outlay basis. 
That is, obligations were recorded at the time of the quarterly drawdown of amounts for education awards from the Trust. By failing to recognize and record its obligations at the time of grant award, the Corporation had no assurance that the number of positions approved in grant awards did not exceed the number of education awards the Trust could support. Proper recording of obligations serves to protect the government by ensuring that it has adequate budget authority to cover all of its commitments and prevents agencies from over-obligating their budget authority. Corporation executives we interviewed said that there was little if any coordination between the AmeriCorps program office and officials responsible for the management of the Trust about the number of positions that the Trust could support. The AmeriCorps director said that she considered the grant budget independent from the Trust and she neither consulted with nor received direction from the Trust director when making decisions about the grants. In addition, in recent years, AmeriCorps has tried to increase the number of participants by enrolling them in the “education awards only” program. Under this program, which was an effort to lower the per-participant program cost, AmeriCorps provides funding to grantees for administrative purposes only, currently about $400 per full-time participant annually. Increasing the number of participants in this way is at a low cost to the AmeriCorps program appropriation, but at full cost to the Trust, which funds the education awards, because each participant receives an education award equivalent to that earned by a paid AmeriCorps participant. Consequently, the number of positions funded by AmeriCorps grants was not reconciled with the number supportable by the Trust. 
According to Corporation officials we spoke with, the Trust’s funding needs were based on an expected enrollment of 50,000, while the AmeriCorps program office approved grants for about 75,000 participants. Corporation officials also said that prior to suspending enrollments in AmeriCorps, the Trust was so well funded it did not warrant their attention. They told us that early in the AmeriCorps program, a goal of 50,000 participants annually was used for Trust budgeting purposes. However, it was found that fewer than that number of participants enrolled, and not all of those who participated earned education awards. Additionally, a Corporation budget official said that in the past those who earned education awards were not using them as quickly as expected. Even as the number of AmeriCorps participants grew, the Trust’s accounting records showed an unobligated balance that was high enough for Congress to rescind $111 million over fiscal years 2000 and 2001, resulting in the deobligation of the Trust by this amount. Given this history, Corporation managers did not see the need to reconcile the number of positions created by grant funding with the number the Trust could support. The Trust balance was not viewed as a constraining factor. Because the number of positions approved in the grants was not reconciled with the Trust before grants were awarded, there was the potential for grantees to enroll more participants than the Trust could support. Two program management policies affected the number and type of participants and, therefore, the use of Trust funds. One policy permitted grantees to over enroll participants under certain circumstances with approval from their AmeriCorps program officer. Specifically, the policy allowed grantees to over enroll up to 20 percent. The program year 2002-03 data indicate that while only a few of the grantees increased their enrollment, some increased theirs by more than 20 percent. 
Another policy allowed grantees to convert positions from full-time to part-time as long as the total number of full-time equivalents supported by the grant did not change. While this practice did not affect the program funds, it did affect the Trust. After the enrollments were suspended, Corporation officials determined that part-time participants used their education awards at a higher rate than full-time participants; the growth in the number of part-time participants therefore resulted in a relatively higher level of education award use. The Corporation did not have reliable data on the number of AmeriCorps participants during the period leading up to the suspension. Enrollments are recorded by grantees through the Corporation’s Web-Based Reporting System (WBRS). While the enrollment information in WBRS was uploaded into the Corporation’s database and used to track education award obligations on a weekly basis, Corporation officials said that discrepancies existed between the number of participants enrolled and the number the Corporation was aware of because of the length of time between when a participant started to serve and when the grantee entered the information into WBRS. A Corporation official said that it was not unheard of for some grantees to be 60 to 90 days late in entering an enrollment into WBRS. The flexibility grantees had to change the number and type of participants, coupled with delays in receiving information on enrollments, meant that the Corporation and AmeriCorps managers could not be certain about the number of participants. Corporation officials said that the resulting lack of confidence in the data was a contributing factor in the decision to suspend enrollments. In response to concerns that the AmeriCorps program may have enrolled participants without adequately providing for their education awards, the Corporation has developed several new policies. 
While the Corporation is modifying its practice of when it records obligations, the Corporation overlooks the legal duty it incurs at the time of grant award. Other policy changes are directed to improving communication among key executives, limiting grantees’ flexibilities and requiring more timely information on participants. While these policies were only recently introduced, they could, if implemented, help the Corporation keep track of the day-to-day aspects of the AmeriCorps program and provide information needed to monitor the use of the Trust in order to determine whether the Corporation should make adjustments, such as deobligating excess funds. However, data integration problems between WBRS and the program the Corporation uses to track the education awards earned by AmeriCorps participants may hamper the effectiveness of the new procedures. The Corporation is in the process of modifying its practices regarding when it will record obligations. The Corporation’s General Counsel explained that the Corporation will record obligations at the time of enrollment, instead of on a quarterly drawdown basis and that the obligations will be based on estimates of what these enrolled members will draw down in the future. The Corporation is of the opinion that it does not incur an obligation for an education award until the time of enrollment because it may modify the terms and conditions of a grant, including a reduction in the number of new participants the grantee may enroll, prior to the enrollment of all positions initially approved in a grant, to prevent a shortfall in the Trust. The General Counsel also said “…a binding agreement between the Government and an AmeriCorps member exists only upon the member’s authorized enrollment in the Trust.” While it may be true that the Corporation has no binding agreement with a participant until the participant enrolls in AmeriCorps, this is not the controlling consideration for fund control purposes. 
In our opinion, this view overlooks the legal duty the Corporation incurs at the time of grant award, when it commits to funding a specified number of participants, and the constraint imposed on the Corporation by the National and Community Service Act. Specifically, the act says “…[t]he Corporation may not approve positions as national service positions…for a fiscal year in excess of the number of positions for which the Corporation has sufficient available funds in the National Service Trust for that fiscal year…”. The Corporation, by its own admission, may modify the number of approved participants only if it amends the grant agreement to reduce the number of enrolled positions prior to enrollment. When a grant is awarded, the number of new participants approved in the grant establishes a legal duty that can mature into a legal liability for education awards by virtue of actions of the grantee, unless the Corporation modifies the grant prior to participant enrollment. While the Corporation may unilaterally reduce the number of authorized positions awarded to a grantee prior to participant enrollment, from the time of grant award until the Corporation acts to reduce the approved number of positions, the grantee and its subgrantee, not the Corporation, control the number of participants who may enroll, up to the maximum number of participants the Corporation has approved in the grant agreement. It is also significant that the grantee and subgrantee, by their actions in enrolling participants, ultimately control the amount of the Corporation’s liability. If the amount of the liability to the government is under the control of the grantee, not the Corporation, the government should obligate funds to cover the maximum amount of the liability. As more information becomes known, the Corporation should adjust the obligation—deobligate funds or increase the obligation level—as needed. 
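The obligational practice described above reduces to straightforward arithmetic: obligate the maximum potential liability at grant award, then adjust as actual enrollment information arrives. A minimal sketch, using the $4,725 award amount and the roughly 75,000 approved and 50,000 budgeted positions cited in this testimony:

```python
# Sketch of the obligation practice GAO describes: record the unmatured
# legal liability at grant award (approved positions x full award cost),
# then deobligate once the grant is modified to the supportable level.
# The award cost and position counts are figures cited in the testimony.
AWARD_COST = 4725  # full-time education award, program year 2002

def obligation_at_award(approved_positions: int) -> int:
    """Obligation to record when the grant is awarded."""
    return approved_positions * AWARD_COST

def adjusted_obligation(approved_positions: int, supportable_positions: int) -> int:
    """Reduced obligation after the grant is modified prior to enrollment."""
    return min(approved_positions, supportable_positions) * AWARD_COST

initial = obligation_at_award(75000)          # positions approved in grants
revised = adjusted_obligation(75000, 50000)   # Trust budgeted for 50,000
print(initial, revised, initial - revised)    # deobligated amount is the difference
```

Recording the larger figure up front is what ensures the budget authority check happens before grantees, rather than the Corporation, determine the final liability.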
The Corporation also said that at the time a member enrolls it would record its “…best estimate of the Government’s ultimate liability of education awards provided to members enrolled in the National Service Trust.” According to the Corporation’s General Counsel, the Corporation’s estimates of the amount that enrolled members will draw down are based on historical information, such as attrition rates and actual usage by participants who complete a term of service and earn an education award. It appears to us that the Corporation is confusing its accounting liability (projections booked in its accounting systems for financial statement purposes) with its legal liability (amounts to be recorded in its obligational accounting systems and tracked to ensure compliance with fiscal laws). One of the federal financial accounting standards states that a liability for proprietary accounting purposes is a probable and measurable future outflow or other sacrifice of resources as a result of past transactions or events. Traditionally, projections of accounting liability consider the same factors, such as historical trends, that are considered in the Corporation’s model. To track its obligations, the Corporation should be recording its unmatured legal liability for the education awards, which is the total cost associated with the enrollment of all approved positions. The Corporation’s obligation should be recorded as it is incurred and should be calculated by multiplying the number of approved positions in a grant by the total cost of a national service education award. Policy changes at Corporation headquarters are designed to improve communication between several key offices and officials. A major change is that the Trust balance is to be a limiting factor on grant awards and, therefore, enrollment levels. 
In addition, beginning with the 2003 grant cycle, one new policy calls for the AmeriCorps director to work with the grants director, the Chief Financial Officer (CFO), and the Trust director to compare projections of positions to be approved in grants with those supported by actual appropriations, and the Chief Executive Officer (CEO) will only approve the number of positions the Trust can support. Additionally, the CEO will approve all AmeriCorps grants after consultation with the CFO on the number of education awards that can be supported by the Trust. Also, the policy states that the CEO, CFO, the Trust director, and the AmeriCorps director will meet at least monthly to review and reconcile enrollment data and Trust data. Through bi-weekly reports, the AmeriCorps director and the Trust director are to keep the CEO and CFO informed of the number of approved and filled positions. The Trust director is to monitor factors relevant to forecasting Trust liabilities and report regularly to the CFO, highlighting deviations from assumptions in the model. Each month the CFO is to use actual enrollment data to re-evaluate the model for forecasting Trust liabilities. If the revision results in a need to change enrollment targets, the CFO will notify the CEO and AmeriCorps director immediately. The CEO will take appropriate action and report any such action to Congress, the Corporation’s Board, and the Office of Management and Budget. Regular meetings and attention to the enrollment data should help the Corporation keep track of the day-to-day aspects of the AmeriCorps program. Such updated information is an important step in monitoring the use of the Trust in order to determine whether the Corporation should make adjustments. 
For example, if the Corporation obligated the full cost for each of the positions approved at the time of grant award and later determined that many of the positions would not be filled, it could reduce the number of approved positions and deobligate some of the funds. The policy changes and new procedures were announced in January. We will continue to monitor the implementation of these policy changes. The Corporation has changed policies regarding its grantees’ ability to over enroll participants, replace participants who leave with new enrollees, and change positions from full-time to part-time. In a January 22, 2003, memorandum, the director of AmeriCorps cancelled the policy that allowed grantees to over enroll members by up to 20 percent over the ceiling established in the grant award in order to take account of attrition. Furthermore, an official said AmeriCorps now considers a position to be filled for the term of the grant once the grantee enrolls a participant, even if the participant later drops out of the program, whether or not an education award was earned. The official said that in the past, grantees could enroll a new member to serve out the balance of the term if grant funds were available. A Corporation official also said that there is a new policy that restricts grantees from converting full-time positions to part-time positions. Grantees must now request and receive approval from the Corporation before such changes can be made. Since grantees will no longer be permitted to modify the number and type of authorized positions, the Corporation’s ability to manage the AmeriCorps program should improve. Most 2003 grant positions have not yet been awarded; therefore, it is too early to tell whether these new policies will be effective. We will monitor these policies and assess the extent to which they have been implemented as we complete our work. 
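The statutory constraint discussed earlier, that the Corporation may not approve more national service positions than the Trust has funds to support, amounts to a pre-award capacity check. A hedged sketch of that check follows; the Trust balance shown is a hypothetical figure, while the award cost is the program year 2002 amount cited in this testimony.

```python
# Sketch of the pre-award capacity check implied by the new approval
# policy: compare positions requested in grants against the number of
# education awards the Trust balance can support. The Trust balance
# below is hypothetical; the award cost is from program year 2002.
AWARD_COST = 4725

def trust_capacity(trust_balance: int) -> int:
    """Maximum number of full education awards the Trust balance can fund."""
    return trust_balance // AWARD_COST

def positions_to_approve(requested: int, trust_balance: int) -> int:
    """Approve no more positions than the Trust can support."""
    return min(requested, trust_capacity(trust_balance))

# With a hypothetical balance sufficient for 50,000 awards, a request
# for 75,000 positions would be capped at 50,000.
print(positions_to_approve(75000, 236_250_000))
```

In practice the Corporation's forecasting model also discounts for attrition and award usage rates; this sketch shows only the worst-case check that full obligation at grant award would enforce.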
In January 2003 the Corporation informed all grantees that AmeriCorps will require timely reporting of participant information to ensure that the Trust database receives current information on the number of participants eligible for an education award. Grantees will be required to keep AmeriCorps informed of the number of participants offered positions and the number who accept and enroll and to document enrollment through WBRS no later than 30 days after participants start working. The memorandum warns grantees that failure to comply with this requirement could result in reductions in the number of positions or termination of the grant. Additionally, the memorandum directs state commissions and other AmeriCorps grantees—the organizations responsible for the oversight of subgrantees—to implement procedures to ensure that timely notification of participant commitments and enrollments is part of their review and oversight functions. Furthermore, the Corporation has made changes to WBRS, which is used to track participant, grant, and budget information. First, controls have been put in place to limit the number of positions listed in WBRS to no more than the number of approved positions. The Corporation’s Biweekly Trust Enrollment Summary, as of March 2003, shows that award totals are being tracked and compared with the data estimates in the Trust. However, officials told us that there are some data reconciliation problems between WBRS and the program used by the Corporation to track the education awards earned by AmeriCorps participants. Corporation staff have had to make manual adjustments to reconcile the data. Accurate and timely information about enrollments should help the Corporation and AmeriCorps manage the program. As grants are awarded, we will be able to assess whether the policies have been fully implemented. 
The Corporation’s new policies, if fully implemented, should help the Corporation manage the AmeriCorps program by providing better information on day-to-day operations. However, without obligating the full amount associated with all of the positions authorized in the grants, the Corporation remains at risk of having the actual number of enrollments exceed the estimated number the Trust can support. We will monitor the implementation of the Corporation’s new policies as we continue our review. For further information regarding this statement, please call Cornelia M. Ashby at (202) 512-8403 or Susan A. Poling at (202) 512-5644. Individuals making key contributions to this testimony included Carolyn M. Taylor, Tom Armstrong, Anthony DeFrank, Joel Marus, and Hannah Laufe.

Appendix I: Obligational Practices of the Corporation for National and Community Service

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2002, the Corporation for National and Community Service suspended enrollments in the AmeriCorps program due to concern that the National Service Trust may not contain enough funds to meet the education award obligations resulting from AmeriCorps enrollments. This testimony reflects GAO's preliminary review of the factors that contributed to the need to suspend enrollments and GAO's preliminary assessment of the Corporation's proposed changes. The number of participants enrolled in AmeriCorps increased by about 20,000 from program year 1998 to program year 2001. However, the number of AmeriCorps participants was not reconciled with the number of education awards that the National Service Trust could support. GAO identified several factors that led the Corporation to suspend enrollments. The factors included inappropriate obligation practices, little or no communication among key Corporation executives, too much flexibility given to grantees regarding enrollments, and unreliable data on the number of AmeriCorps participants. The Corporation has established new policies that may improve the overall management of the National Service Trust if the policies are fully implemented. However, the Corporation has not made policy changes to correct a key factor--how it obligates funds for education awards.
On August 29, 2005, and in the ensuing days, Hurricanes Katrina, Rita, and Wilma devastated the Gulf Coast region of the United States. Hurricane Katrina alone affected more than a half million people located within approximately 90,000 square miles spanning Louisiana, Mississippi, and Alabama, and ultimately resulted in over 1,600 deaths. Hurricane Katrina severely tested disaster management at the federal, state, and local levels and revealed weaknesses in the basic elements of preparing for, responding to, and recovering from a catastrophic disaster. Beginning in February 2006, reports by the House Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina, the Senate Homeland Security and Governmental Affairs Committee, the White House Homeland Security Council, the DHS Inspector General, DHS, and FEMA all identified a variety of failures and some strengths in the preparation for, response to, and initial recovery from Hurricane Katrina. Our findings about the response to Hurricane Katrina in a March 2006 testimony and a September 2006 report focused on the need for strengthened leadership, capabilities, and accountability to improve emergency preparedness and response. The Post-Katrina Act was enacted to address various shortcomings identified in the preparation for and response to Hurricane Katrina. The act enhances FEMA’s responsibilities and its autonomy within DHS. FEMA is to lead and support the nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. Under the act, the FEMA Administrator reports directly to the Secretary of Homeland Security; FEMA is now a distinct entity within DHS; and the Secretary of Homeland Security can no longer substantially or significantly reduce the authorities, responsibilities, or functions of FEMA or the capability to perform them unless authorized by subsequent legislation. 
The act further directs the transfer to FEMA of many functions of DHS’s former Preparedness Directorate. The statute also codified FEMA’s existing regional structure, which includes 10 regional offices, and specified their responsibilities. It also contains a provision establishing in FEMA a National Integration Center, which is responsible for the ongoing management and maintenance of the National Incident Management System (NIMS)—which describes how emergency incident response is to be managed and coordinated—and the National Response Plan (NRP)—now revised and known as the National Response Framework (NRF). In addition, the act includes several provisions to strengthen the management and capability of FEMA’s workforce. For example, the statute calls for a strategic human capital plan to shape and improve FEMA’s workforce, authorizes recruitment and retention bonuses, and establishes requirements for a Surge Capacity Force. The Post-Katrina Act extends beyond changes to FEMA’s organizational and management structure and includes legislative reforms in other emergency management areas that were considered shortcomings during Hurricane Katrina. For example, the Post-Katrina Act includes an emergency communications title that requires, among other things, the development of a National Emergency Communications Plan, as well as the establishment of working groups within each FEMA region dedicated to emergency communications coordination. The act also addresses catastrophic planning and preparedness; for example, it charges FEMA’s National Integration Center with revising the NRF’s catastrophic incident annex, and it makes state catastrophic planning a component of one grant program. In addition, the act addresses evacuation plans and exercises and the needs of individuals with disabilities. In November 2008, we reported the actions FEMA and DHS had taken in response to more than 300 distinct provisions of the Post-Katrina Act that we had identified. 
We also reported on areas where FEMA and DHS still needed to take action and any challenges to implementation that FEMA and DHS officials identified during our discussions with them. In general, we found that FEMA and DHS had made some progress in their efforts to implement the act since it was enacted in October 2006. For most of the provisions we examined, FEMA and DHS had at least preliminary efforts under way to address them. We also identified a number of areas that still required action, and noted that it was clear that FEMA and DHS had work remaining to implement the provisions of the act. Throughout this statement, unless otherwise noted, the actions reported that DHS and FEMA have taken to address provisions of the Post-Katrina Act are drawn from our November 2008 report. Our 2006 report noted that in preparing for, responding to, and recovering from any catastrophic disaster, the legal authorities, roles and responsibilities, and lines of authority at all levels of government must be clearly defined, effectively communicated, and well understood in order to facilitate rapid and effective decision making. We further noted that the experience of Hurricane Katrina showed the need to improve leadership at all levels of government to better respond to a catastrophic disaster. Specifically, we reported that in the response to Hurricane Katrina there was confusion regarding roles and responsibilities under the NRP, including the roles of the Secretary of Homeland Security and two key federal officials with responsibility for disaster response—the Principal Federal Official (PFO), and the Federal Coordinating Officer (FCO). The Post-Katrina Act clarified FEMA’s mission within DHS and set forth the role and responsibilities of the FEMA Administrator. 
These provisions, among other things, required that the FEMA Administrator provide advice on request to the President, the Homeland Security Council, and the Secretary of Homeland Security, and that the FEMA Administrator report directly to the Secretary of Homeland Security without having to report through another official. As a result of the limitations in the NRP revealed during the response to Hurricane Katrina and as required by the Post-Katrina Act, DHS and FEMA undertook a comprehensive review of the NRP. The result of this process was the issuance, in January 2008, of the NRF (the name for the revised NRP). The NRF states that it is to be a guide to how the nation conducts an all-hazards response and manages incidents ranging from the serious but purely local to large-scale terrorist attacks or catastrophic natural disasters. The NRF became effective in March 2008. As reflected in the NRF and confirmed by FEMA’s Office of Policy and Program Analysis and FEMA General Counsel, there is a direct reporting relationship between the FEMA Administrator and the Secretary of Homeland Security. According to officials in FEMA’s Office of Policy and Program Analysis, the FEMA Administrator gives emergency management advice as a matter of course at meetings with the President, the Secretary of Homeland Security, and the Homeland Security Council. The NRF also states that the Secretary of Homeland Security coordinates with other appropriate departments and agencies to activate plans and applicable coordination structures of the NRF, as required. The FEMA Administrator assists the secretary in meeting these responsibilities. FEMA is the lead agency for emergency management under NRF Emergency Support Function #5, which is the coordination Emergency Support Function for all federal departments and agencies across the spectrum of domestic incident management from hazard mitigation and preparedness to response and recovery. 
We reported in 2006 that in response to Hurricane Katrina, the Secretary of Homeland Security initially designated the head of FEMA as the PFO, who then appointed separate FCOs for Alabama, Louisiana, and Mississippi. It was not clear, however, who was responsible for coordinating the overall federal effort at a strategic level. Our fieldwork indicated that the lack of clarity in leadership roles and responsibilities resulted in disjointed efforts of federal agencies involved in the response, a myriad of approaches and processes for requesting and providing assistance, and confusion about who should be advised of requests and what resources would be provided within specific time frames. The Post-Katrina Act required that the Secretary of Homeland Security, through the FEMA Administrator, provide a clear chain of command in the NRF that accounts for the roles of the FEMA Administrator, the FCO, and the PFO. According to the NRF, the Secretary of Homeland Security may elect to designate a PFO to serve as his or her primary field representative to ensure consistency of federal support as well as the overall effectiveness of federal incident management. The NRF repeats the Post-Katrina Act’s prohibition that the PFO shall not direct or replace the incident command structure or have directive authority over the FCO or other federal and state officials. Under the NRF, the PFO’s duties include providing situational awareness and a primary point of contact in the field for the secretary, promoting federal interagency collaboration and conflict resolution where possible, presenting to the secretary any policy issues that require resolution, and acting as the primary federal spokesperson for coordinated media and public communications. According to DHS officials, at the time of our 2008 report, no PFO had been operationally deployed for any Stafford Act event since the response to Hurricane Katrina. 
DHS’s appropriations acts for fiscal years 2008 and 2009 have each included a prohibition that “none of the funds provided by this or previous appropriations acts shall be used to fund any position designated as a Principal Federal Official” for any Stafford Act declared disasters or emergencies. Our Office of General Counsel plans to address the implications of this funding prohibition in future work. According to the NRF, the primary role and responsibilities of the FCO include four major activities: representing the FEMA Administrator in the field and discharging all FEMA responsibilities for the response and recovery efforts under way; administering Stafford Act authorities, including the commitment of FEMA resources and the issuance of mission assignments to other federal departments or agencies; coordinating, integrating, and synchronizing the federal response, within the Unified Coordination Group at the Joint Field Office; and interfacing with the State Coordinating Officer and other state, tribal, and local response officials to determine the most urgent needs and set objectives for an effective response in collaboration with the Unified Coordination Group. The Catastrophic Incident Annex to the NRP (now NRF) was a source of considerable criticism after Hurricane Katrina. The purpose of this annex is to describe an accelerated, proactive national response to catastrophic incidents and establish protocols to pre-identify and rapidly deploy essential resources expected to be urgently needed. Lack of clarity about the circumstances under which the annex should be activated contributed to issues with clear roles and lines of responsibility and authority. 
Because questions surrounded whether the annex should apply only to events that occur with little or no notice, rather than to events with more notice that have the potential to evolve into incidents of catastrophic magnitude, such as a strengthening hurricane, the annex did not provide clear guidance about the extent to which the federal government should have been involved in the accelerated response role that it describes. We noted in 2006 that our review of the NRP and its catastrophic incident annex—as well as lessons from Hurricane Katrina—demonstrated the need for DHS and other federal agencies to develop robust and detailed operational plans to implement the catastrophic incident annex and its supplement in preparation for and response to future catastrophic disasters. Under the Post-Katrina Act, FEMA’s National Integration Center is statutorily responsible for revising the Catastrophic Incident Annex and for finalizing and releasing an operational supplement—the Catastrophic Incident Supplement. The annex was revised and released in November 2008. Officials from FEMA’s National Preparedness Directorate told us in March 2009 that operational annexes of the Catastrophic Incident Supplement are being updated to reflect the current response capabilities of the federal government. FEMA officials told us that the annex and its operational supplement were not activated during the 2008 hurricane season because none of the storms resulted in a catastrophic incident that would require their use. In our 2006 report, we noted that developing the capabilities needed for large-scale disasters is part of an overall national preparedness effort that is designed to integrate and define what needs to be done, where, based on what standards, how it should be done, and how well it should be done. The response to Hurricane Katrina highlighted the limitations in the nation’s capabilities to respond to catastrophic disasters. 
Various reports from Congress and others, along with our work on FEMA’s performance before, during, and after Hurricane Katrina, suggested that FEMA’s human, financial, and technological resources and capabilities were insufficient to meet the challenges posed by the unprecedented degree of damage and the resulting number of hurricane victims. Among other things, in 2006 we reported on problems during Hurricane Katrina with (1) emergency communications, (2) evacuations, (3) logistics, (4) mass care, (5) planning and training, and (6) human capital. Our 2006 report noted that emergency communications is a critical capability common across all phases of an incident. Agencies’ communications systems during a catastrophic disaster must first be operable, with sufficient communications to meet internal and emergency communication requirements. Once operable, they then should have communications interoperability whereby public safety agencies (e.g., police, fire, emergency medical services) and service agencies (e.g., public works, transportation, hospitals) can communicate within and across agencies and jurisdictions in real time as needed. Hurricane Katrina caused significant damage to the communication infrastructure—including commercial landline and cellular telephone systems—in Louisiana and Mississippi, which further contributed to a lack of situational awareness for military and civilian officials. Among other provisions aimed at strengthening emergency communications capabilities, the Post-Katrina Act established an Office of Emergency Communications (OEC) within DHS. The statutory responsibilities of OEC include, but are not limited to, conducting outreach, providing technical assistance, coordinating regional emergency communications efforts, and coordinating the establishment of a national response capability for a catastrophic loss of local and regional emergency communications. 
OEC’s stakeholder outreach efforts have included coordinating with 150 individuals from the emergency response community to develop the National Emergency Communications Plan. OEC officials stated that the outreach was primarily carried out through several organizations that represent officials from federal, state, and local governments and private-sector representatives from the communications, information technology, and emergency services sectors. Through the Interoperable Communications Technical Assistance Program, OEC has been working with Urban Area Working Groups and states to assess their communications infrastructure for gaps and determine technical requirements that can be used to design or enhance interoperable communications systems. According to the Deputy Director of OEC, OEC provided technical assistance to 13 recipients of the 2007 Urban Area Security Initiative grants by providing guidance on technical issues such as engineering solutions and drafting requests for proposals, as well as providing best practices information. In addition, OEC offered assistance to states and territories in developing their Statewide Communication Interoperability Plans and, as of August 1, 2008, had conducted plan development workshops for the 30 states and five territories that requested such help. Officials from OEC stated that they have been coordinating to minimize any overlap between the roles and responsibilities of various DHS regional staff offices related to emergency communications. According to the officials, officials from these regional staff offices plan to attend and share information through the Regional Emergency Communications Coordination Working Groups—also established by the Post-Katrina Act. OEC officials said that OEC had hired a federal employee to represent OEC at working group meetings. 
In addition, OEC officials stated their intention to hire regional interoperability coordinators for each of the 10 FEMA regions in fiscal year 2009 to work with FEMA on the activities of the working groups. FEMA officials told us in March 2009 that FEMA’s Disaster Emergency Communications Division has filled one national and nine regional positions to coordinate the working groups. FEMA’s Region II has not yet filled the regional position. As of March 2009, all working groups, with the exception of Regions II and IX, have been established. According to FEMA officials, the eight established groups have had various levels of activity, with the number of meetings ranging from one (Regions VI and X) to eight (Region IV). No updated information about specific efforts to minimize overlap or to achieve the Post-Katrina Act objectives for the working groups was provided. To establish a national response capability for a catastrophic loss of local and regional emergency communications, OEC officials told us they had been working with FEMA and the National Communications System (NCS) to coordinate policy and planning efforts relating to the existing response capability managed through the NRF’s Communication Annex, Emergency Support Function 2. According to OEC officials, an example of this coordination was the inclusion of continuity of emergency communications and response operations in the National Emergency Communications Plan. The officials also said that OEC would represent NCS in regions where the system has no presence and would support the system’s private-sector coordination role, as appropriate. In addition, the Director and Deputy Director of OEC told us that OEC, FEMA, and the NCS were developing a strategy that involved the OEC’s regional interoperability coordinators providing technical support, playing a role as needed in Emergency Support Function 2, and providing response capabilities within their designated regions, among other things. 
FEMA officials told us in March 2009 that FEMA and NCS have worked closely to develop revised operating procedures that define their roles and responsibilities under Emergency Support Function 2. In addition, they said that NCS recently hired three Regional Emergency Communications Coordinators with responsibility for coordinating with regional private-sector communications providers. The NCS coordinators are working with FEMA regional coordinators to ensure that infrastructure communications restoration efforts are supported by and consistent with FEMA tactical communications support to state and local response efforts. To improve the national response capability, FEMA officials also reported in March 2009 that they had defined an integrated response framework and five critical disaster emergency communications incident support functions—mission operations, facilities, tactical, restoration, and planning and coordination. Additionally, the officials reported acquiring assets, assessing networks, and establishing prescripted mission assignments to enhance response capabilities. Finally, the officials said that FEMA’s Disaster Emergency Communications Division has coordinated the development of 24 state and territory disaster emergency communications annexes. They noted that some of these state and territorial annexes were used in Hurricanes Gustav and Ike, as well as during the Presidential Inauguration, to support response activities, understand state and local communications capabilities, and prepare for any shortfalls that may arise. In terms of tactical support, FEMA officials told us that FEMA’s Mobile Emergency Response Support mission carried out a variety of support activities during Hurricanes Gustav and Ike. 
For example, among other activities reported by the officials, FEMA provided mobile emergency communications infrastructure to support continuity of local government and supported maintenance and repair of communications equipment for local first responders on Galveston Island. We reported in 2006 that by definition, a catastrophic disaster like Hurricane Katrina would impact a large geographic area, necessitating the evacuation of many people—including vulnerable populations, such as hospital patients, nursing home residents, and transportation-disadvantaged populations who were not in such facilities. The Post-Katrina Act amended the Stafford Act to authorize transportation assistance to relocate displaced individuals to and from alternate locations for short- or long-term accommodations, or to return them to their predisaster primary residences. FEMA officials in the Disaster Assistance Directorate told us that they have developed a draft policy for implementing the transportation assistance authority. They noted that it would require implementation of proposed regulatory changes before becoming effective, and as of March 2009, it was on hold due to these required changes. In addition, they noted that according to FEMA’s July 2006 Mass Sheltering and Housing Assistance Strategy, if the scale of the evacuation overwhelms affected states’ sheltering capabilities, FEMA will coordinate and provide air or surface transportation in support of interstate evacuation. If the evacuated area is without extensive damage to residences, as stated in the strategy, FEMA will coordinate and fund return mass transportation to the point of transportation origin. If the evacuated area suffered extensive damage to residences, eligible evacuees are authorized, with host state consent, to use FEMA funding known as Other Needs Assistance to purchase return transportation when they are able to do so. 
The Post-Katrina Act authorized grants made to state, local, and tribal governments through the State Homeland Security Program or the Urban Area Security Initiative to be used to establish programs for mass-evacuation plan development and maintenance, preparation for execution of mass evacuation plans, and exercises. According to the Director of Grants Development and Administration, FEMA informed state, local, and tribal governments that they may use the grant awards to assist mass evacuation planning via the fiscal year 2008 Homeland Security Grant Program written guidance, which covers both grants. The act also required the FEMA Administrator, in coordination with the heads of other federal agencies, to provide evacuation preparedness technical assistance to state, local, and tribal governments. FEMA developed the Mass Evacuation Incident Annex to the NRF, which provides an overview of mass evacuation functions, agency roles and responsibilities, and overall guidelines for the integration of federal, state, tribal, and local support for the evacuation of large numbers of people during incidents requiring a coordinated federal response. However, according to officials in FEMA’s Disaster Operations Directorate, as of March 10, 2009, FEMA had not finalized the Mass Evacuation Incident Annex Operational Supplement to the NRF to provide additional guidance for mass evacuations. Officials in FEMA’s Disaster Operations Directorate also noted that the states participating in FEMA’s Catastrophic Disaster Planning Initiative— an effort to strengthen response planning and capabilities for select scenarios (e.g., a Category 5 hurricane making landfall in southern Florida)—benefit from detailed federal, state, and local catastrophic planning that includes examination of evacuation topics. These states include Florida, Louisiana, California, and the eight Midwestern states in the New Madrid Seismic Zone. 
National Preparedness Directorate officials also told us that FEMA had conducted mass evacuation workshops in Georgia and Florida and had provided technical assistance to the state of Louisiana, helping to develop a mass evacuation plan. FEMA officials told us that this plan—the Gulf Coast Evacuation Plan—was successfully implemented during Hurricane Gustav to evacuate 2 million people from New Orleans within 48 hours of the incident using a multimodal approach (air, bus, and rail) and to enable their return within 4 days. The Post-Katrina Act requires FEMA to provide mass evacuation planning assistance to institutions that house individuals with special needs upon request by a state, local, or tribal government. FEMA officials in the Disaster Operations Directorate told us that they had not received any requests for such assistance. These officials said that the draft Mass Evacuation Incident Annex Operational Supplement will include a tab on evacuation issues related to people with special needs and, once issued, can provide guidance to hospitals, nursing homes, and other institutions that house individuals with special needs. Officials from FEMA’s National Preparedness Directorate also noted that the Homeland Security Preparedness Technical Assistance Program provides technical assistance upon request to jurisdictions interested in planning for mass evacuations. Additionally, they said the directorate was developing evacuation and reentry planning guidance for use by state and local governments, which is scheduled for interim release in the summer of 2009. In establishing a Disability Coordinator within FEMA to ensure that the needs of individuals with disabilities are addressed in emergency preparedness and disaster relief, the Post-Katrina Act charged the Disability Coordinator with specific evacuation-related responsibilities, among other things. 
First, the act required the coordinator to ensure the coordination and dissemination of model evacuation plans for individuals with disabilities. Second, the act charged the coordinator with ensuring the availability of accessible transportation options for individuals with disabilities in the event of an evacuation. At the time of our 2008 report, FEMA had efforts under way for each provision, but provided little specific detail on the status of those efforts. The Disability Coordinator told us that FEMA was in the process of developing model evacuation plans for people with disabilities. She also told us that FEMA had begun to work with state emergency managers to help develop evacuation plans that include accessible transportation options, and that FEMA was working with states to develop paratransit options as well as to coordinate the use of accessible vans for hospitals and nursing homes. In 2006, we conducted work examining the nation’s efforts to protect children after the Gulf Coast hurricanes and identified evacuation challenges for this population. We noted that thousands of children were reported missing to the National Center for Missing and Exploited Children, which used its trained investigators to help locate missing children after the evacuation. Officials from this Center stated that both the American Red Cross and FEMA had some information on the location of children in their databases; however, they said it was difficult to obtain this information because of privacy concerns. These officials told us that standing agreements for data sharing among organizations tracking missing children, the Red Cross, and FEMA could help locate missing persons more quickly. The Post-Katrina Act established two mechanisms to help locate family members and displaced children. 
First, the act established the National Emergency Child Locator Center within the National Center for Missing and Exploited Children and enumerated the responsibilities of the center, among other things, to provide technical assistance in locating displaced children and assist in the reunification of displaced children with their families. Second, the act required the FEMA Administrator to establish the National Emergency Family Registry and Locator System to help reunify families separated after an emergency or major disaster. The National Emergency Child Locator Center and the Family Registry and Locator System have each established a hotline and a Web site. The family locator system has a mechanism to redirect any request to search for or register displaced children to the National Emergency Child Locator Center. FEMA officials told us in March 2009 that the family locator system was activated and used during Hurricanes Gustav and Ike after it was determined that the coastal evacuations of Louisiana and Texas would involve millions of people. Once activated, FEMA’s Public Affairs Office informed the media in the affected areas about the availability of the service. Officials noted that use of the family locator system during Hurricane Gustav resulted in 558 registrants and 862 searches, and use during Hurricane Ike resulted in 1,162 registrants and 1,034 searches. The National Emergency Child Locator Center was not activated, but three referrals (one during Hurricane Gustav and two during Hurricane Ike) were forwarded to the National Center for Missing and Exploited Children through the family locator system Web site. 
At the time of our 2008 report, FEMA had established a memorandum of understanding (MOU), effective March 6, 2007, with the Department of Justice, the Department of Health and Human Services, the National Center for Missing and Exploited Children, and the American Red Cross that, among other things, requires the signatory agencies to participate in a cooperative agreement and requires FEMA, through the National Emergency Family Registry and Locator System, to provide relevant information to the National Emergency Child Locator Center. The Disaster Assistance Directorate Unit Leader told us that the child locator center was, at that time, in the process of finalizing cooperative agreements with federal and state agencies and other organizations such as the American Red Cross to help implement its mission. FEMA officials told us that, as of March 2009, a cooperative agreement between FEMA and the National Center for Missing and Exploited Children was being finalized. They said they expected the agreement to be tested during the 2009 hurricane season. We reported in 2006 that our work and that of others indicated that logistics systems—the capability to identify, dispatch, mobilize, and demobilize and to accurately track and record available critical resources throughout all incident management phases—were often totally overwhelmed by Hurricane Katrina. Critical resources apparently were not available, properly distributed, or provided in a timely manner. The result was duplication of deliveries, lost supplies, or supplies never being ordered. FEMA is responsible for coordinating logistics during disaster response efforts, but during Hurricane Katrina, FEMA quickly became overwhelmed, in part because it lacked the people, processes, and technology to maintain visibility—from order through final delivery—of the supplies and commodities it had ordered. 
Similarly, our 2006 work examining the coordination between FEMA and the Red Cross to provide relief to disaster victims found that FEMA did not have a comprehensive system to track requests for assistance it received from the Red Cross on behalf of voluntary organizations and state and local governments for items such as water, food, and cots. The Post-Katrina Act required FEMA to develop an efficient, transparent, and flexible logistics system for procurement and delivery of goods and services necessary for an effective and timely emergency response. In November 2008, we reported that FEMA had taken multiple actions to improve its logistics management. First, seeking to develop an effective and efficient logistics planning and operations capability, FEMA elevated its logistics office from the branch to the directorate level, establishing the Logistics Management Directorate (LMD) in April 2007. Second, FEMA and the U.S. General Services Administration—FEMA’s co-lead for Emergency Support Function 7—sponsored the National Logistics Coordination Forum in March 2008. The forum was intended to open a dialogue between the sponsors and their logistics partners, and to discuss how to better involve the private sector in planning for and recovering from disasters. The forum was attended by representatives from other federal agencies, public and private sector groups, nongovernmental organizations, and other stakeholders. Third, to improve its supply chain management, FEMA brought in a supply chain expert from the United Parcel Service through its Loaned Executive Program. FEMA also has a Private Sector Office to exchange information on best practices and to facilitate engagement with the private sector. In addition, FEMA established a Distribution Management Strategy Working Group in January 2008 to analyze and develop a comprehensive distribution and supply chain management strategy. 
Finally, in 2007, FEMA conducted the Logistics Management Transformation Initiative, a comprehensive assessment of FEMA’s logistics planning, processes, and technology. LMD officials intend for this initiative to help inform the development of a long-term strategy to transform FEMA’s business processes and identify information technology development opportunities. According to LMD officials, FEMA plans to complete this transformation by 2009, and review and refine business processes by 2014. We noted in our November 2008 report, as an area to be addressed, that the DHS Office of Inspector General reported in May 2008 that, while FEMA had developed a logistics planning strategy that calls for developing three levels of logistics plans (strategic, operational, and tactical), the FEMA Incident Logistics Concept of Operations and a Logistics Management Operations Manual were still in draft. Our 2006 findings about logistics challenges included FEMA’s inability to maintain visibility over supplies, commodities, and requests for assistance. As of August 1, 2008, FEMA had fully implemented Total Asset Visibility (TAV) programs in FEMA Regions IV and VI to manage and track, electronically and in real time, the movement of its disaster commodities and assets. At that time, according to FEMA LMD officials, TAV was partially available in the other eight FEMA regions. FEMA officials told us in March 2009 that the strategy to fully implement TAV by 2011 was undergoing a comprehensive review. LMD had restricted spending to critical mission functions, pending completion of the review. In the meantime, they said LMD would focus on capabilities that could have the most significant impact during the 2009 hurricane season, specifically, the aspect of TAV used for warehouse management and the aspect that would allow FEMA to use the system to order materials from, and track shipments of, its response partners. 
Initially LMD is working with four partners—the Defense Logistics Agency, the General Services Administration, the U.S. Army Corps of Engineers, and the American Red Cross. According to LMD officials, at the time of our November 2008 report, the aspect of TAV FEMA uses for warehouse management was only available at distribution centers in Atlanta, Georgia, and Fort Worth, Texas. The officials stated that FEMA expected to deploy the warehouse management portion of TAV to the other six FEMA distribution centers— in Berryville, Virginia; Frederick, Maryland; San Jose, California; Guam; Hawaii; and Puerto Rico—in fiscal years 2009 and 2010. Further, the officials said that shipments from FEMA’s logistics partners were not yet tracked through TAV, but FEMA and the four initial partners were working to provide full visibility of critical shipments to disaster areas. FEMA officials told us in March 2009 that during Hurricanes Gustav and Ike, they used TAV to create and track commodity requirements fulfilled by FEMA or its partners and to track FEMA shipments in-transit. The officials noted that they were not able to track shipments from partners before they arrived at FEMA sites but that deficiency could be corrected when the partner-tracking aspect of TAV was fully implemented. They also said they used TAV’s warehouse management system, where available, to track and manage shipments, receipts and inventory for eight critical commodities daily. Other commodities that could not yet be tracked through TAV’s warehouse management system had to be manually entered into the system. Finally, they said they used TAV to track in-transit visibility of ambulances, buses, and temporary housing units. In March 2009, FEMA officials also shared four major lessons learned and planned corrective actions resulting from the response to Hurricanes Ike and Gustav. 
The four lessons learned related to: (1) inconsistent use of TAV in the field during Hurricane Ike, (2) lack of TAV specialists to support all distribution sites, (3) slow and unreliable connectivity to the TAV system, and (4) use of standard operating procedures. To address inconsistent use of TAV, FEMA officials say they have increased standardized training and awareness at all levels within FEMA and have developed a TAV communications plan intended to increase awareness of TAV capabilities. To address issues with the availability of TAV specialists, FEMA officials told us they have identified and screened additional TAV specialists, are planning to hire additional Disaster Assistance Employees, and are planning to cross-train additional employees. To address connectivity issues, FEMA officials said they are testing use of portable satellite equipment and scanners that are hardwired to a satellite. They also said they are seeking to use extended wireless access to support operations during the 2009 hurricane season. To address issues with standard operating procedures, FEMA officials said they are reviewing and updating the procedures and reemphasizing the appropriate use of TAV through training. Mass care is the capability to provide immediate shelter, feeding centers, basic first aid, and bulk distribution of needed items and related services to affected persons. As we reported in 2006, during Hurricane Katrina, charities and government agencies that provide human services, supported by federal resources, helped meet the mass care needs of the hundreds of thousands of evacuees. The Post-Katrina Act contained multiple provisions aimed at strengthening capabilities to provide for immediate mass care and sheltering needs, particularly for special needs populations. 
The Post-Katrina Act amended the Stafford Act to authorize the President to provide accelerated federal assistance in the absence of a specific request where necessary to save lives, prevent human suffering, or mitigate severe damage in a major disaster or emergency. The act required the President to promulgate and maintain guidelines to assist governors in requesting the declaration of an emergency in advance of a disaster event. FEMA issued an interim Disaster Assistance Policy in July 2007, which provides guidelines to assist governors in requesting the declaration of an emergency in advance of a disaster. According to officials in FEMA’s Disaster Operations Directorate, FEMA has established a program to preposition goods and services in advance of a potential disaster. For example, the officials explained that FEMA was able to respond quickly to a state that had been affected by ice storms because the agency, acting without an initial request from the state, had prepositioned goods in advance of the storms. FEMA officials told us FEMA was reviewing a draft policy directive that would allow FEMA to provide federal assistance without a declaration if a state would agree to assume the normal cost share after a declaration has been made or to assume total cost if no declaration is made. In establishing a Disability Coordinator within FEMA to ensure that the needs of individuals with disabilities are addressed in emergency preparedness and disaster relief, the Post-Katrina Act charged the coordinator with coordinating and disseminating best practices for special needs populations. The Disability Coordinator shared with us two such practices that were in progress at the time of our November 2008 report. First, FEMA was developing “go kits” for people with developmental impairments, the hearing impaired, and the blind. The go kits are to contain visual and hearing devices. 
For example, the go kit for the hearing impaired will include a teletypewriter, a keyboard with headphones, and a clipboard with sound capabilities. The go kits are to be stored in the regions and include a list of their contents and directions for use. Second, the Disability Coordinator said FEMA was developing a handbook for federal, state, and local officials to use in the field to help them better accommodate those with disabilities. In addition, the Post-Katrina Act required that the FEMA Administrator, in coordination with the National Advisory Council, the National Council on Disabilities, the Interagency Coordinating Council on Preparedness and Individuals with Disabilities, and the Disability Coordinator, develop guidelines to accommodate individuals with disabilities. FEMA has published a reference guide titled Accommodating Individuals with Disabilities in the Provisions of Disaster Mass Care, Housing, and Human Services. The reference guide describes existing legal requirements and standards relating to access for people with disabilities, with a focus on equal access requirements related to mass care, housing, and human services. The reference guide states that it is not intended to satisfy all of the guideline requirements contained in the Post-Katrina Act. In addition to the reference guide, FEMA released for public comment guidance titled Interim Emergency Management Planning Guide for Special Needs Populations. This interim guidance—also known as the Comprehensive Preparedness Guide (CPG) 301—addressed some of the requirements contained in the Post-Katrina Act, such as access to shelters and portable toilets and access to emergency communications and public information. However, it did not address other requirements, such as access to first-aid stations and mass-feeding areas. FEMA officials told us in March 2009 that they had received final comments on CPG 301 and expected to release the final document in spring 2009. 
In addition, FEMA officials stated that they have developed guidance for the Functional Needs Support Unit, which they expect to publish by the end of March 2009. According to the interim version of CPG 301, the Functional Needs Support guidance will serve as a template for developing sheltering plans for special needs populations. Once the Functional Needs Support program is in place, the Functional Needs Support Unit can be used in shelters, so that trained and certified shelter staff will be assigned to serve as caregivers and provide the assistance normally supplied by a family member or attendant. FEMA officials told us that the agency will contract to provide training to states and localities on how to implement the Functional Needs Support guidance—such as how to provide staff, caregivers, durable medical equipment, and facility access. FEMA officials stated that, in the absence of completed guidance for the 2008 hurricane season, shelters received the Justice Department’s Americans with Disabilities Act Checklist for Emergency Shelters. They also said that the 2008 hurricane season highlighted the need for a standardized but scalable approach to sheltering special needs populations, with attention given to durable medical equipment, caregivers, trained staff, and special diets for evacuees. As we reported in 2006, ensuring that needed capabilities are available requires effective planning and coordination, as well as training and exercises, in which the capabilities are realistically tested, and problems identified and lessons learned and subsequently addressed in partnership with other federal, state, and local stakeholders. Clear roles and coordinated planning are necessary, but not sufficient by themselves to ensure effective disaster management. It is important to test the plans and participants’ operational understanding of their roles and responsibilities through robust training and exercise programs. 
The Post-Katrina Act required the FEMA Administrator, in coordination with the heads of appropriate federal agencies, the National Council on Disabilities, and the National Advisory Council, to carry out a national training program and a national exercise program. FEMA’s National Preparedness Directorate has established a National Exercise Program. According to officials from FEMA’s National Preparedness Directorate, the National Exercise Program conducts four Principal-Level Exercises and one National-Level Exercise annually. These FEMA officials said that the Principal-Level Exercises are discussion-based (i.e., tabletop or seminar) to examine emerging issues and that one is conducted in preparation for the annual National-Level Exercise. The National-Level Exercises are operations-based exercises (drills, functional exercises, and full-scale exercises) intended to evaluate existing national plans and policies, in concert with other federal and nonfederal entities. We have ongoing work examining the National Exercise Program, and we expect to publish a report on the results of our work this spring. FEMA’s Deputy for National Preparedness told us that DHS and FEMA were developing the Homeland Security National Training Program to oversee and coordinate homeland security training programs, increase training capacity, and ensure standardization across programs. The Post-Katrina Act also required the President to establish a National Exercise Simulation Center (NESC) that uses a mix of live, virtual, and constructive simulations to, among other things, provide a learning environment for the homeland security personnel of all federal agencies, and that uses modeling and simulation for training, exercises, and command and control functions at the operational level. According to FEMA officials, FEMA has been using FEMA Simulation Centers, Department of Defense facilities, and other facilities to support exercise simulation while it develops the NESC. 
For example, FEMA officials said that FEMA has provided initial exercise simulation support for exercises requiring the two highest levels of federal interagency participation in the National Exercise Program. According to an official in FEMA’s National Integration Center, the NESC is currently under development and is estimated to take 3 to 4 years to fully establish. The Post-Katrina Act also required the FEMA Administrator, in coordination with the National Council on Disabilities and the National Advisory Council, to establish a remedial action management program to, among other things, track lessons learned and best practices from training, exercises, and actual events. FEMA launched the Remedial Action Management Program (RAMP) in 2003 and released it as a Web application for all FEMA intranet users in January 2006. RAMP uses FEMA facilitators to conduct sessions immediately after exercises or events, and these facilitators are responsible for developing issue descriptions for remedial actions. In addition, FEMA has a related program called the Corrective Action Program (CAP) that is to be used for governmentwide corrective action tracking by federal, state, and local agencies. While RAMP is FEMA’s internal remedial action program, CAP is designed to serve as an overarching program for linking federal, state, and local corrective actions. FEMA developed RAMP prior to enactment of the Post-Katrina Act. However, FEMA has not yet established any mechanisms to coordinate ongoing implementation of RAMP or CAP with the National Council on Disabilities or the National Advisory Council. We have ongoing work related to FEMA’s efforts to track corrective actions from exercises and actual events. We plan to publish a report this spring. 
In 2006, we reported that the various congressional reports and our own work on FEMA’s performance before, during, and after Hurricane Katrina suggest that FEMA’s human resources were insufficient to meet the challenges posed by the unprecedented degree of damage and the resulting number of hurricane victims. The Post-Katrina Act requires the FEMA Administrator to prepare and submit to Congress a plan to establish and implement a Surge Capacity Force for deployment to disasters, including catastrophic incidents. The act requires the plan to include procedures for designation of staff from other DHS components and executive agencies to serve on the Surge Capacity Force. It also requires that the plan ensure that the Surge Capacity Force includes a sufficient number of appropriately credentialed individuals capable of deploying to disasters after being activated, as well as full-time, highly trained, credentialed individuals to lead and manage. The Director of FEMA’s Disaster Reserve Workforce explained that, unlike in the military model, FEMA’s disaster reservists are the primary resource for disaster response and recovery positions, filling 70-80 percent of all Joint Field Office positions. FEMA has interpreted Surge Capacity Force to include its Disaster Reserve Workforce of 5,000-6,000 reserve Disaster Assistance Employees, who are full-time and contract staff. If additional capacity is necessary, another approximately 2,000 Disaster Assistance Employees are available to perform immediate, nontechnical functions that require large numbers of staff. 
Other sources FEMA has identified include local hires—additional staff hired from the affected area to perform the same functions as disaster reservists; contract support for activities that require specialized skill sets and for general disaster assistance functions; other full-time FEMA staff detailed to perform disaster assistance work; and other resources—particularly employees from other DHS components—detailed to perform disaster assistance work. FEMA’s Disaster Reserve Workforce provided information on the deployment of the FEMA workforce in response to Hurricanes Gustav and Ike, as outlined in table 1. FEMA contracted to perform a baseline assessment and preliminary design for professionalizing the Disaster Reserve Workforce and its supporting program management function, including FEMA’s Surge Capacity Force planning. The contractor developed a preliminary design for the Disaster Reserve Workforce, which included an organizational concept, workforce size and composition, concept of operations, and a policy framework. An Interim Surge Capacity Force Plan was announced in a meeting of the DHS Human Capital Council in March 2008 and communicated to the heads of DHS components in a May 2008 memorandum from the FEMA Administrator. Despite the initial actions FEMA has taken to assess its baseline capabilities and draft an interim Surge Capacity Force Plan, according to the Director of the Disaster Workforce Division, FEMA has not yet provided Congress with a plan for establishing and implementing a Surge Capacity Force. The director stated that her goal is to submit a plan to implement a Surge Capacity Force by summer 2009 with timelines and information on select—but not all—positions in the disaster reserve workforce. In May 2008, FEMA sent a list of job titles and positions needed in the Surge Capacity Force to all DHS Human Capital Officers and asked them to identify approximately 900 employees throughout DHS for the Surge Capacity Force. 
According to the director of the Disaster Reserve Workforce Division, the initial DHS Agency Surge Capacity designation lists were submitted in June 2008. However, she stated that upon review, there were inconsistencies with the different agencies’ interpretation of requirements for personnel, training, and skill sets. Therefore, a Surge Capacity Force Working Group met to review surge staffing requirements and to develop a timeline for the development of processes and a Concept of Operations Plan. Agency participants in the working group included FEMA, the Transportation Security Administration, and U.S. Citizenship and Immigration Services. The Disaster Reserve Workforce Division told us that, as of March 2009, a draft of the Concept of Operations Plan was being reviewed within these three component agencies and a final product is expected to be delivered for DHS review by June 30, 2009. According to the Disaster Reserve Workforce Division, because internal FEMA resources were sufficient to respond effectively to Hurricanes Gustav and Ike, FEMA did not require the assistance of other federal agency employees for those events. The Disaster Reserve Workforce Division, in partnership with FEMA’s Emergency Management Institute, has been developing standardized credentialing plans, which will incorporate existing position task books for the Disaster Assistance Employee workforce (a total of 230 positions organized in 23 cadres). FEMA officials told us in March 2009 that they had either initiated development of or completed credentialing plans for 102 positions. They said they expected to complete the remaining credentialing plans for all cadres and positions by spring 2010. Disaster Reserve Workforce Division officials explained that development of the credentialing plans in conjunction with the position task books will highlight gaps in the training curriculum that will assist in prioritizing curriculum development. 
Apart from the Disaster Reserve Workforce Division’s credentialing initiative, the FEMA workforce is to be credentialed by the National Preparedness Directorate’s NIMS credentialing program, the administrative process for validating the qualifications of personnel, assessing their background, and authorizing their access to incidents involving mutual aid between states. FEMA officials told us in March 2009 that the NIMS Credentialing Guideline was posted to the Federal Register and issued for public comment on December 22, 2008, and the comment period closed on January 21, 2009. They said comments had been collected and were to be adjudicated on March 11, 2009. According to the officials, following adjudication, the guideline is to be revised and submitted to the Executive Secretariat for formal FEMA adoption and release. According to FEMA officials, experiences from the 2008 hurricane season confirmed the basic need for the credentialing program. The Post-Katrina Act requires each FEMA Regional Office to staff and oversee one or more strike teams within the region to serve as the focal point of the federal government’s initial response efforts and to build federal response capabilities within their regions. The act also requires the President, acting through the FEMA Administrator, to establish emergency response teams (at least three at the national level and a sufficient number at the regional level). According to Disaster Operations Directorate officials, “strike teams” and “emergency response teams,” the Post-Katrina Act’s terms for the support teams deployed to assist in major disasters and emergencies under the Stafford Act, are now called Incident Management Assistance Teams (IMAT). IMATs are interagency national- or regional-based teams composed of subject matter experts and incident-management professionals, and are designed to manage and coordinate the national response to emergencies and major disasters. 
According to the officials, Regional Administrators oversee IMATs based within their regions. IMAT personnel are intended to be permanent, full-time employees whose duties and responsibilities are solely focused on their IMAT functions. The officials said that the IMATs’ other functions include working with state and local emergency managers to plan, prepare, and train for disasters; running exercises; and building relationships with emergency managers and other IMAT personnel. National IMATs are to consist of 26 positions, including a designated team leader and senior managers for operations, logistics, planning, and finance and administration sections. This sectional organization mirrors the incident command structure presented in the NIMS. FEMA has established a national IMAT in the National Capital Region and a second national IMAT in Sacramento, California, according to FEMA officials in the Disaster Operations Directorate. At the regional level, Disaster Operations Directorate officials said that IMATs had been established in FEMA Regions II, IV, V, and VI. According to these officials, they are in the process of establishing a fifth regional IMAT in Region VII, to become operational later this year. They said that FEMA’s intention is to establish IMATs in all 10 regions by the end of fiscal year 2010 and a third national team in fiscal year 2011. According to FEMA officials in the Disaster Operations Directorate, although the National IMAT established in the National Capital Region was fully staffed, when we reported in November 2008, some IMAT positions were not yet filled with permanent full-time employees, but rather with FEMA detailees who had been selected for their advanced training and expertise. In general, the detailees were to provide guidance and support to the permanent full-time employees until the teams were fully staffed with personnel capable of managing their respective IMATs. 
According to officials in FEMA’s Disaster Operations Directorate, at the time of our November 2008 report, FEMA had procured personal equipment for IMAT members and had ordered communications vehicles. In addition, the National IMAT had participated in the National-Level Exercise 2008. Also, Disaster Operations Directorate officials told us that IMATs supported a number of disasters and special events in 2008 (including recent storms and hurricanes and the Democratic and Republican National Conventions). FEMA has established mandatory training courses for all IMAT personnel, in addition to the standard training required for all FEMA employees. According to officials in FEMA’s Disaster Operations Directorate, they have been implementing a credentialing program for the IMATs. FEMA planned to incorporate training and credentialing for all hazards by identifying core competencies required for each IMAT position and assessing the competencies against existing task descriptions to guide the development of mandatory training and credentialing plans. According to these officials, as of March 2009, a draft of the credentialing plan was under review and they indicated that the credentialing process will be consistent with FEMA’s Disaster Workforce Credentialing Plan. At the time of our November 2008 report, Disaster Operations Directorate officials told us that FEMA was finalizing an IMAT doctrine and a Concept of Operations Plan. However, FEMA did not describe to us how it established or intended to establish target capabilities for the IMATs, which are required by the Post-Katrina Act as the basis for determining whether the IMATs consist of an adequate number of properly planned, organized, equipped, trained, and exercised personnel. 
Our 2006 report noted that when responding to the needs of the victims of a catastrophic disaster, FEMA must balance controls and accountability mechanisms with the immediate need to deliver resources and assistance in an environment where the agency’s initial response efforts must focus on life-saving and life-sustaining tasks. We reported in February 2006 that weak or nonexistent internal controls in processing applications left the government vulnerable to fraud and abuse, such as duplicative payments. We estimated that, through February 2006, FEMA made about $1 billion (roughly 16 percent of payments) in improper and potentially fraudulent payments to applicants who used invalid information to apply for disaster assistance. The Post-Katrina Act required the development of a system, including an electronic database, to counter improper payments in the provision of assistance to individuals and households. FEMA has established a process to identify and collect duplicative Individuals and Households Program (IHP) payments. This process includes, among other things, FEMA’s disaster assistance database automatically checking specific data fields in every applicant record for potentially duplicate applications, having a FEMA caseworker and a supervisor review potentially duplicate applications to determine if FEMA is entitled to collect a payment already made, and notifying the applicant of FEMA’s decision to collect a duplicate payment while providing an appeal process for the applicant. In addition, FEMA provides applicants with a copy of its application and a program guide, Help after a Disaster: Applicant’s Guide to the Individuals and Households Program. Updated and reissued in July 2008, this guide provides applicants with information on the proper use of IHP payments. 
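The automated duplicate check described above—the database flagging applicant records that match on key data fields so a caseworker can review them—can be illustrated with a minimal sketch. The field names and matching key here are hypothetical, chosen only for illustration; FEMA's actual system checks its own set of data fields and applies its own business rules.

```python
from collections import defaultdict

# Hypothetical applicant records; the fields shown are illustrative,
# not FEMA's actual schema.
applications = [
    {"app_id": "A-001", "ssn": "123-45-6789", "damaged_address": "10 Oak St, New Orleans, LA"},
    {"app_id": "A-002", "ssn": "987-65-4321", "damaged_address": "22 Pine Ave, Gulfport, MS"},
    {"app_id": "A-003", "ssn": "123-45-6789", "damaged_address": "10 Oak St, New Orleans, LA"},
]

def flag_potential_duplicates(apps):
    """Group applications on key identifying fields; any group containing
    more than one application is flagged for caseworker review rather
    than automatically rejected."""
    groups = defaultdict(list)
    for app in apps:
        key = (app["ssn"], app["damaged_address"].lower())
        groups[key].append(app["app_id"])
    return [ids for ids in groups.values() if len(ids) > 1]

print(flag_potential_duplicates(applications))  # [['A-001', 'A-003']]
```

Note that the sketch only surfaces candidates; consistent with the process FEMA describes, the decision to collect an already-made payment would rest with a caseworker and supervisor, with an appeal path for the applicant.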
Moreover, according to FEMA, the agency established identity verification processes, which include verifying that the applicant’s social security number is valid, matches the applicant’s name, and does not belong to a deceased individual. Further, FEMA reported that it has implemented procedures to validate that the address an applicant reports as damaged was the applicant’s primary residence during the time of the disaster and that the address is located within the disaster-affected area. According to FEMA’s Information Technology Report submitted to Congress in September 2007 under section 640 of the Post-Katrina Act, FEMA uses the National Emergency Management Information System to perform numerous disaster-related activities, including providing disaster assistance to individuals and communities. Although this system interfaces with FEMA’s financial accounting system through a special module, FEMA has not yet taken action to ensure that applicant information collected in the system is integrated with disbursement and payment records to determine ineligible applicants. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. In addition to the contact named above, Leyla Kazaz, Assistant Director, and Kathryn Godfrey, Analyst-in-Charge, managed this assignment. Lara Kaskie, Christine Davis and Janet Temko made significant contributions to the work. Other contributors to the work include Jonathan Tumin, Sara Margraf, and Michael Blinde. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Hurricane Katrina severely tested disaster management at the federal, state, and local levels and revealed weaknesses in the basic elements--leadership, capabilities, and accountability--of preparing for, responding to, and recovering from disasters. In its 2006 work on the response to Hurricane Katrina, GAO noted that these elements needed to be strengthened. In October 2006, Congress enacted the Post-Katrina Act to address issues identified in the response to Hurricane Katrina. GAO reported in November 2008 that the Department of Homeland Security (DHS) and the Federal Emergency Management Agency (FEMA) had at least preliminary efforts under way to address most of the provisions, but also identified a number of areas that required further action. This statement discusses select issues within the basic elements related to (1) findings from the response to Hurricane Katrina, (2) provisions of the Post-Katrina Act, and (3) specific actions DHS and FEMA have taken to implement these provisions. GAO's comments are based on GAO products issued from February 2006 through November 2008, and selected updates in March 2009. To obtain updated information, GAO consulted program officials. GAO reported in September 2006 that the experience of Hurricane Katrina showed the need to improve leadership at all levels of government to respond to catastrophic disasters. For example, GAO reported that, in the response to Hurricane Katrina, there was confusion over roles and responsibilities under the National Response Plan, including the roles of the DHS Secretary, the FEMA Administrator, the Principal Federal Official (PFO), and the Federal Coordinating Officer (FCO). The Post-Katrina Act clarified FEMA's mission within DHS and set forth the role and responsibilities of the FEMA Administrator. The act also required that the FEMA Administrator provide a clear chain of command that accounts for these roles. 
In revising the National Response Plan--now called the National Response Framework--FEMA articulated specific roles for the PFO and FCO, which are described in GAO's November 2008 report. GAO reported in September 2006 that various congressional reports and GAO's own work on FEMA's performance before, during, and after Hurricane Katrina suggested that FEMA's capabilities were insufficient to meet the challenges posed by the degree of damage and the number of hurricane victims. The capabilities issues GAO identified related to, among others, (1) emergency communications, (2) evacuations, (3) logistics, (4) mass care, (5) planning and training, and (6) human capital. The Post-Katrina Act included a variety of provisions that related to these issues. For example, related to emergency communications, the act established an Office of Emergency Communications (OEC) within DHS. GAO reported in November 2008 that, in response to specific responsibilities outlined in its authorizing provision, OEC has been working with Urban Area Working Groups and states to assess gaps in communications infrastructure and to determine technical requirements to enhance interoperable communications systems. GAO reported in February 2006 that accountability mechanisms--specifically, internal controls--were lacking or nonexistent in processing applications for individual and household assistance following Hurricane Katrina, which left the government vulnerable to fraud and abuse. For example, GAO estimated that through February 2006, FEMA made about 16 percent ($1 billion) in improper and potentially fraudulent payments to applicants who used invalid information to apply for disaster assistance. The Post-Katrina Act required the development of a system, including an electronic database, to counter improper payments. 
GAO reported in November 2008 that FEMA established a process to identify and collect duplicative payments by, among other things, enabling its disaster assistance database to check automatically for duplicate applications.
Because of a number of security incidents, Diplomatic Security’s missions and resources have grown tremendously in the past decade. The growth in Diplomatic Security’s mission includes key areas such as enhanced physical security and investigations. Following the 1998 attacks on U.S. Embassies in Kenya and Tanzania, Diplomatic Security determined that more than 85 percent of U.S. diplomatic facilities did not meet its security standards and were therefore vulnerable to terrorist attack; in response, Diplomatic Security added many of the physical security measures currently in place at most U.S. missions worldwide, such as additional barriers, alarms, public address systems, and enhanced access procedures. Since 1998, there have been 39 attacks aimed at U.S. Embassies, Consulates, or Chief of Mission personnel (not including regular attacks against the U.S. Embassy in Baghdad since 2004). The nature of some of these attacks has led Diplomatic Security to further adapt its security measures. Moreover, the attacks of September 11, 2001, underscored the importance of upgrading Diplomatic Security’s domestic security programs and enhancing its investigative capacity. Furthermore, following the onset of U.S. operations in Iraq in 2003, Diplomatic Security has had to provide security in the Iraq and Afghanistan war zones and other increasingly hostile environments such as Pakistan. Diplomatic Security funding and personnel have also increased considerably in conjunction with its expanding missions. Diplomatic Security reports that its budget has increased from about $200 million in 1998 to $1.8 billion in 2008. In addition, the size of Diplomatic Security’s direct-hire workforce has doubled since 1998. The number of direct-hire security specialists (special agents, engineers, technicians, and couriers) increased from under 1,000 in 1998 to over 2,000 in 2009, and the number of direct-hire civil service personnel increased from 258 to 592. 
At the same time, Diplomatic Security has increased its use of contractors to support its security operations worldwide, specifically through increases in the Diplomatic Security guard force and the use of contractors to provide protective details for American diplomats in high-threat environments. Diplomatic Security faces several policy and operational challenges. First, State is maintaining missions in increasingly dangerous locations, necessitating the use of more resources and making it more difficult to provide security in these locations. Second, although Diplomatic Security has grown considerably in staff over the last 10 years, staffing shortages in domestic offices, as well as other operational challenges, further tax Diplomatic Security’s ability to implement all of its missions. Finally, State has expanded Diplomatic Security without the benefit of solid strategic planning. Diplomatic Security officials stated that maintaining missions in dangerous environments such as Iraq and Afghanistan requires more resources and makes it more difficult for Diplomatic Security to provide a secure environment. Keeping staff secure, yet productive, in Iraq has been one of Diplomatic Security’s greatest challenges since 2004, when security for the U.S. Embassy in Baghdad transferred from the U.S. Department of Defense to Diplomatic Security. The U.S. mission in Baghdad—with 1,300 authorized U.S. civilian personnel—is one of the largest in the world. Maintaining Diplomatic Security operations in Iraq has required approximately 36 percent of its entire budget each fiscal year since 2004 and, as of September 2008, required 81 special agents to manage security operations. To support security operations in Iraq, Diplomatic Security has had to draw staff and resources away from other programs. Earlier in 2009, we reported that Diplomatic Security’s workload—and thus its resource requirements—will likely increase as the U.S. military transitions out of Iraq. U.S.
policymakers’ increased focus on Afghanistan poses another significant challenge for Diplomatic Security. The security situation in Afghanistan has deteriorated since 2005, and the number of attacks there increased from 2,388 in 2005 to 10,889 in 2008. Afghanistan is Diplomatic Security’s second largest overseas post with a staff of 22 special agents in 2009. Diplomatic Security plans to add 25 more special agents in 2010, more than doubling the number of agents in Afghanistan. In addition to operating in the Iraq and Afghanistan war zones, State is maintaining missions in an increasing number of other dangerous posts—such as Peshawar, Pakistan, and Sana’a, Yemen—some of which State would have previously evacuated. Diplomatic Security’s ability to fully carry out its mission of providing security worldwide is hindered by staffing shortages in domestic offices and other operational challenges such as inadequate facilities and pervasive language proficiency shortfalls. Despite Diplomatic Security’s staff growth over the last 10 years, some offices have been operating with severe staffing shortages. In 2008, approximately one-third of Diplomatic Security’s domestic suboffices operated with a 25 percent vacancy rate or higher. Several offices reported that this shortage of staff affected their ability to conduct their work. For example: The Houston field office reported that, for 6 months of the year, it operated at 50 percent capacity of nonsupervisory agents or lower, and for 2 months during the summer, it dipped down to a low of 35 percent. This staffing gap happened while the field office was experiencing a significant increase in its caseload due to the Western Hemisphere Travel Initiative. As a result, the Houston field office management reported that this combination overwhelmed its capabilities and resulted in a significant backlog of cases.
The New York field office reported that the number of special agents there dropped to 66 in 2008 from more than 110 agents in 2007. As a result, the office had to draw special agents from other field offices to cover its heavy dignitary protection load. In 2008, the Mobile Security Deployment (MSD) Office was authorized to have 94 special agent positions, but only 76 were filled. Furthermore, Diplomatic Security officials noted that not all staff in filled positions are available for duty. For example, in 2009, 22 agents assigned to MSD were in training. As a result of the low level of available staff, Diplomatic Security reported that many posts go for years without updating their security training. Officials noted that this lack of available agents is particularly problematic given the high number of critical threat posts with only 1-year tours, which would benefit from frequent training. State officials attributed these shortages to the following three factors: Staffing the Iraq mission: The Iraq mission in 2008 required 16 percent of Diplomatic Security’s staff. We reported that, in order to provide enough special agents in Iraq, Diplomatic Security had to move agents from other programs, and those moves have affected the agency’s ability to perform other missions, including providing security for visiting dignitaries and visa, passport, and identity fraud investigations. Protection details: Diplomatic Security draws agents from field offices, headquarters, and overseas posts to participate in protective details and special events, such as the Olympics. Recently, Diplomatic Security’s role in providing protection at such major events has grown and will require more staff. Normal rotations: Staff take home leave between postings and sometimes are required to take training before starting their next assignment.
This rotation process regularly creates a labor shortage, which affects Diplomatic Security’s ability to meet its increased security demands. In 2005, Diplomatic Security identified the need for a training float— additional staff that would allow it to fill critical positions and still allow staff time for job training—but Diplomatic Security has not been able to implement one. This is consistent with our observation that State has been unable to create a training float because its staff increases have been absorbed by the demand for personnel in Iraq and Afghanistan. Diplomatic Security requested funding to add over 350 security positions in fiscal year 2010. However, new hires cannot be immediately deployed overseas because they must meet training requirements. In addition to hiring new special agents, Diplomatic Security established the Security Protection Specialist (SPS) position in February 2009 to create a cadre of professionals specifically trained in personnel protection who can provide oversight for the contractor-operated protective details in high-threat posts. Because of the more targeted training requirements, Diplomatic Security would be able to deploy the SPS staff more quickly than new hire special agents. However, Diplomatic Security has had difficulty recruiting and hiring a sufficient number of SPS candidates. According to senior Diplomatic Security officials, it may cancel the program if it cannot recruit enough qualified candidates. Diplomatic Security faces a number of other operational challenges that impede it from fully implementing its missions and activities, including: Inadequate buildings: State is in the process of updating and building many new facilities. However, we have previously identified many posts that do not meet all security standards delineated by the Overseas Security Policy Board and the Secure Embassy Construction and Counterterrorism Act of 1999. 
Foreign language deficiencies: Earlier this year, we found that 53 percent of Regional Security Officers do not speak and read foreign languages at the level required by their positions, and we concluded that these foreign language shortfalls could be negatively affecting several aspects of U.S. diplomacy, including security operations. For example, an officer at a post of strategic interest said that, because she did not speak the language, she had transferred a sensitive telephone call from a local informant to a local employee, which could have compromised the informant’s identity. Experience gaps: Thirty-four percent of Diplomatic Security’s positions (not including those in Baghdad) are filled with officers below the position’s grade. For example, several Assistant Regional Security Officers with whom we met were in their first overseas positions and stated that they did not feel adequately prepared for their job, particularly their responsibility to manage large security contracts. We previously reported that experience gaps can compromise diplomatic readiness. Host country laws: At times, host country laws prohibit Diplomatic Security from taking all the security precautions it would like outside an embassy. For example, Diplomatic Security officials said that they prefer to arm their local guard forces and their special agents; however, several countries prohibit this. In cases of attack, this prohibition limits Diplomatic Security’s ability to protect an embassy or consulate. Balancing security with the diplomatic mission: Diplomatic Security’s desire to provide the best security possible for State’s diplomatic corps has, at times, been in tension with State’s diplomatic mission. For example, Diplomatic Security has established strict policies concerning access to U.S. facilities that usually include both personal and vehicle screening.
Some public affairs officials—whose job it is to foster relations with host country nationals—have expressed concerns that these security measures discourage visitors from attending U.S. Embassy events or exhibits. In addition, the new embassies and consulates, with their high walls, deep setbacks, and strict screening procedures, have evoked the nickname “Fortress America.” Although some planning initiatives have been undertaken, neither State’s departmental strategic plan nor Diplomatic Security’s bureau strategic plan specifically addresses Diplomatic Security’s resource needs or management challenges. Diplomatic Security’s tremendous growth over the last 10 years has been reactive and has not benefited from adequate strategic guidance. State’s strategic plan does not specifically address Diplomatic Security’s resource needs or management challenges, as required by the Government Performance and Results Act (GPRA) and other standards. While State’s strategic plan for 2007-2012 has a section identifying security priorities and goals, we found that it did not identify the resources needed to meet these goals or address all of the management challenges we identified in this report. Diplomatic Security has undertaken some planning efforts at the bureau and office level, but these efforts also have limitations. First, Diplomatic Security creates an annual bureau strategic plan. While this plan lists priorities, goals, and indicators, these elements are not always linked together. Further, the plan does not identify what staff, equipment, or funding would be needed. Second, Diplomatic Security has created a Visa and Passport Security Strategic Plan to guide its efforts to disrupt individuals and organizations that attempt to compromise the integrity of U.S. travel documents. Third, Diplomatic Security reported that it is currently examining all of its security programs to determine how funding and personnel resources are distributed and support its goals.
Finally, Diplomatic Security uses established security standards and staffing matrixes to determine what resources are needed for various activities. However, while these various tools help specific offices or missions plan their resource requests, they are not useful for determining overall bureau needs. Several senior Diplomatic Security officials noted that Diplomatic Security remains reactive in nature, stating several reasons for its lack of long-term strategic planning. First, Diplomatic Security provides a support function and must react to the needs of State; therefore, it cannot plan its own resources until State determines overall policy direction. Second, while State has a 5-year workforce plan that addresses all bureaus, officials stated that Diplomatic Security does not use this plan to determine its staffing needs. Finally, past efforts to strategically plan Diplomatic Security resources have gone unheeded. For example, Diplomatic Security’s bureau strategic plan for fiscal year 2006 identified a need to (1) develop a workforce strategy to recruit and sustain a diverse and highly skilled security personnel base and (2) establish a training float to address recurring staffing problems. However, as of September 2009, Diplomatic Security had not addressed either of those needs. Diplomatic Security officials stated they hope to participate in a new State management initiative, the Quadrennial Diplomatic and Development Review (QDDR). This review, which will be managed by a senior leadership team under the direction of the Secretary of State, is designed to provide the short-, medium-, and long-term blueprints for State’s diplomatic and development efforts and offer guidance on how State develops policies, allocates its resources, deploys its staff, and exercises its authorities. 
In our report, we recommended that the Secretary of State—as part of the QDDR or as a separate initiative—conduct a strategic review of the Bureau of Diplomatic Security to ensure that its missions and activities address State’s priority needs. This review should also address key human capital and operational challenges faced by Diplomatic Security, such as operating domestic and international activities with adequate staff; providing security for facilities that do not meet all security standards; staffing foreign missions with officials who have appropriate language skills; operating programs with experienced staff at the commensurate grade; and balancing security needs with State’s need to conduct its diplomatic mission. State agreed with our recommendation and noted that, although it is currently not planning to perform a strategic review of the full Diplomatic Security mission and capabilities in the QDDR, the Under Secretary for Management and the Assistant Secretary for Diplomatic Security are completely committed to ensuring that Diplomatic Security’s mission will benefit from this initiative. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time. For questions regarding this testimony, please contact Jess T. Ford at (202) 512-4128 or fordj@gao.gov. Individuals making key contributions to this testimony include Anthony Moran, Assistant Director; Miriam Carroll Fenton; Joseph Carney; Jonathan Fremont; and Antoine Clark.
This testimony discusses the Department of State's (State) Bureau of Diplomatic Security (Diplomatic Security), which is responsible for the protection of people, information, and property at over 400 embassies, consulates, and domestic locations. Since the 1998 bombings of U.S. Embassies in East Africa, the scope and complexity of threats facing Americans abroad and at home have increased. Diplomatic Security must be prepared to counter threats such as crime, espionage, visa and passport fraud, technological intrusions, political violence, and terrorism. The statement today is based on a GAO report that was issued on November 12, 2009. It will discuss (1) the growth of Diplomatic Security's missions and resources and (2) the challenges Diplomatic Security faces in conducting its work. To address these objectives in our report, GAO (1) interviewed numerous officials at Diplomatic Security headquarters, several domestic facilities, and 18 international postings; (2) analyzed Diplomatic Security and State budget and personnel data; and (3) assessed challenges facing Diplomatic Security through analysis of interviews with personnel positioned domestically and internationally, budget and personnel data provided by State and Diplomatic Security, and planning and strategic documentation. GAO conducted this performance audit from September 2008 to November 2009, in accordance with generally accepted government auditing standards. Those standards require that GAO plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. GAO believes that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 1998, Diplomatic Security's mission and activities--and, subsequently, its resources--have grown considerably in reaction to a number of security incidents.
As a consequence of this growth, we identified several challenges. In particular (1) State is maintaining a presence in an increasing number of dangerous posts, which requires additional resources; (2) staffing shortages in domestic offices and other operational challenges--such as inadequate facilities, language deficiencies, experience gaps, and the difficulty of balancing security needs with State's diplomatic mission--further tax Diplomatic Security's ability to implement all of its missions; and (3) Diplomatic Security's considerable growth has not benefited from adequate strategic guidance. In our report, we recommend that the Secretary of State--as part of the agency's Quadrennial Diplomatic and Development Review (QDDR) or separately--conduct a strategic review of Diplomatic Security to ensure that its missions and activities address its priority needs.
To address our objectives, we reviewed ongoing efforts within Defense to reduce Defense Transportation System (DTS) costs by eliminating redundancy in automated information systems and in the business processes they support. We examined governing regulations and directives, evaluated plans and actions to select transportation migration systems and improve transportation processes, and interviewed key Defense officials. We performed our audit from June 1995 through May 1996 in accordance with generally accepted government auditing standards. We worked principally at the offices of the Deputy Under Secretary of Defense for Logistics (Transportation Policy) in Washington, D.C.; the U.S. Transportation Command’s (USTRANSCOM) Joint Transportation Corporate Information Management (CIM) Center (JTCC) at Scott Air Force Base, Illinois; and at development sites for selected migration systems. Appendix I details our scope and methodology. Defense provided written comments on a draft of this report. These comments are reprinted in appendix II and are discussed in the agency comments and evaluation section of the report. Defense relies on transportation services and information systems to help ensure that cargo, supplies, and people are conveyed to designated locations as quickly as possible during peace and war. Information is needed to perform functions like deploying troops for wartime, packing and shipping cargo for transport, and drawing plans for ship loading. Because today’s defense strategies use fewer forward deployed troops and equipment, the transportation function and the information systems supporting it have become increasingly important. During fiscal year 1995, the total cost of common-user Defense transportation amounted to about $6 billion. For the same period, USTRANSCOM spent approximately $164.5 million on information technology to support transportation services. While transportation is crucial to achieving U.S.
military objectives, Defense transportation business operations are very similar, or in some cases identical, to those of the commercial transportation industry. This commonality enables Defense to rely on the commercial transportation industry to meet about 85 percent of its peacetime and wartime transportation needs. Moreover, commercial transportation providers and port management authorities have developed or purchased their own automated information systems to perform many of the same functions that Defense transportation performs, such as those for moving passengers, documenting and reporting on cargo, and operating sea and aerial ports. Defense recognizes these similarities in its own policies and procedures, which call for using commercial automated information systems when feasible. Over the years, various studies, commissions, and internal DOD reports have noted that military transportation processes are fragmented, outdated, inefficient, and costly. In addition, Defense has long recognized that timely, accurate, and comprehensive information on transportation activities would greatly increase its effectiveness. For example: In 1992, GAO reported serious problems with the services’ deployment databases during Operation Desert Shield/Storm. Inaccurate and incomplete database information resulted in erroneous lift requirements, inefficient use of lift, and revisions to movement routing and scheduling. Defense was forced to rely on informal, personal communication and manual methods to obtain the correct amount of lift and to determine which units were ready to move. According to a Defense report on Operation Desert Shield/Storm logistics, military airport facilities became so overloaded with high-priority sustainment cargo that other cargo was hastily repacked into shipping containers with partial documentation or without any documentation and reshipped by surface transport.
Because little or no documentation accompanied the cargo, over half of the 40,000 containers sent to Saudi Arabia had to be reopened to determine their contents. In 1993, GAO reported that Defense’s ability to effectively manage its transportation operations was limited, in part, because of redundancy and the lack of standardization among its automated information systems. Specifically, we noted, and Defense agreed, that the Continental United States Freight Management System (CFM) would duplicate functions similar or identical to those of transportation systems concurrently under development by the Air Force, Marine Corps, Army, and the Defense Logistics Agency (DLA). In a 1994 report, Reengineering the Defense Transportation System: The “Ought To Be” Defense Transportation System for the Year 2010, Defense recognized that change to transportation business processes is key to realizing large cost savings and performance improvements. Defense further maintained that nothing less than fundamental change would be required to achieve such gains in savings and productivity. In 1995, Defense reported that the lack of visibility over shipments and units entering a theater of operations has been a chronic problem experienced in every major U.S. deployment during the 20th century. The report asserted that acquisition of transportation automated information systems providing more timely, accurate, and complete information would help resolve the problem. In early 1996, GAO reported that Defense common-user transportation costs were two to three times higher than comparable commercial carrier costs. Higher costs were attributed, in part, to fragmented business processes and an inefficient organizational structure. The Congress also is concerned about continuing problems in defense transportation and has taken legislative action to reduce its costs.
The House Committee on National Security, in its report on the Defense Authorization Act for Fiscal Year 1996, estimated that approximately $100 million could be saved each year if commissaries and exchanges were allowed to contract directly, using the most cost-effective carriers to transport products overseas. Subsequently, the Congress approved a provision in the Defense Authorization Act for Fiscal Year 1996 authorizing the commissaries and military exchanges to negotiate directly with private carriers for the most cost-effective transportation of commissary and exchange supplies by sea without relying on the Military Sealift Command or the Military Traffic Management Command. Although Defense has repeatedly attempted to correct its transportation problems over the years, many of its actions have focused on acquiring information technology rather than on thoroughly analyzing its business processes. Such an analysis would identify the root causes of Defense’s transportation problems. Identification of the root causes helps an organization focus on appropriate means for addressing the problem and serves to direct resources where needed to achieve quality improvements in operations. These process improvements, in turn, provide the basis for the acquisition of technology to support the newly improved processes. Defense’s CIM program was intended to institutionalize this type of approach to information systems management. In 1989, the Deputy Secretary of Defense established the CIM program to reduce the cost and improve the efficiency of operations. Defense anticipated that it would reduce costs significantly by streamlining its business practices, consolidating information systems into a core set of migration systems, and standardizing data.
To carry out the CIM initiatives for Defense transportation, the Deputy Under Secretary of Defense for Logistics chartered JTCC, in August 1993, under the command authority of USTRANSCOM. JTCC’s primary objective is to improve the efficiency and effectiveness of the DTS by using business process reengineering techniques, designating and implementing migration systems selections, and leading data standardization efforts. By migrating to 28 transportation systems, Defense estimated in February 1996 that it would save $240 million over a 6-year period, primarily through elimination of duplicate legacy systems. Descriptions of Defense’s transportation business processes and the migration systems selections that support them are provided in appendix III. In an October 1993 CIM memorandum, after becoming dissatisfied with the pace of improvements, the Deputy Secretary of Defense directed all functional business areas to accelerate efforts to select and implement migration systems by March 1997. In response, JTCC initiated a structured approach, in April 1994, to identify, select, and implement transportation migration systems by March 1997. The approach was systematic, communicated in a written plan, and agreed to by departmentwide transportation process owners and stakeholders. Further, the approach called for consideration of alternatives, including a review of commercial products, and required that cost-benefit analyses be prepared in support of migration systems selections. Defense has little assurance that its transportation system selections are cost-effective. 
To meet a March 1997 deadline imposed by the Deputy Secretary of Defense, JTCC hurriedly implemented its migration system selection approach without (1) adequately evaluating government and/or commercial sector alternatives in selecting 17 of the 28 migration systems, (2) using complete and verified cost information in choosing 7 systems from among numerous legacy systems which could provide the same basic functionality, and (3) assessing the impact that significant changes to transportation operations—made through reengineering and outsourcing—will have on its migration system selections. In some cases, Defense selected migration systems that will lose money if implemented as migration systems. Governmentwide and DOD regulations require that a range of feasible alternatives be considered before significant changes to business processes or information systems are made. These regulations call for aggressive examination of alternatives to ensure that innovative and improved ways of doing business are considered. The Office of Management and Budget (OMB) Circular A-94, General Services Administration’s (GSA) Federal Information Management Regulations, and DOD Instruction 7041.3 cite acquisition of new systems, sharing existing systems, contracting for services, using commercial off-the-shelf software, and maintaining the status quo as examples of alternatives that should be considered. In addition, in November 1993, the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence issued criteria requiring that migration systems selection consider a reasonable range of alternatives. However, in selecting systems for migration, Defense did not adequately consider alternatives available in other parts of the government and/or the commercial sector. As a result, it has little assurance that the systems it chose are the most cost-effective and appropriate. The degree to which Defense considered alternatives to the systems chosen varies from system to system. 
However, in all cases, alternatives were not considered to the extent that Defense’s own guidance calls for. Specifically: For all system selections, Defense did not consider developing new systems or contracting for services as required by Office of Management and Budget, General Services Administration, and Defense directives. According to the Chair of Defense’s Transportation CIM Advisory Group, the March 1997 deadline provided insufficient time to fully evaluate alternatives. For 17 of the 28 transportation systems selected, Defense made its decisions based on the judgment of transportation experts who determined that these 17 systems support a transportation business function so unique that nothing else could be considered as a feasible alternative. However, JTCC officials could provide no documented analysis to support this conclusion. Seven migration systems were selected after considering a narrow range of alternatives. The remaining four systems were designated “interim” systems because Defense believes alternative solutions exist for these systems. According to JTCC officials, alternatives will be considered at a later, unspecified date. To its credit, Defense reviewed commercial off-the-shelf transportation software products for some transportation business areas while making its migration system selections. However, this review was inadequate because it did not analyze the degree to which unmodified software could meet unique Defense requirements, identify the expected cost to make necessary software modifications, determine the time required to make modifications, and provide for a hands-on view of the software in operation. While the study determined that about 700 commercially available software packages provided some degree of transportation functionality, 24 were selected for a final detailed review. 
Out of the 24 finalists, JTCC concluded that (1) none would fully support Defense’s transportation requirements without software modifications and (2) required modifications could not be made before March 1997 at an acceptable cost. Although Defense asserts that required modifications would be costly, it could not provide documented analysis to support this conclusion. Further, Defense plans to make $13 million worth of software modifications to just five of its in-house selections. Also, despite Defense’s conclusion regarding the inability of commercially available software to fully support transportation requirements, a government contractor is making extensive use of one of the rejected products in its development of the Global Transportation Network. To meet the March 1997 deadline mandated in the Deputy Secretary’s October 1993 memorandum, Defense selected transportation migration systems based on incomplete, unverified cost data without comparing all the benefits of each system. Consequently, there is little assurance that these selected systems will help contain the cost of performing Defense’s transportation mission to any great extent or bring about the benefits envisioned by the migration strategy. Defense regulations stress the importance of considering system costs and benefits to ensure that correct, well-informed decisions are made about information systems. DOD Directive 8120.1 and DOD Instruction 7041.3 require preparation of a functional economic analysis to document all costs (both direct and indirect), all quantifiable benefits, and all significant nonquantifiable benefits. Also, the Assistant Deputy Under Secretary of Defense for Transportation Policy identifies conducting objective analyses that show favorable investment returns as the best way to ensure funding for migration systems. To be useful in making fully informed business decisions, such cost information should be complete and verified. 
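The investment returns that Defense's guidance calls for reduce to a simple ratio of net quantified benefit to cost. A minimal sketch of that arithmetic (the function and the benefit/cost figures below are illustrative assumptions; only the resulting loss rates of $0.67 and $0.04 per dollar, cited in this report for ALM and CMOS, come from the underlying analysis):

```python
def return_per_dollar(total_benefits, total_costs):
    """Net gain (or loss) per dollar invested: (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

# Illustrative figures (in $ millions) chosen to reproduce the reported loss
# rates for two selected systems; they are not the actual program totals.
alm = return_per_dollar(total_benefits=33.0, total_costs=100.0)
cmos = return_per_dollar(total_benefits=96.0, total_costs=100.0)
print(alm, cmos)  # -0.67 -0.04
```

A negative result means the system loses that fraction of every invested dollar, which is why a selection process that never computes the ratio can approve money-losing systems.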
Instead of preparing the required functional economic analyses and documenting investment returns, Defense selected its transportation migration systems based primarily on a system’s ability to meet current functional requirements. After the selections were made, JTCC continued to analyze savings projections associated with migration systems. This later analysis culminated in a January 1996 study discussed at the end of this section. Had Defense followed its own regulations and calculated investment returns, it would have found—based on data available when the migration systems were selected—that two of the selected systems would lose money if implemented as migration systems. The Air Loading Module (ALM) would lose $0.67 out of every dollar invested and the Cargo Movement Operations Systems (CMOS) would lose $0.04 out of every dollar invested. JTCC’s analyses also did not include all costs associated with its evaluation of in-house systems. At least $18 million in costs were excluded: $16 million for JTCC’s analysis of candidate migration systems and $2 million for maintaining migration system hardware. The magnitude of other exclusions remains unknown. For example, JTCC estimates that, collectively, training on migration systems will be required at nearly 300 sites. However, its analyses did not include estimates of the number of persons to be trained at each site or the cost of productivity losses associated with that training. JTCC also estimates that hardware and off-the-shelf software totaling $10 million will be purchased between fiscal year 1996 and fiscal year 1999. However, JTCC’s estimates do not include the cost of labor necessary to purchase these items. If JTCC had included these costs in its systems selection analyses, it would have found that the overall return on investment would have decreased. For example, as stated above, $16 million in costs related to JTCC’s own work on migration systems was excluded from analysis. 
JTCC was unable to attribute a specific percentage of these costs to its work on selecting the seven systems for which in-house alternatives competed against one another. However, if just 6.3 percent of this $16 million were factored into the analysis, Defense would barely break even on its investment in those systems. Moreover, as figure 1 shows, Defense would actually lose money on its investment if more than 6.3 percent were included. Still, even if recommending migration systems had been accomplished for free, the estimated reduced cost associated with the selected alternatives ($1.02 million) would be suspect because JTCC did not verify the system costs used in selecting the migration systems. Unlike the information obtained on each system’s functional and technical capabilities—which JTCC meticulously verified—system cost information was taken at face value. JTCC officials concede that the costs used for its analyses were very rough and resulted in inaccurate, low estimates of migration system costs. Further, since JTCC’s migration systems selection methodology emphasized the importance of meeting current functional requirements, JTCC’s analyses of in-house systems excluded the required quantification and comparison of new benefits. Although JTCC officials stated that the benefits of migration systems go beyond meeting current functional requirements, benefits such as operating more easily in remote locations and improving military readiness were not addressed in the migration system decision documents and remain unquantified. These decision documents instead focus on quantifying each system’s current functional and technical merits to the exclusion of new benefits a system may offer. Although the transportation migration systems were selected and approved prior to April 1995, Defense continued to prepare justification for its migration systems selections—culminating in a January 1996 study entitled A Business Case Study for Transportation Systems Migration. 
This case study documents additional projected cost savings and avoidances that were not considered during the migration systems selection process. However, these estimates of cost savings and avoidances are not reliable for a number of reasons. In its business case study, JTCC estimates that the transportation migration strategy will produce cost avoidances and savings of $4 billion. However, the validity of this figure is questionable. First, JTCC relied on cost estimates from 13 different sources using a variety of forecasting horizons (from 4 to 17 years) without consistently accounting for the timing of estimated costs and benefits. OMB Circular A-94 and DOD Instruction 7041.3 identify the timing of costs and benefits as an important consideration in deciding whether a government program can be justified on economic principles. These regulations further require that estimated gains and losses occurring in different time periods be converted to a standard unit of measurement that accounts for the time-value of money. Second, JTCC did not report estimated savings and avoidances in constant base-year dollars. By mixing base years, JTCC has failed to show the expected benefits and costs associated with the transportation migration systems in terms of meaningful, actual purchasing power. Third, Defense would be expected to realize $3.75 billion (93 percent) of the reported $4 billion in savings and avoidances whether or not the migration strategy was implemented. For example, JTCC estimates that Defense will avoid and/or save $92 million by implementing and operating the TC-AIMS II migration system over a 13-year period. However, the Air Force’s CMOS system, which is now a component of the TC-AIMS II migration system, predates the migration effort and was expected to save $57 million—without being implemented in any service but the Air Force. 
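The timing requirement in OMB Circular A-94 amounts to discounting each year's estimated gains and losses to a present value before they are summed or compared. A generic sketch of that conversion (the cash flows and the 7 percent rate are illustrative assumptions, not figures from JTCC's study):

```python
def present_value(cash_flows, rate):
    """Discount a list of yearly amounts (year 0 first) to a present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Example: $10M saved in each of 5 years is worth less than $50M today,
# so summing undiscounted savings overstates the benefit.
pv = present_value([10.0] * 5, rate=0.07)
print(round(pv, 2))  # 43.87
```

Because streams with 4-year and 17-year horizons shrink by very different amounts when discounted, adding their undiscounted totals, as the business case study did, produces a figure with no consistent economic meaning.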
The remaining savings and avoidances that can be attributed directly to migration consist of estimates that rely on questionable assumptions. For example, JTCC assumed that each legacy system, if not terminated, would attempt to acquire all the functionality that a fielded migration system would have. Based on this assumption, JTCC calculated that Defense will avoid $101 million in costs for the legacy systems that competed as in-house alternatives. For example, JTCC estimated that by migrating to the TC-AIMS II system, Defense will avoid spending $17.4 million between fiscal year 1998 and fiscal year 2001 to upgrade the unit movement function of the CMOS system. CMOS program officials maintain that this estimate is grossly overstated—more than double the Air Force-approved budget for the entire CMOS program during the same period. Similarly, JTCC estimated that Defense will avoid spending $18 million over the same period to upgrade the Transportation Coordinator - Automated Command and Control Information System (TC-ACCIS) unit movement functionality. This estimate exceeds, by nearly 28 percent, prior estimates for the entire TC-ACCIS program that already include system enhancements. Another $96 million in migration-related cost avoidances are associated with Defense’s data standardization, functional process improvement, electronic data interchange, and Defense Logistics Management System (DLMS) efforts. This estimate may overstate software maintenance costs by as much as $61.7 million, since it does not consider maintenance costs that legacy systems already planned to incur over the next 5 years. JTCC officials stated that preparing a cost analysis that takes into account what each program already planned to spend for software maintenance would require a level of visibility into each system that JTCC does not have. 
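The potential overstatement described above follows from simple netting: a claimed cost avoidance is real only to the extent that it exceeds spending the legacy programs already planned to incur. A sketch using the report's figures (the calculation itself is ours, for illustration):

```python
claimed_avoidance = 96.0    # $ millions attributed to migration-related efforts
planned_maintenance = 61.7  # $ millions legacy systems already planned to spend

# Spending that would occur with or without migration cannot be "avoided,"
# so netting it out shrinks the defensible avoidance estimate.
net_avoidance = claimed_avoidance - planned_maintenance
print(round(net_avoidance, 1))  # 34.3
```

On these figures, the defensible avoidance could be as little as $34.3 million of the $96 million claimed.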
In May 1995, Defense launched an effort to reengineer the Department’s transportation processes, focusing first on transportation acquisition and financial payment/billing processes. According to the Assistant Deputy Under Secretary of Defense for Transportation Policy, this effort will examine transportation issues from a top-down perspective and will change Defense policies to affect the way work is done in the transportation acquisition and finance areas. Defense expects the reengineering of its remaining transportation processes to be completed within the next 6 years. In making its migration system selections, however, Defense did not assess the impact that these changes and other potential significant changes to transportation operations—such as outsourcing—would have on its system selections. Consequently, Defense may end up investing in systems that do not provide positive investment returns before such changes to transportation operations are made. For example, Defense plans to spend $63 million from fiscal year 1996 through fiscal year 2001 to implement a migration system that will automate and standardize the moving, storing, and managing of personal property for Defense personnel. At the same time, the Department is considering the outsourcing of major components of the personal property function. If outsourced, contractors will perform the management, administrative, and operational duties that Defense now performs for personal property movement and storage. As a result, further spending on the migration system may be questionable since the system may no longer be needed. Also, in following its migration strategy, Defense believes that the implementation of migration systems will resolve some of its process problems that may be more appropriately addressed through reengineering. 
For example, to alleviate water port loading dock congestion during full-scale deployment, Defense has selected a migration system to more quickly develop plans for loading ships. This system, the Integrated Computerized Deployment System (ICODES), is capable of dramatically reducing the time required to plan the load. However, without performing a thorough analysis of the nature of dock congestion, Defense cannot expect its load planning migration system to alleviate the congestion. In fact, according to an ICODES program official, port congestion is not caused by lengthy planning times. Rather, unit commanders load more equipment than necessary since they do not believe that all of it will arrive at the right location when needed. According to the official, this problem was so severe during Operation Desert Storm that unit commanders were typically bringing division-size loads to port. Defense’s initial approach to selecting and implementing transportation migration systems was systematic, communicated in a written plan, and agreed to by departmentwide transportation process owners and stakeholders. It was geared to ensuring that the Department chose systems that would meet its needs in the most cost-effective fashion. However, faced with the March 1997 deadline, Defense deviated from this approach and selected systems that may provide little in the way of new savings or, in some cases, will actually lose money. We believe Defense’s management approach to implementing its transportation system migration strategy was shortsighted. By not considering alternatives, not relying on complete cost estimates, and by not assessing the potential impact of outsourcing and reengineering on its migration systems, Defense essentially gambled that systems migration would achieve anticipated savings and resolve problems with transportation business processes. As a result, these selections may turn out to be poor investments and preclude the use of better commercial alternatives. 
We recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Logistics to complete the following actions. To ensure that positive investment returns are achieved before reengineered or outsourced processes are implemented, immediately establish current cost, benefit, investment return, and schedule baselines for the seven migration systems that were selected from among in-house legacy systems. For these systems, terminate the migration of transportation systems for which migration is shown to be a poor investment. The Department of Defense provided written comments on a draft of this report. The Deputy Under Secretary of Defense for Logistics partially concurred with the report’s recommendations and stated that Defense would terminate systems that are shown to be poor investments. Defense’s response to this report is summarized below, along with our evaluation, and is presented in appendix II. In its response, Defense stated that its selection of migration systems was driven by the Deputy Secretary of Defense’s October 1993 memorandum which directed expedited selection and implementation of migration systems. Further, Defense stated that in accordance with DOD 8020.1, it selected transportation migration systems based primarily on their ability to improve support to the warfighter and enhance readiness. Defense added that cost effectiveness and economic factors were also considered when selecting migration systems. We recognize that the October 1993 memorandum was the primary basis for migration system selections. However, we believe that Defense erred in implementing the memorandum, because it did not follow its own regulations on systems development life cycle management. These regulations are designed to ensure that all essential ingredients to making sound business decisions are incorporated into all major technology investment decisions. 
In particular, DOD 8120.1-M directs that migration system selections be based on functional economic analyses (FEA) and that migration systems follow DOD life cycle management policies and procedures, to include making maximum use of commercial off-the-shelf (COTS) products. However, despite these requirements, Defense had just one up-to-date FEA available at the time it made its transportation migration selection decisions. Further, the analyses that Defense conducted in lieu of preparing the required FEAs did not (1) adequately consider alternatives (such as the use of COTS products), (2) rely on complete, verified cost and benefit data, and (3) consider the potential impact of change to transportation operations that reengineering would have on its system selections. We are sending copies of this report to the Ranking Minority Member of the Subcommittee on Military Readiness, House Committee on National Security; the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations, the Senate Committee on Armed Services, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director of the Office of Management and Budget; the Commander-in-Chief, U.S. Transportation Command; and other interested parties. Copies will be made available to others on request. If you have any questions about this report, please call me at (202)512-6240 or Franklin W. Deffer, Assistant Director, at (202)512-6226. Major contributors to this report are listed in appendix V.

In addressing our objectives, we reviewed ongoing efforts within Defense to contain DTS costs by eliminating redundancy in automated information systems and in the business processes they support. 
We examined a number of governing criteria including GSA’s information resources management regulations; OMB policies and procedures for managing federal information resources; and Defense directives and instructions pertaining to acquisition of automated systems, defense information management, and life cycle management of automated information systems. We evaluated plans and actions to select migration systems and improve key transportation processes including USTRANSCOM’s Defense Transportation System 2010 Action Plan and 2015 Strategic Plan; the DOD Transportation Process Improvement, Systems Migration, and Data Standardization Plan; and 21 Integration Decision Papers justifying migration selection decisions. We analyzed Defense’s cost containment strategy including comparing investment costs among competing systems and identifying costs associated with systems not selected for retention. In performing our investment analysis, we used cost data published in the Integration Decision Papers, which the JTCC had not validated but considered the best data available. We worked primarily with officials at USTRANSCOM’s JTCC, Scott Air Force Base, Illinois, to determine the regulating criteria, methodology, and status of Defense’s cost containment and streamlining efforts. We also interviewed the Deputy Director for Command, Control, Communications, and Computers at the Military Sealift Command, Washington Navy Yard, Washington, D.C.; the Program Manager for the Global Transportation Network; the Assistant for Travel and Transportation Management to the Assistant Deputy Under Secretary for Transportation Policy-Logistics; staff at Air Force Transportation (AF/LGT), Deputy Chief of Staff (Logistics); and the former Transportation Management Division Chief, Directorate of Transportation Energy and Troop Support, Office of the Deputy Chief of Staff for Logistics, Department of the Army. 
To see migration projects firsthand, we interviewed representative officials and received demonstrations of CMOS at Gunter Air Force Base, Montgomery, Alabama; the Navy Material Transportation Office Operations and Management Information System under development at Norfolk Naval Base, Norfolk, Virginia; the Consolidated Aerial Port System II and Passenger Reservation and Manifest System at Charleston Air Force Base in Charleston, South Carolina; and the Worldwide Port System in operation at the Military Traffic Management Command’s Major Port Command in Charleston, South Carolina. To better understand overall transportation issues, we interviewed the Chairman, Information Technology Committee, American Association of Port Authorities; and the manager of the Systems and Programming Information Services, South Carolina State Ports Authority. We interviewed the Vice President for Technology at Boeing Information Services regarding private industry system migration efforts. We also provided status briefings to the Assistant Deputy Under Secretary of Logistics (Transportation Policy) at the Pentagon in Arlington, Virginia. Our audit was performed from June 1995 through May 1996 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Defense’s letter dated August 7, 1996. 1. We have clarified our recommendation to specify the systems requiring cost, benefit, investment return, and schedule baselines. 2. According to Defense, the total number of migration systems is 23, while the report states the number as 28. The 28 figure cited in our report is based upon the signed July 1995 memorandum from the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence which identifies 26 of the 28 migration systems listed in appendixes III and IV. 
The additional systems not listed in the memo, the Analysis of Mobility Platform (AMP) and the Joint Flow and Analysis System for Transportation (JFAST), are identified in JTCC’s Integration Decision Papers (IDP) as the two systems supporting the future operations component of the Global Transportation Network (GTN). The IDP for the transportation planning and execution functional area specifically recommends that Defense select AMP and JFAST as the migration systems for the future operations subfunctional area. Further, while Defense does not identify in its response which one of the four interim migration selections is incorrect, our report identifies the four systems as interim migration selections based upon information in the January 1996 A Business Case Study for Transportation Systems Migration. 3. According to a February 1995 Air Force paper, CMOS provides cost and operational benefits and a positive return on investment. However, these benefits and returns are relevant to the CMOS system only when it is deployed within the Air Force—but not to any other military service as a migration system. The figures cited in the February 1995 paper are based on a CMOS Functional Economic Analysis that is nearly 4 years old and that predated the migration effort. And although the February 1995 paper included some cost avoidances that were not considered in the CMOS FEA, it did not include an analysis of costs and benefits associated with migrating CMOS to the other military services. We modified our report to reflect that implementing CMOS as a migration system is a losing proposition. 
Installation Transportation Office/Traffic Management Office (ITO/TMO) - receive movement requirements; plan, monitor, and conclude movements; screen potential carriers; order conveyances; reserve space on scheduled carriers; and produce documentation for billing and statistical purposes
— Transportation Coordinator’s Automated Information Management System II (TC-AIMS II)
— Cargo Movement Operations System (CMOS)
— CONUS Freight Management (CFM)
— Canadian Transportation Automated Control System (CanTRACS)
— Passenger Reservation and Manifest System (PRAMS)
— Groups Operational Passenger System (GOPAX)
— Transportation Operational Personal Property System (TOPS)

Load Planning - planning to fit cargo, vehicles, and equipment onto specific aircraft, ships, and rail cars
— Air Loading Module (ALM)
— Integrated Computerized Deployment System (ICODES)

Port Management - planning for arriving passengers and cargo; preparing shipments for transport; supervising terminal operations
— In-transit Visibility-Modernization (ITV-MOD) Consolidated Aerial Port System II (CAPS II)
— Worldwide Port System (WPS)

Mode Clearance - actions taken to hand off cargo, passengers, and equipment from one transportation mode to another
— Navy Material Transportation Office Operations and Management Information System (NAOMIS)
— Integrated Booking System (IBS)
— Mobilization Movement Control (MOBCON)

Theater Transportation Operations - includes all business processes described above with the primary difference being a more extensive use of service and host country organizations
— Command and Control Information Processing System (C2IPS)
— Department of the Army Movement Management System-Redesign (DAMMS-R)

High-Level Transportation Planning and Execution - actions performed at the Commander-in-Chief (CINC) and CINC Component levels to plan and perform deployment, operational level movement, sustainment, and redeployment
— Airlift Deployment Analysis System (ADANS)
— Global Decision Support System (GDSS)
— ITV-MOD Headquarters On-Line System for Transportation (ITV-MOD HOST)
— Global Transportation Network (GTN)
— Analysis of Mobility Platform (AMP)
— Joint Flow and Analysis System for Transportation (JFAST)
— TRANSCOM Regulating and Command and Control Evacuation System (TRAC2ES)
— Enhanced Logistics Intra-Theater Support Tool (ELIST)
— Asset Management System (AMS)
— Integrated Command, Control, and Communications (IC3) Project
— Joint Air Logistics Information System (JALIS)
— Defense Transportation Tracking System (DTTS)

Plans and schedules transportation airlift missions for commercial aircraft and for the C-17, C-5, and C-141. The system also plans and schedules aerial refueling for the KC-10 and KC-135. Performs military and civilian aircraft load planning. Performs rapid time-phased force deployment data modeling for all transportation modes and deployment phases. Manages movement tracking, repair, modification, compliance with industry and regulatory requirements, receipt and disposal of equipment, and auditing of revenues and expenses for the Defense Freight Railway Interchange Fleet and the Army’s railroad container fleet. Routes and ranks cargo shipments originating in Canada and maintains all Canadian commercial transportation tenders and contracts. Accepts aircraft mission schedule information from GDSS and then distributes the schedule data to wing activities involved in aircraft launch, loading, and recovery. CFM(HOST) supports procurement of commercial freight and cargo transportation services. CFM(FM) is a field module which allows transportation officers to obtain routing and rating information via the Defense Information System Network or a commercial telephone line. CFM(HOST) and CFM(FM) together constitute CFM. Supports the collection, processing, and transmission of information concerning the movement of cargo entering aerial ports located outside the continental United States. 
CMOS supports both peacetime and contingency operations. Supports the management of joint-use theater land transportation. Provides near real-time satellite tracking of any sensitive cargo transported by commercial carriers and of classified arms, ammunition, and explosives. Compares the planned theater arrival schedule against a theater’s transportation assets, cargo handling equipment, facilities, and routes in order to produce a detailed plan of the daily flow of theater transportation including delays and constrictions. Worldwide command control system for strategic airlift and air refueling. Performs functions associated with arranging commercial transportation for groups of 21 or more passengers by air or surface transport. Transportation command and control system providing intransit visibility of units, passengers, and cargo during both peace and war. It also tracks patient movement and performs planning activities. GTN is the transportation command and control module of the Global Command and Control System. Standardizes booking procedures for unit and nonunit ocean-eligible cargo. This project consolidates four sealift transportation planning and execution systems onto one hardware platform. Facilitates ship loading by integrating digitized ship drawings and cargo data from multiple information sources. Performs command and control operations, passenger operations, and cargo movement operations at Air Mobility Command (AMC) aerial ports. Provides for a centrally located record of on-hand cargo and cargo movements to AMC aerial ports and operating sites around the world. Maintains airlift cargo data, manifest data, and air shipment information. Schedules all the Services’ fixed-wing and rotary-wing support airlift for nontactical passengers and cargo. Provides strategic transportation feasibility estimates. Plans and supports unit deployments. Also builds and maintains a database of force and equipment data on support assets and requirements. 
Plans the routes and obtains permission to use state highways for truck convoys.
Provides shipment information on clearing or challenging air-eligible cargo, supports asset visibility, and performs cargo manifesting and transportation billing processes.
Performs passenger reservation services for AMC, including flight and reservation processing and passenger processing.
Manages DOD personal property movement and storage information.
Provides in-transit visibility of patients, monitors patient medical equipment pools, and plans transportation for patients.
Performs water port terminal management functions.
Denice M. Millett, Evaluator-In-Charge
Michael W. Buell, Staff Evaluator
David R. Solenberger, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) transportation migration systems. GAO found that DOD: (1) has little assurance that its transportation migration systems are cost-effective; (2) selected its transportation migration systems without fully evaluating government or commercial alternatives; (3) relied on incomplete and unverified cost data in making its selections; (4) selected migration systems that will lose money if implemented; (5) did not consider developing new systems or contracting for services; (6) made some of its selection decisions based on the judgment of transportation experts; (7) reviewed commercial off-the-shelf transportation software products while making migration system selections; (8) did not analyze the extent to which unmodified software could meet its requirements, identify software modification costs, or provide a hands-on view of the software in operation; (9) did not assess how changes to its transportation operations would affect the migration systems; and (10) did not ensure that the new migration system would yield savings.
Eligibility for benefits under SSI, DI, Medicare, and Medicaid programs for individuals with disabilities is determined in part on whether an individual has a disability as defined in the Social Security Act. For purposes of these programs, a person is disabled if he or she has a medically determined physical or mental impairment that (1) has lasted or is expected to last at least 1 year or result in death and (2) prevents the person from engaging in substantial gainful activity (SGA). As of January 2003, SGA is defined as countable earnings—generally gross earnings less the cost of items that, because of the impairment, a person needs to work—of more than $800 per month. The Social Security Administration’s (SSA) interpretation of disability specifies that for a person to be determined to be disabled, the impairment must be of such severity that the person not only is unable to do his or her previous work but, considering the person’s age, education, and work experience, is unable to do any other kind of substantial work that exists in the national economy. The Ticket to Work and Work Incentives Improvement Act of 1999 allowed states to expand the availability of Medicaid coverage for individuals with disabilities who work, even though they earn more than the SGA level. States that implement Ticket to Work Buy-In programs may consider as disabled those individuals who, except for the fact that they are earning more than the SGA $800 monthly amount, otherwise would meet the Social Security Act definition of disabled. Individuals with disabilities become eligible for Medicaid in a variety of ways but primarily through SSI or DI eligibility (see table 1). Individuals with disabilities, however, must also meet Medicaid income and asset requirements in order to obtain Medicaid coverage. 
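The SGA test described above reduces to a simple arithmetic comparison. The sketch below is illustrative only (the function names are hypothetical, not SSA's actual implementation); the $800 threshold is the January 2003 figure cited above:

```python
# Illustrative sketch of the SGA test described above -- not SSA's
# actual implementation. Countable earnings are gross monthly earnings
# less the cost of items the person needs in order to work because of
# the impairment; earnings above $800/month count as SGA.

SGA_MONTHLY_LIMIT = 800  # dollars, as of January 2003

def countable_earnings(gross_monthly, impairment_work_expenses=0):
    """Gross earnings minus impairment-related work expenses."""
    return max(gross_monthly - impairment_work_expenses, 0)

def engaged_in_sga(gross_monthly, impairment_work_expenses=0):
    """True if countable earnings exceed the SGA limit."""
    return countable_earnings(gross_monthly, impairment_work_expenses) > SGA_MONTHLY_LIMIT

# A worker grossing $900 with $150 of impairment-related expenses has
# countable earnings of $750 and is below the SGA limit.
print(engaged_in_sga(900, 150))  # False
print(engaged_in_sga(900))       # True
```

Note how the deduction for impairment-related work expenses can keep a person below the SGA limit even when gross earnings exceed it.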
Both the SSI and DI programs contain work incentive provisions designed to assist individuals with disabilities to achieve gainful employment while retaining some eligibility for health care coverage. Individuals receiving SSI also are assured eligibility for Medicaid in 39 states and the District of Columbia. The remaining 11 states (known as 209(b) states) may use different standards for disability, income, or assets; thus, SSI beneficiaries in these 11 states may not have assured eligibility for Medicaid. Work incentives under SSI allow individuals to (1) have their SSI cash benefits gradually reduced as earnings increase, rather than having cash benefits removed entirely once earnings exceed the SGA limit, and (2) maintain their Medicaid coverage up to an income limit that varies across the states (from $15,049 (170 percent of the FPL) in Arizona to $39,228 (443 percent of the FPL) in New Hampshire as of 2002). Individuals receiving DI also may become eligible for Medicaid under certain circumstances. By virtue of their DI disability determination, they meet one of the categorical eligibility requirements for Medicaid. However, they must also meet Medicaid’s income and asset requirements as defined by each state. DI beneficiaries can “spend down” their income on medical expenses in order to meet state-determined income limits for the medically needy eligibility category, if a state provides this optional coverage. While DI beneficiaries receive health care coverage through Medicare, eligibility for the medically needy category provides Medicaid-covered services that are not covered by Medicare, such as most outpatient prescription drugs. Work incentives under DI are structured such that if an individual’s work activity increases to a level where he or she is no longer deemed disabled, the individual loses DI eligibility, and in turn, Medicaid eligibility.
The Ticket to Work Medicaid Buy-In builds on an earlier effort to expand Medicaid eligibility for individuals with disabilities who desire to work. Through the Balanced Budget Act of 1997 (BBA) (Pub. L. No. 105-33, 111 Stat. 251), the Congress gave states the option of implementing a coverage category for working individuals with disabilities. For these individuals, the BBA authorized states to extend Medicaid coverage to those who meet the SSI definition of disability and exceed the SSI income eligibility limit but whose income remains under 250 percent of the FPL. States electing the BBA option may require beneficiaries to pay premiums or may use other cost-sharing provisions as long as they are set on a sliding scale based on income. As of December 2002, 12 states had implemented a BBA option for working individuals with disabilities. The Ticket to Work Medicaid Buy-In legislation expands the availability of Medicaid coverage for individuals with disabilities who desire to work by allowing them to gain or maintain Medicaid eligibility as they enter the workforce or to increase their earnings if they are in the workforce. The Ticket to Work Buy-In builds on the BBA option by giving states unlimited flexibility to set higher income and asset levels for two new eligibility groups—Basic Coverage Group and Medical Improvement Group—for working individuals with disabilities. (For a comparison of the two programs, see table 2.) The Basic Coverage Group allows states to cover people aged 16 to 64 who, except for the amount of their earned income, would be eligible to receive SSI benefits. States may establish their own income and asset standards or elect to have no standards at all. As with the BBA option, states electing the Basic Coverage Group may require participants to pay monthly premiums or may impose other cost-sharing mechanisms if they are set on an income-based sliding scale. 
However, for individuals with annual incomes less than 450 percent of the FPL, states may not impose premiums that exceed 7.5 percent of income. Additionally, if the individual’s adjusted gross income for federal income tax purposes exceeds $75,000, the state must require the individual to pay the highest amount of premiums that an individual would be required to pay under the state’s premium structure, although a state is allowed to subsidize this cost with its own funds. While the Basic Coverage Group Buy-In participants must have earnings, the Ticket to Work legislation does not specify a minimum level of employment for this group. Since states cannot adopt rules defining employment for this group that are more restrictive than those in federal law, states cannot establish requirements such as minimum earnings or hours worked. The Medical Improvement Group allows states to cover working individuals who lose Medicaid eligibility under the Basic Coverage Group because their conditions have improved to the point that they no longer meet the SSI definition of disability but still have “a severe, medically determinable impairment.” The same premium requirements apply as for the Basic Coverage Group. If a state elects to cover the Medical Improvement Group, it must also cover the Basic Coverage Group. While the Ticket to Work legislation does not set an employment standard for the Basic Coverage Group, it provides a definition and also allows a state to define employment for the Medical Improvement Group. According to the legislation, an individual qualifying for the Medical Improvement Group is considered employed if the individual is earning at least the minimum wage and working at least 40 hours per month. Alternatively, a state may use hours of work, wage levels, or other measures to define employment if the Secretary of Health and Human Services approves the definition. 
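The premium constraints described above can be expressed as a simple rule. The sketch below is a hedged illustration, not a statement of any state's actual policy: the FPL figure is an assumed 2002 poverty level for an individual, and the state premium schedule passed in is hypothetical.

```python
# Hedged sketch of the Ticket to Work premium rules described above.
# FPL_ANNUAL is an assumed 2002 federal poverty level for an individual;
# the highest_state_premium argument is a hypothetical state schedule value.

FPL_ANNUAL = 8_860           # assumed 2002 FPL for an individual
PREMIUM_CAP_RATE = 0.075     # premiums capped at 7.5 percent of income
HIGH_AGI_THRESHOLD = 75_000  # AGI above which the highest premium applies

def premium_ceiling(annual_income, adjusted_gross_income, highest_state_premium):
    """Maximum annual premium a state may charge a Buy-In participant."""
    if adjusted_gross_income > HIGH_AGI_THRESHOLD:
        # The state must charge its highest scheduled premium
        # (though it may subsidize the cost with its own funds).
        return highest_state_premium
    if annual_income < 4.5 * FPL_ANNUAL:  # below 450 percent of FPL
        return annual_income * PREMIUM_CAP_RATE
    return None  # the statute sets no percentage cap above 450% FPL

# Income of $20,000 (under 450% of FPL): premium capped at $1,500/year.
print(premium_ceiling(20_000, 20_000, 5_000))   # 1500.0
# AGI over $75,000: the highest scheduled premium applies.
print(premium_ceiling(100_000, 80_000, 5_000))  # 5000
```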
Compared with the rest of the working-age population, the estimated 6.7 million working-age individuals with disabilities nationwide were more likely not to be working, to have less education, and to have incomes below the FPL. Specifically, 82 percent of working-age individuals with disabilities, or about 5.5 million individuals, reported that they were not working. (See fig. 1.) Nearly three-fourths of working-age individuals with disabilities reported they had a high school education or less. Furthermore, these individuals were nearly three times more likely than individuals without disabilities to have incomes below the FPL. At the same time, individuals with disabilities were less likely to be uninsured compared with the rest of the working-age U.S. population, with just 9 percent of those with disabilities reporting being uninsured, compared with 15 percent for the rest of the working-age population. Nearly half of individuals with disabilities who reported having health insurance obtained coverage through public sources, such as Medicaid and Medicare. Working-age individuals with disabilities were far more likely to have public health coverage than working-age individuals in the general population. Specifically, working-age individuals with disabilities were about eight times more likely to have public health insurance coverage than other working-age individuals. Generally, the lower the income level, the more likely an individual with disabilities was to have public health insurance coverage. For example, 75 percent of individuals with disabilities who had incomes below the FPL had public health insurance, while fewer than 20 percent of those with incomes at or exceeding 400 percent of the FPL had public coverage. The extent of their health care costs underscores the need for individuals with disabilities to maintain some type of health insurance coverage to help cover the costs of their care.
Health care expenditures for working-age individuals with disabilities were about five times the expenditures for other working-age individuals, annually averaging about $7,600 and $1,500, respectively. The 12 states that opted to implement the Ticket to Work Medicaid Buy-In program as of December 2002 set income and asset levels for eligibility that provided new opportunities for working individuals with disabilities to secure and maintain Medicaid coverage. DI-eligible individuals benefited particularly because states’ broader eligibility categories under the Buy-In allowed individuals to become eligible for Medicaid without spending down their incomes and to remain eligible when their incomes rose to higher levels. In addition to expanding income eligibility and asset limits, all states took advantage of the flexibility of the statute to charge premiums or copayments to ensure that Buy-In participants shared in the cost of their health care coverage. Across the 12 states that opted to implement Ticket to Work Medicaid Buy-In programs, all set eligibility requirements that expanded eligibility for working individuals with higher incomes or more assets than usually allowed under their Medicaid programs. As of December 2002, the number of Buy-In participants for the 12 states totaled 24,258, ranging from 3 participants in Wyoming to almost 8,500 participants in Missouri. (See table 3.) Eleven of the 12 states set Buy-In eligibility limits for income at twice the FPL or higher—or $17,720 per year for an individual in 2002—thereby expanding opportunities for individuals to secure and maintain Medicaid coverage. Buy-In programs also allowed participants to retain more assets than usually allowed in states’ Medicaid programs. Of the 12 states, 7 states set asset limits that ranged from $10,000 to $30,000 for individuals, couples, or both.
Three states—Missouri, Indiana, and Arkansas—opted for asset requirements of $4,000 or less for an individual, while the remaining two states—Washington and Wyoming—imposed no asset limits. States generally allowed Ticket to Work participants to exclude certain assets from the asset limits. In addition to excluding the value of certain assets that applied to most individuals with disabilities in the Medicaid program when determining eligibility, 10 of the 12 states allowed Buy-In participants to save money in retirement accounts such as Individual Retirement Accounts, Keoghs, and 401(k)s; medical savings accounts; or special accounts that allow individuals to save for expenses such as modifications for job or home and education costs. These accounts are not considered when determining asset limits for participants. Two states—Arkansas and Indiana—set $10,000 and $20,000 limits, respectively, on the amount of savings participants can accumulate in these accounts. State officials in a few states said allowing participants to exclude these retirement accounts and other assets helped support states’ goals of affording working individuals with disabilities greater independence and self-sufficiency. For example, under these rules, participants can save to buy cars or homes and can set aside money for retirement. In most of the 12 states, the Buy-In programs were especially beneficial for DI-eligible individuals who, in contrast to most SSI individuals, were not always eligible for Medicaid coverage. Prior to the Ticket to Work legislation, DI individuals in 11 of the 12 states could qualify for Medicaid by spending down their incomes to specified levels (Wyoming did not offer a spend-down option). In these 11 states, the spend-down income eligibility levels ranged from 15 percent to 100 percent of the FPL. Under the new Buy-In programs, the income eligibility levels significantly exceeded those established under the spend-down categories (see fig.
2), thus allowing individuals to qualify for the Medicaid Buy-In directly—rather than spending down their incomes to qualify for Medicaid coverage. For example, an individual receiving DI in Arkansas could obtain Medicaid coverage through the Buy-In program with an income up to 250 percent of the FPL; prior to the Buy-In, the individual would have had to incur medical expenses that reduced his or her income to approximately 15 percent of the FPL in order to qualify for the spend-down category of Medicaid. This allows an individual with disabilities in Arkansas to maintain an income of up to $22,150 per year under the Buy-In, whereas that person would have had to spend down to an income of $1,300 a year to qualify for Medicaid. Buy-In programs afforded DI beneficiaries more immediate—and sometimes expanded—Medicaid coverage. In addition to relieving individuals of the requirement to spend down their income to qualify for Medicaid, DI individuals, who are not entitled to receive Medicare coverage until they have been receiving DI cash benefits for 24 months, also received more immediate health insurance coverage through the Medicaid Buy-In. Buy-In participants may also have access to a more expanded benefit package than individuals who receive Medicaid through a state’s medically needy program. However, when considering participation in the Buy-In program, DI beneficiaries must weigh the benefits of the higher earnings allowed under the program against the possible loss of DI cash benefits and Medicare coverage if their earnings increase beyond a certain threshold. Specifically, after a 9-month trial work period and a 36-month extended period of eligibility, if a DI beneficiary’s earnings increase over the SGA limit in any month, the individual loses DI eligibility entirely. Additionally, DI beneficiaries who earn more than the SGA level after the initial 9-month trial period could lose Medicare coverage after 8-1/2 years.
The loss of entitlement for Medicare may be of concern for those individuals with disabilities who would not reach age 65 by the end of the 8-1/2-year time period. To the extent that a state reduced its Medicaid Buy-In eligibility level, or discontinued its Buy-In program, these former DI-eligible Buy-In participants could potentially be without health care coverage until they reached age 65. In contrast, SSI beneficiaries have different considerations than those weighed by DI beneficiaries in deciding whether to enroll in the Medicaid Buy-In program. Most SSI beneficiaries were assured eligibility for Medicaid and thus did not need the Buy-In program or to spend down their incomes in order to qualify for Medicaid. SSI beneficiaries in Medicaid would receive the same benefit package as those in a Buy-In program. Even SSI beneficiaries who worked could remain eligible for Medicaid as participants in a work incentive program, which allowed individuals to increase their incomes while maintaining their Medicaid coverage. In 5 of the 12 states, Buy-In income eligibility levels were lower than the Medicaid eligibility levels for individuals in the SSI work incentive program, and Buy-In eligibility levels only slightly exceeded those for the SSI work incentive beneficiaries in another 5 states. Additionally, beneficiaries in SSI’s work incentive program are not subject to premium payments in Medicaid, while Buy-In programs generally have imposed premium requirements for participants. States may require Buy-In participants to share in the cost of their health care coverage. All 12 states adopted cost-sharing mechanisms, primarily premiums or copayments, for Buy-In participants. States calculated premiums for Buy-In participants using various methods. For example, Pennsylvania and Washington set premiums as a percentage of allowable income, while Indiana and Kansas established varying premium levels for different incomes. (See table 4.) 
Generally, states assessed premiums when income was at 100 percent of the FPL or higher. Among states that charged premiums in 2002, the percentage of participants whose incomes were high enough to be charged premiums varied significantly across the states, from 12 percent of participants in Connecticut to all or nearly all participants in Illinois, Pennsylvania, Washington, and Wyoming. Average monthly premiums ranged from $26 to $82, with nearly half of the states setting premiums from $40 to $60. Two states—Arkansas and New Jersey—did not charge premiums as of December 2002. Stating that premiums were difficult to administer and collect, Arkansas chose not to impose a premium requirement. New Jersey has a premium requirement for participants with incomes greater than 150 percent of FPL; however, the state did not assess premiums because only about 5 percent of beneficiaries owed a payment. Three states—Connecticut, Indiana, and New Hampshire—reported discounting the Buy-In premium if participants also paid premiums for Medicare part B, for employer-sponsored insurance coverage, or for individual insurance coverage. For example, New Hampshire deducted the Medicare part B premium from a participant’s total Buy-In premium. If a Buy-In participant were paying a Medicare part B premium of $54 a month, his or her Medicaid Buy-In premium would be discounted by that amount. Thus, if a participant’s Buy-In premium were $80 a month, the monthly premium for the Buy-In program would be discounted to $26 a month. In Connecticut, any amount that participants pay for Medicare part B premiums, employer-sponsored coverage, or other out-of-pocket medical insurance is deducted from their premium liability. For example, if a participant owes a Buy-In premium of $100 a month and also is paying an employer $80 a month for private coverage, the individual’s Buy-In premium liability would be reduced to $20. 
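The offset arrangement described for New Hampshire and Connecticut amounts to subtracting the premiums a participant already pays for other coverage from the Buy-In premium owed. A minimal sketch, using the dollar figures from the examples above (the function name is illustrative):

```python
# Sketch of the premium-offset approach described above for New
# Hampshire and Connecticut: premiums the participant already pays for
# other coverage (Medicare part B, employer-sponsored or individual
# insurance) are deducted from the monthly Buy-In premium owed.

def discounted_buyin_premium(buyin_premium, other_premiums):
    """Monthly Buy-In premium after offsets, never below zero."""
    return max(buyin_premium - sum(other_premiums), 0)

# New Hampshire example: an $80 Buy-In premium less the $54 Medicare
# part B premium leaves $26 a month.
print(discounted_buyin_premium(80, [54]))   # 26
# Connecticut example: a $100 premium less $80 paid toward
# employer-sponsored coverage leaves $20 a month.
print(discounted_buyin_premium(100, [80]))  # 20
```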
Participants in 8 of the 12 states also were required to pay copayments for health care services, such as $0.50 to $3 for an office visit or prescription drugs. Copayments for inpatient hospital care generally varied from $3 per day in Illinois to $48 per hospital stay in Kansas. In 7 of these states, copayments were the standard cost-sharing requirements for Medicaid. The remaining state—Arkansas—imposed a two-level copayment system for participants. Arkansas Buy-In participants with incomes below 100 percent of the FPL had the same copayment requirements and were charged the same amounts for pharmacy and inpatient hospital services as usually prescribed under the state’s Medicaid program. Participants with incomes of 100 percent of the FPL or greater were charged additional copayments for services and equipment such as physician services ($10 per visit), outpatient mental and behavioral health services ($10 per visit), and prosthetic devices (10 percent of the maximum Medicaid payment). In the four states in which we conducted more detailed work—Connecticut, Illinois, Minnesota, and New Jersey—Buy-In programs enrolled many individuals who previously were enrolled in Medicaid, often in eligibility categories with more restrictive income limits, such as the medically needy category. Buy-In participants in the four states generally also had Medicare coverage. Across the four states, few Buy-In participants had coverage from private insurance at the time of their enrollment in the Medicaid Buy-In programs. Based on the limited participation in private insurance, officials in several states did not believe that “crowd-out”—the substitution of newly available public coverage for private health insurance—was a concern for the Medicaid Buy-In programs.
The limited employment information available for participants from two of the four states—Connecticut and Minnesota—showed that Buy-In participants generally were employed in low-wage jobs—many making less than the SGA threshold, which at the time was $780 per month. These four states, however, had little information regarding the extent to which the Buy-In programs fostered employment among individuals with disabilities. Across these four states, the share of Buy-In participants with previous Medicaid coverage was 53 percent in Connecticut, 81 percent in Illinois, 61 percent in Minnesota, and 58 percent in New Jersey. Because previous Medicaid coverage was largely due to eligibility through spend-down provisions, Buy-In participation allowed these individuals to retain more of their income or assets and still qualify for Medicaid. Of those who switched from existing Medicaid coverage to the Buy-In program, Illinois and Minnesota estimated that 79 percent and 51 percent of participants, respectively, were beneficiaries who originally had spent down their income to qualify for Medicaid. While not offering a specific estimate, a New Jersey official indicated that most of the Buy-In participants who were enrolled in Medicaid before switching to the Buy-In category also had spent down their income to qualify for Medicaid. Buy-In eligibility was particularly beneficial for individuals in New Jersey because the state’s Medicaid coverage for medically needy beneficiaries did not include prescription drugs or community-based long-term care services, both of which were covered under the Buy-In. In three of the four states—Connecticut, Minnesota, and New Jersey—more than 80 percent of Buy-In participants also received health care coverage through Medicare. (See table 5.) State officials reported that those with Medicare relied on the Medicaid Buy-In for purposes of obtaining outpatient prescription drug coverage since Medicare generally does not cover this benefit.
Few participants—less than 10 percent of participants in any of the four states—reported having employer-sponsored coverage at the time of their enrollment into the Medicaid Buy-In programs. For example, Connecticut, which requires Buy-In applicants who have access to employer-sponsored insurance coverage to apply for this coverage, found that less than 6 percent of Buy-In applicants had health care coverage through their workplace. For Buy-In participants with private health insurance coverage, which often has more limited benefits than those covered by Medicaid, the Buy-In can serve as a “wrap around” to private coverage by providing such services as home health and personal care, and items such as durable medical equipment. According to officials in several states, crowd-out was not a concern for Buy-In programs because most participants did not report having private health insurance coverage at the time of their enrollment into the Medicaid Buy-In programs. For example, Minnesota and New Jersey state officials said they did not view crowd-out as a significant issue for this population because many of the participants worked part-time and were rarely offered private insurance coverage. Additionally, both Minnesota and Connecticut required individuals to either enroll or remain enrolled in employer-sponsored coverage if it was offered. As of December 2002, these states had not formally analyzed whether Buy-In participants withdrew from private health insurance coverage prior to obtaining Medicaid coverage. New Jersey officials plan to monitor whether employees are deciding to or are being urged to pursue the Buy-In program rather than their employer-sponsored coverage. In the three states with data available, working individuals with disabilities who qualified for the Medicaid Buy-In program generally worked in low-wage jobs. (See table 6.)
While one purpose of the Ticket to Work legislation was to enable individuals with disabilities to reduce their dependency on federal cash benefit programs through earnings from work, available data from Connecticut, Illinois, and Minnesota showed that few participants earned more than the SGA limit, which was $780 in December 2002. Sixty-four percent of participants in Connecticut, 61 percent of participants in Illinois, and 77 percent of participants in Minnesota had earned income well below the SGA limit. None of these states had asked participants to identify their occupation or the industry in which they were employed on their Medicaid Buy-In applications; however, some states may conduct broader analyses of participants’ employment as part of required evaluations under a related Ticket to Work grant program. Two of the four states we reviewed could identify whether participants had increased their earnings once enrolled in the Buy-In. Forty percent of Minnesota participants and 28 percent of Connecticut participants increased their earnings between the time of initial enrollment and December 2001, the most recent date for which these data were available. Average monthly increases over previous earnings were $306 in Minnesota and $332 in Connecticut. New Jersey and Illinois were not able to provide this information. Minnesota found that 64 percent of those in the state’s Buy-In program as of December 2001 earned wages for at least one 3-month period in the 2-year period prior to enrollment. Minnesota officials cautioned that the analysis was limited by the lack of detail in the state database; for example, they did not know whether participants were disabled during this entire period, or whether individuals were consistently employed. We provided a draft of this report for comment to CMS and the 12 states in our sample.
In its comments, CMS said that, in addition to the states with existing BBA and Ticket to Work Buy-In programs, at least three more states are planning to implement a Medicaid Buy-In program within the coming year, which would result in over half of the states offering health insurance to workers with disabilities. CMS noted that the expansion of Medicaid coverage to these individuals is encouraging particularly because states are experiencing fiscal budget constraints. CMS also said that it is collecting information on Medicaid Buy-In participants’ earnings and Medicaid costs for the first 2 years of operation. In addition, CMS expects to complete an extensive study of states’ experiences for 2001 and 2002 with the Buy-In programs authorized under both the BBA and the Ticket to Work and Work Incentives Improvement Act of 1999 in the fall of 2003 and to report its findings in 2004. CMS also suggested that, in view of general concerns over racial disparities and access to care in rural areas, it might be helpful for us to comment on these demographic factors as part of our findings. We did not include these factors in our scope of work, even for the four states where we did more detailed work, and therefore cannot comment on them. CMS provided technical comments, which we have incorporated as appropriate. The full text of CMS’s written comments appears in appendix II. Eleven of the 12 states responded with technical comments, which we incorporated where appropriate. We will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or members of your staffs have any questions regarding this report, please contact me on (202) 512-7118 or Carolyn Yocom at (202) 512-4931.
Other major contributors to this report were Catina Bradley, Karen Doran, Kevin Milne, and Elizabeth T. Morrison. To develop a national estimate and compare the characteristics of working-age individuals with disabilities with those for working-age individuals in the rest of the population, we analyzed data available from the Medical Expenditure Panel Survey (MEPS) household component, which provides data on individuals’ demographics, employment, health characteristics, and medical spending. MEPS, conducted by the Agency for Healthcare Research and Quality (AHRQ), consists of four surveys and is designed to provide nationally representative data on health care use and expenditures for U.S. civilian noninstitutionalized individuals. For our analysis, we used one of the four surveys—the Household Component. The Household Component is a survey of individuals regarding their demographic characteristics, health insurance coverage, and health care use and expenditures. The 1997 and 1998 versions of the MEPS Household Component were the most recently available at the time of our analysis that had both (1) a pooled estimation file published by AHRQ that allows pooling 2 or 3 years of data, and (2) complete demographic, health insurance, and health care expenditure data. We pooled data from 1997 and 1998 in order to increase our sample sizes for individuals with disabilities. Using the Medical Care Consumer Price Index from the Bureau of Labor Statistics, we inflated 1997 medical care expenditures to 1998 values. Our estimate of working-age individuals with disabilities includes individuals aged 16 to 64 with one or both of these conditions: (1) needing help or supervision in performing activities of daily living (ADL) or instrumental activities of daily living (IADL) because of an impairment or a physical or mental health problem or (2) being completely unable to work at a job, do housework, or go to school. 
Our analyses of working-age individuals with disabilities are based on a sample size of 1,680, representing a population of 6.68 million individuals with disabilities. Table 7 shows the unweighted and weighted sample sizes on which our analyses are based.
Over 7 million individuals with disabilities rely on medical and supportive services covered by Medicaid. However, if working-age individuals with disabilities desire to increase their self-sufficiency through employment, they could jeopardize their eligibility for Medicaid coverage, possibly leaving them without an alternative for health insurance. In an effort to help extend Medicaid coverage to certain individuals with disabilities who desire to work, Congress passed the Ticket to Work and Work Incentives Improvement Act of 1999. This legislation authorizes states to raise their Medicaid income and asset eligibility limits for individuals with disabilities who work. States may require that working individuals with disabilities "buy in" to the program by sharing in the costs of their coverage--thus, these states' programs are referred to as a Medicaid Buy-In. The act also required that GAO report on states' progress in designing and implementing the Medicaid Buy-In. GAO identified states that operated Buy-In programs as of December 2002 and analyzed the income eligibility limits and cost-sharing provisions established by those states. GAO also assessed the characteristics of the Buy-In participants in four states that were among the most experienced in implementing the program. As of December 2002, 12 states had implemented Medicaid Buy-In programs under the authority of the Ticket to Work legislation, which took effect October 1, 2000; together, these programs enrolled over 24,000 working individuals with disabilities. These states used the flexibility allowed by the legislation to raise income eligibility and asset limits and to charge cost-sharing fees. Across the 12 states, income eligibility levels ranged from 100 percent of the federal poverty level (FPL) in Wyoming to no income limit in Minnesota, with 11 states setting income eligibility limits at twice the FPL or higher. 
In addition to increasing income and asset levels, these states required participants to buy in to the program by charging premiums, ranging from $26 to $82 a month, and co-payments, generally ranging from $0.50 to $3 for office visits and prescription drugs. In a detailed analysis of four states--Connecticut, Illinois, Minnesota, and New Jersey--GAO found that most Buy-In participants had prior insurance coverage through Medicaid and Medicare, few had prior coverage through private health insurance, and many earned low wages--most making less than $800 per month. In commenting on a draft of this report, the Centers for Medicare & Medicaid Services noted that it expects to report in 2004 on its current study of states' experiences in 2001 and 2002 with the Medicaid Buy-In programs.
Funded at $8 billion to nearly $10 billion annually, MDA’s BMDS is the largest research and development program in the Department of Defense’s budget. Since the 1980s, DOD has spent more than $100 billion on the development and early fielding of this system, and it estimates that continued development will require an additional $50 billion between fiscal years 2008 and 2013. Since 2002, MDA has worked to fulfill its mission through its development and fielding of a diverse collection of land-, air-, sea-, and space-based assets. These assets are developed and fielded through nine BMDS elements and include the Airborne Laser (ABL); Aegis Ballistic Missile Defense (Aegis BMD); BMDS Sensors; Command, Control, Battle Management, and Communications (C2BMC); Ground-based Midcourse Defense (GMD); Kinetic Energy Interceptors (KEI); Multiple Kill Vehicles (MKV); Space Tracking and Surveillance System (STSS); and Terminal High Altitude Area Defense (THAAD). To develop a system capable of carrying out its mission, MDA, until December 2007, executed an acquisition strategy in which the development of missile defense capabilities was organized in 2-year increments known as blocks. Each block was intended to provide the BMDS with capabilities that enhanced the development and overall performance of the system. The first 2-year block, known as Block 2004, fielded a limited initial capability that included early versions of the GMD, Aegis BMD, Patriot Advanced Capability-3, and C2BMC elements. The agency’s second 2-year block, Block 2006, culminated on December 31, 2007, and fielded additional BMDS assets. This block also provided improved GMD interceptors, enhanced Aegis BMD missiles, upgraded Aegis BMD ships, a Forward-Based X-Band Transportable radar, and enhancements to C2BMC software. On December 7, 2007, MDA’s Director approved a new block construct that will be the basis for all future development and fielding, which I will discuss in more detail shortly. 
To assess progress during Block 2006, we examined the accomplishments of nine BMDS elements that MDA is developing and fielding. Our work included examining documents such as Program Execution Reviews, test plans and reports, production plans, and Contract Performance Reports. We also interviewed officials within each element program office and within MDA functional directorates. In addition, we discussed each element’s test program and its results with DOD’s Office of the Director, Operational Test and Evaluation. In following up on transparency, accountability, and oversight issues raised in our March 2007 report, we held discussions with officials in MDA’s Directorate of Business Operations to determine whether its new block structure improved accountability and transparency of the BMDS. In addition, we reviewed pertinent sections of the U.S. Code to compare MDA’s current level of accountability with federal acquisition laws. We also interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and DOD’s Joint Staff to discuss the oversight role of the new Missile Defense Executive Board (MDEB). Additionally, we reviewed the MDEB charter to identify the oversight responsibility of the board. MDA made progress in developing and fielding the BMDS during 2007. Additional assets were fielded and/or upgraded, several tests met planned objectives, and other development activities were conducted. On the other hand, fewer assets were fielded than originally planned, some tests were delayed, and the cost of the block increased by approximately $1 billion. To stay within the revised budget despite increasing contractor costs, MDA deferred some budgeted work to future blocks. Such deferrals, coupled with a planning methodology used by some contractors that could obscure cost reporting, prevent us from determining the full cost of Block 2006. 
MDA was able to meet most test objectives despite delays in several elements’ test schedules. Neither we nor DOD could evaluate the aggregate performance of fielded assets because flight testing to date has not generated sufficient data. An evaluation of aggregate performance would also have to consider that (1) some parts in fielded interceptors identified as potentially problematic have not yet been replaced, and (2) tests done to date have been developmental in nature and do not provide sufficient realism for DOD to determine if the BMDS is suitable and effective for battle. During Block 2006, MDA increased its inventory of BMDS assets while enhancing the system’s performance. It fielded 14 additional Ground-based interceptors, 12 Aegis BMD missiles designed to engage more advanced threats, 4 new Aegis BMD destroyers, 1 new Aegis BMD cruiser, and 8 Web browsers and 1 software suite for C2BMC. In addition, MDA upgraded half of its Aegis BMD ship fleet, successfully conducted four Aegis BMD and two GMD intercept tests, and completed a number of ground tests to demonstrate the capability of BMDS components. Although MDA fielded an increased capability, it was unable to deliver all assets originally planned for Block 2006. The Sensors element was the only Block 2006 element to meet all of its original goals set in March 2005, while the remaining elements––GMD, Aegis BMD, and C2BMC––were unable to meet all of their original quantity goals. Sensors delivered a second FBX-T in January 2007, while the GMD element fielded 14 of 15 Ground-based interceptors originally planned during Block 2006. Last year, we reported that MDA delayed the partial upgrade of the Thule early warning radar––one of GMD’s original goals––until a full upgrade could be accomplished. 
Additionally, the Aegis BMD element delivered 4 additional destroyers and 1 new cruiser as originally planned, but did not meet its original goal for missile deliveries––delivering 12 of 19 SM-3 missiles planned for the block. C2BMC also did not deliver two of the three software suites originally planned for Block 2006. MDA’s Block 2006 program of work culminated with higher-than-anticipated costs. In March 2007, we reported that MDA’s cost goal for Block 2006 increased by approximately $1 billion because of greater-than-expected GMD operations and sustainment costs and technical problems. If the contractors continue to perform as they did in fiscal year 2007, we estimate that at completion, the cumulative overrun in the contracts could be between about $1.9 billion and $2.8 billion. To stay within its revised budget, MDA deferred some work it expected to accomplish during the block. When work is deferred, its costs are no longer accounted for in the original block. In other words, if work planned and budgeted for Block 2006 was deferred to Block 2008, that work would be counted as a Block 2008 cost. Because MDA did not track the cost of the deferred work, the agency could not make an adjustment that would have matched the cost with the correct block. Consequently, we were unable to determine the full cost of Block 2006. Another reason why it is difficult to determine the actual cost of Block 2006 is a planning methodology employed by MDA prime contractors that can obscure the full cost of work. Contractors typically divide the total work of a contract into small efforts in order to define them more clearly and to ensure proper oversight. Work is planned into types of work packages, including (1) level of effort––work that contains tasks of a general or supportive nature and does not produce a definite end product––and (2) discrete––work that has a definable end product or event. 
When work is discrete, delivery of the end product provides a sound basis for determining actual contractor performance. When discrete work is instead planned as level of effort, the contractor’s performance becomes less transparent because work is considered complete when the time planned for it has expired, whether or not the intended product has been completed. Earned value management does not recognize such variances in completing scheduled work, and to the extent more work has to be done to complete the product, additional costs could be incurred that are not yet recognized. Many of MDA’s prime contractors plan a large percentage of their work as level of effort. MDA officials agree that its contractors have improperly planned discrete work as level of effort, and they are taking steps to remedy the situation. We also observed that while several contractors had difficulty controlling costs during fiscal year 2007, MDA awarded approximately 90 percent, or $579 million, of available award fee to its prime contractors. In particular, contractors developing the ABL and Aegis BMD Weapon System were rated as performing very well in the cost and/or program management elements and received commensurate fees, even though earned value management data showed that their cost and schedule performance was declining. Although DOD guidance discourages the use of earned value performance metrics in award fee criteria, MDA includes this metric––one of many factors considered in rating contractors’ performance––in several of its award fee plans. The agency recognizes that there is not always a good link between its intentions for award fees and the amount of fee being earned by its contractors. In an effort to rectify this problem, the agency has begun to revise its award fee policy to align agency practices more closely with DOD’s current policy that better links performance with award fees. 
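The earned value arithmetic behind overrun projections of this kind is standard: a cost performance index (CPI) below 1.0 means completed work cost more than budgeted, and dividing the total budget by the CPI extrapolates the cost at completion. A minimal sketch, using illustrative figures rather than actual MDA contract data, could look like this:

```python
# Standard earned value management (EVM) metrics. All dollar figures are
# illustrative examples, not actual MDA contract data.

def cost_performance_index(earned_value, actual_cost):
    """CPI = EV / AC; below 1.0, completed work cost more than planned."""
    return earned_value / actual_cost

def estimate_at_completion(budget_at_completion, cpi):
    """A common EAC extrapolation: assume cost efficiency to date persists."""
    return budget_at_completion / cpi

# Illustrative contract: $1,000M total budget, $400M of work earned,
# $450M actually spent so far.
cpi = cost_performance_index(earned_value=400.0, actual_cost=450.0)
eac = estimate_at_completion(budget_at_completion=1000.0, cpi=cpi)
projected_overrun = eac - 1000.0  # positive value => projected cost overrun

# Caveat from the discussion above: when work is planned as level of effort,
# earned value accrues with the passage of time, so CPI can look healthy
# even though the intended product is unfinished.
```

As the level-of-effort caveat suggests, an extrapolation like this is only as sound as the earned value feeding it, which is why planning discrete work as level of effort can obscure the true cost of a contract.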
Most test objectives were achieved during 2007, although several BMDS programs experienced setbacks in their test schedules. The MKV, KEI, and Sensors elements were able to execute all scheduled activities as planned. The Aegis BMD, THAAD, ABL, STSS, and C2BMC elements experienced test delays, but all were able to achieve their primary test objectives. GMD successfully completed an intercept with an operationally representative interceptor and a radar characterization test. A second intercept test employing the SBX radar has been delayed because a target malfunction delayed the execution of the first intercept test. The SBX capability is important because the radar is a primary sensor to be used to engage ballistic missiles in the midcourse phase of flight. To date, this capability has not been verified through flight testing. As we reported in March 2007, MDA altered its original Block 2006 performance goals commensurate with the agency’s reductions in the delivery of fielded assets. For several reasons, information is not sufficient to assess whether MDA achieved its revised performance goals. First, MDA uses a combination of simulations and flight tests to determine whether performance goals are met. However, too few flight tests have been completed to ensure the accuracy of the models’ and simulations’ predictions. Second, confidence in the performance of the BMDS is reduced because of unresolved technical and quality issues in the GMD element. For example, the GMD element has experienced the same anomaly during each of its flight tests since 2001. This anomaly has not yet prevented the program from achieving any of its primary test objectives, but to date neither its source nor its solution has been clearly identified. Program officials plan to continue their assessment of test data to determine the anomaly’s root cause. 
The performance of some fielded GMD assets is also questionable because they contain parts that auditors in MDA’s Office of Quality, Safety, and Mission Assurance identified as less reliable or inappropriate for use in space and that have not yet been replaced. MDA has begun to replace the questionable parts in the manufacturing process and to purchase the parts for retrofit into fielded interceptors. However, it will not complete the retrofit effort until 2012. Finally, tests of the GMD element have been of a developmental nature and have not included operationally representative test geometries in which GMD will perform its mission. MDA has added operational test objectives to its developmental test program, but the objectives are mostly aimed at proving that military personnel can operate the equipment. The lack of data has limited the Director of Operational Test and Evaluation’s annual BMDS assessment to commenting on aspects of tests that were operationally realistic and thus has prevented the Director from determining whether the system is suitable and effective for the battlefield. Since its initiation in 2002, MDA has been given a significant amount of flexibility. While this flexibility allows agile decision making, it lessens the transparency of MDA’s acquisition processes, making it difficult to conduct oversight and hold the agency accountable for its planned outcomes and costs. As we reported in March 2007, MDA operates with considerable autonomy to change goals and plans, which makes it difficult to reconcile outcomes with original expectations and to determine the actual cost of each block and of individual operational assets. In the past year, MDA has begun implementing two initiatives—a new block construct and a new executive board—to improve transparency, accountability, and oversight. These initiatives represent improvements over current practices, although we see additional improvements MDA can make. 
In addition, Congress has directed that MDA begin buying certain assets with procurement funds like other programs, which should promote accountability for and transparency of the BMDS. In 2007, MDA redefined its block construct to better communicate its plans and goals to Congress. The agency’s new construct is based on fielding capabilities that address particular threats as opposed to the previous biennial time periods. MDA’s new block construct makes many positive changes. These include establishing unit cost for selected block assets, incorporating into a block only those elements or components that will be fielded during the block, and abandoning the practice of deferring work from block to block. These changes should improve the transparency of the BMDS program and make MDA more accountable for the investment being made in missile defense. For example, the actual cost of each block can be tracked because MDA will no longer defer work planned for one block, along with its cost, to a future block. In addition, MDA plans to develop unit cost for selected BMDS assets––such as THAAD interceptors––so that cost growth of those assets can be monitored. The agency also plans to request an independent verification of these unit costs and to report significant cost growth to Congress. However, MDA has not yet determined all of the assets that will report a unit cost or how much a unit cost must increase before it is reported to Congress. Although improvements are inherent in MDA’s proposed block construct, the new construct does not resolve all transparency and accountability issues. For example, MDA officials told us that the agency does not plan to estimate the full cost of a block. Instead, the cost baseline reported to Congress will include all prior costs of the block and the expected budget for the block for the 6 years included in DOD’s Future Years Defense Plan. Costs beyond the 6th year of the plan will not be estimated. 
Once a block is baselined, if its budget changes, MDA plans to report and explain those variations to Congress. Because the full cost of each block will not be known, it will be difficult for decision makers to compare the value of investing in each block to the value of investing in other DOD programs or to determine whether a block is affordable over the long term. Other DOD programs are required to provide the full cost estimate of developing and producing their weapon system, even if the costs extend beyond the Future Years Defense Plan. Another issue yet to be addressed is whether the concurrent development and fielding of BMDS assets will continue. Fully developing an asset and demonstrating its capability prior to production increases the likelihood that the product will perform as designed and can be produced at the cost estimated. To field an initial capability quickly, MDA accepted the risk of concurrent development and fielding during Block 2004. It continued to do so during Block 2006 as it fielded assets before they were fully tested. For example, by the end of Block 2004, the agency realized that the performance of some ground-based interceptors could be degraded because the interceptors included inappropriate or potentially unreliable parts. As noted earlier, MDA has begun the process of retrofitting these interceptors, but work will not be completed until 2012. Meanwhile, there is a risk that some interceptors might not perform as designed. MDA has not addressed whether it will accept similar performance risks under its new block construct or whether it will fully develop and demonstrate all elements/components prior to fielding. Nor has MDA addressed whether it will transfer assets produced during a block to a military service for production and operation at the block’s completion. Officials representing multiple DOD organizations recognize that transfer criteria are neither complete nor clear given the BMDS’s complexity. 
Without clear transfer criteria, MDA has transferred the management of only one element—the Patriot Advanced Capability-3—to the military for production and operation. For other elements, MDA and the military services have been negotiating the transition of responsibilities for the sustainment of fielded elements—a task that has proven to be time consuming. Although MDA documents show that under its new block construct the agency should be ready to deliver BMDS components that are fully mission-capable, MDA officials could not tell us whether at the end of a block MDA’s Director will recommend when management of components, including production responsibilities, will be transferred to the military. Oversight improvement initiatives are also underway for MDA. In March 2007, the Deputy Secretary of Defense established a Missile Defense Executive Board (MDEB) to recommend and oversee implementation of strategic policies and plans, program priorities, and investment options for protecting the United States and its allies from missile attacks. The MDEB is also to replace existing groups and structures, such as the Missile Defense Support Group. The MDEB appears to be vested with more authority than the Missile Defense Support Group. When the Support Group was chartered in 2002, it was to provide constructive advice to MDA’s Director. However, the Director was not required to follow the advice of the group. According to a DOD official, although the Support Group met many times initially, it did not meet after June 2005. This led to the formation of the MDEB. Its mission is to review and make recommendations on MDA’s comprehensive acquisition strategy to the Deputy Secretary of Defense. 
It is also to provide the Under Secretary of Defense for Acquisition, Technology and Logistics with a recommended strategic program plan and a feasible funding strategy based on business case analysis that considers the best approach to fielding integrated missile defense capabilities in support of joint MDA and warfighter objectives. The MDEB will be assisted by four standing committees. These committees, which are chaired by senior-level officials from the Office of the Secretary of Defense and the Joint Staff, could play an important oversight role as they are expected to make recommendations to the MDEB, which, in turn, will recommend courses of action to the Under Secretary of Defense and the Director, MDA, as appropriate. Although the MDEB is expected to exercise some oversight of MDA, it will not have access to all the information normally available to DOD oversight bodies. For other major defense acquisition programs, the Defense Acquisition Board has access to critical information because before a program can enter the System Development and Demonstration phase of the acquisition cycle, statute requires that certain information be developed. However, in 2002, the Secretary of Defense deferred application of DOD policies that, among other things, require major defense programs to obtain approval before advancing from one phase of the acquisition cycle to another. Because MDA does not yet follow this cycle, and has not yet entered System Development and Demonstration, it has not triggered certain statutes requiring the development of information that the Defense Acquisition Board uses to inform its decisions. For example, most major defense acquisition programs are required by statute to obtain an independent verification of life-cycle cost estimates prior to beginning system development and demonstration, and/or production and deployment. Independent life-cycle cost estimates provide confidence that a program is executable within estimated cost. 
Although MDA plans to develop unit cost for selected block assets and to request that DOD’s Cost Analysis Improvement Group verify the unit costs, the agency does not yet plan to do so for a block cost estimate. Statute also requires an independent verification of a system’s suitability for and effectiveness on the battlefield through operational testing before a program can proceed beyond low-rate initial production. After testing is completed, the Director for Operational Test and Evaluation assesses whether the test was adequate to support an evaluation of the system’s suitability and effectiveness for the battlefield, whether the test showed the system to be acceptable, and whether any limitations in suitability and effectiveness were noted. However, a comparable assessment of the BMDS assets being fielded will not be available to the MDEB as MDA conducts primarily developmental tests of its assets with some operational test objectives. As noted earlier, developmental tests do not provide sufficient data for operational test officials to make such an assessment of BMDS. MDA will also make some decisions without needing approval from the MDEB or any higher level official. Although the charter of the MDEB includes the mission to make recommendations to MDA and the Under Secretary of Defense for Acquisition, Technology and Logistics on investment options, program priorities, and MDA’s strategy for developing and fielding an operational missile defense capability, the MDEB will not necessarily have the opportunity to review and recommend changes to BMDS blocks. MDA documents show that the agency plans to continue to define each block of development without requiring input from the MDEB. According to a briefing on the business rules and processes for MDA’s new block structure, the decision to initiate a new block of BMDS capability will be made by MDA’s Director. 
Also, cost, schedule, and performance parameters will be established by MDA when technologies that the block depends upon are mature, a credible cost estimate can be developed, funding is available, and the threat is both imminent and severe. The Director will inform the MDEB as well as Congress when a new block is initiated, but he will not seek the approval of either. Finally, there will be parts of the BMDS program that the MDEB will have difficulty overseeing because of the nature of the work being performed. MDA plans to place any program that is developing technology in a category known as Capability Development. These programs, such as ABL, KEI, and MKV, will not have a firm cost, schedule, or performance baseline. This is generally true for technology development programs in DOD because they are in a period of discovery, which makes schedule and cost difficult to estimate. On the other hand, the scale of the technology development in BMDS is unusually large, ranging from $2 billion to about $5 billion a year—eventually comprising nearly half of MDA’s budget by fiscal year 2012. The MDEB will have access to the budgets planned for these programs over the next five or six years, each program’s focus, and whether the technology is meeting short-term key events or knowledge points. But without some kind of baseline for gauging progress in these programs, the MDEB will not know how much more time or money will be needed to complete technology maturation. MDA’s experience with the ABL program provides a good example of the difficulty in estimating the cost and schedule of technology development. In 1996, the ABL program believed that all ABL technology could be demonstrated by 2001 at a cost of about $1 billion. However, MDA now projects that this technology will not be demonstrated until 2009 and its cost has grown to over $5 billion. 
In an effort to further improve the transparency of MDA’s acquisition processes, Congress has directed that MDA’s budget materials delineate among funds needed for research, development, test and evaluation; procurement; operations and maintenance; and military construction. Congress gave MDA the flexibility to field certain assets using research, development, test and evaluation funding, which allowed MDA to fund the purchase of assets over multiple years. Congress recently restricted MDA’s authority and required MDA to purchase certain assets with procurement funds. Using procurement funds will mean that MDA will be required to ensure that assets are fully funded in the year of their purchase, rather than incrementally funded over several years. Additionally, our analysis of MDA data shows that incremental funding is usually more expensive than full funding, in part because inflation decreases the buying power of the dollar each year. For example, after reviewing MDA’s incremental funding plan for THAAD fire units and Aegis BMD missiles, we analyzed the effect of fully funding these assets and found that the agency could save about $125 million by fully funding their purchase and purchasing them in an economical manner. Our annual report on missile defense is in draft and with DOD for comment. It will be issued in final form by March 15, 2008. In that report, we are recommending additional steps that could build on efforts to further improve the transparency, accountability, and oversight of the missile defense program. Our recommendations include actions needed to improve cost reporting as well as testing and evaluation. DOD is in the process of preparing a formal response to the report and its recommendations. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or members of the subcommittee may have. For questions about this statement, please contact me at (202) 512-4841 or Francisp@gao.gov. 
Individuals making key contributions to this statement include David Best, Assistant Director; LaTonya D. Miller; Steven B. Stern; Meredith Allen Kimmett; Kenneth E. Patton; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Funded at $8 billion to $10 billion per year, the Missile Defense Agency's (MDA) effort to develop and field a Ballistic Missile Defense System (BMDS) is the largest research and development program in the Department of Defense (DOD). The program has been managed in 2-year increments, known as blocks. Block 2006, the second BMDS block, was completed in December 2007. By law, GAO annually assesses MDA's progress. This testimony is based on GAO's assessment of MDA's progress in (1) meeting Block 2006 goals for fielding assets, completing work within estimated cost, conducting tests, and demonstrating the performance of the overall system in the field, and (2) making managerial improvements to transparency, accountability, and oversight. In conducting the assessment, GAO reviewed the assets fielded; contractor cost, schedule, and performance; and tests completed during 2007. GAO also reviewed pertinent sections of the U.S. Code, acquisition policy, and the charter of a new missile defense board. We have previously made recommendations to improve oversight in the areas in which MDA has recently taken action. We also have a draft report that is currently with DOD for comment that includes additional recommendations. In the past year, MDA has fielded additional and new assets, enhanced the capability of some existing assets, and achieved most test objectives. However, MDA did not meet the goals it originally set for the block. Ultimately, MDA fielded fewer assets, increased costs by about $1 billion, and conducted fewer tests. Even with the cost increase, MDA deferred work to keep costs from increasing further, as some contractors overran their fiscal year 2007 budgets. Deferring work obscures the cost of the block because such work is no longer counted as part of Block 2006. The cost of the block may have been further obscured by a method of planning work used by several contractors that could overstate how much work has actually been completed. 
If more work has to be done, MDA could incur additional costs that are not yet recognized. MDA also sets goals for determining the overall performance of the BMDS. Similar to other DOD programs, MDA uses models and simulations to predict BMDS performance. We were unable to assess whether MDA met its overall performance goal because there have not been enough flight tests to provide high confidence that the models and simulations accurately predict BMDS performance. Moreover, the tests done to date have been developmental in nature and do not provide sufficient realism for DOD's test and evaluation director to determine whether BMDS is suitable and effective for battle. GAO has previously reported that MDA has been given unprecedented funding and decision-making flexibility. While this flexibility has expedited BMDS fielding, it has also made MDA less accountable and transparent in its decisions than other major programs, making oversight more challenging. MDA, with some direction from Congress, has taken significant steps to address these concerns. MDA implemented a new way of defining blocks--its construct for developing and fielding BMDS increments--that should make costs more transparent. For example, under the newly defined blocks, MDA will no longer defer work from one block to another. Accountability should also be improved as MDA will for the first time estimate unit costs for selected assets and report variances from those estimates. DOD also chartered a new executive board with more BMDS oversight responsibility than its predecessor. Finally, MDA will begin buying certain assets with procurement funds, as other programs do. This will benefit transparency and accountability, because using procurement funding generally means that assets must be fully paid for in the year they are bought. Previously, MDA was able to pay for assets incrementally using research and development funds. Some oversight concerns remain, however.
For example, MDA does not plan to estimate the total cost of a block, nor to have a block's costs independently verified--actions required of other programs to inform decisions about affordability and investment choices. Also, the executive board faces a challenge in overseeing MDA's large technology development efforts and does not have approval authority for some key decisions made by MDA.
FDA is responsible for overseeing the safety and effectiveness of human drugs that are marketed in the United States, whether they are manufactured in foreign or domestic establishments. Foreign establishments that market their drugs in the United States must register with FDA. As part of its efforts to ensure the safety and quality of imported drugs, FDA may inspect foreign establishments whose products are imported into the United States. Regular inspections of manufacturing establishments are an essential component of ensuring drug safety. Testing of finished dosage form drug products alone cannot reliably determine drug quality. Therefore, FDA relies on inspections to determine an establishment’s compliance with current good manufacturing practice regulations (GMP). These inspections are a critical mechanism in FDA’s process of assuring that the safety and quality of drugs are not jeopardized by poor manufacturing practices. Requirements governing foreign and domestic inspections differ. Specifically, FDA is required to inspect every 2 years those domestic establishments that manufacture drugs marketed in the United States, but there is no comparable requirement for inspecting foreign establishments. FDA does not have authority to require foreign establishments to allow the agency to inspect their facilities. However, FDA has the authority to conduct physical examinations of products offered for import, and, if there is sufficient evidence of a violation, to prevent their entry at the border. Within FDA, CDER sets standards and evaluates the safety and effectiveness of prescription and over-the-counter drugs. Among other things, CDER requests that ORA inspect both foreign and domestic establishments to ensure that drugs are produced in conformance with federal statutes and regulations, including current GMPs.
CDER requests that ORA conduct inspections of establishments that produce drugs in finished dosage form as well as those that produce bulk drug substances, including APIs used in finished drug products. These inspections are performed by investigators and, on occasion, laboratory analysts. ORA conducts two primary types of drug manufacturing establishment inspections: Preapproval inspections of domestic and foreign establishments are conducted before FDA will approve a new drug to be marketed in the United States. These inspections occur following FDA’s receipt of a new drug application (NDA) or an abbreviated new drug application (ANDA) and focus on the manufacture of a specific drug. Preapproval inspections are designed to verify the accuracy and authenticity of the data contained in these applications and to determine that the manufacturer is following commitments made in the application. FDA also determines that the manufacturer of the finished drug product, as well as each manufacturer of a bulk drug substance used in the finished product, manufactures, processes, packs, and labels the drug adequately to preserve its identity, strength, quality, and purity. Postapproval GMP surveillance inspections are conducted to ensure ongoing compliance with the laws and regulations pertaining to the manufacturing processes used by domestic and foreign establishments in the manufacture of drug products marketed in the United States and bulk drug substances used in the manufacture of those products. These inspections focus on a manufacturer’s systemwide controls for ensuring that drug products are of high quality. Systems examined during these inspections include those related to materials, quality control, production, facilities and equipment, packaging and labeling, and laboratory controls. These systems may be involved in the manufacture of multiple drug products.
FDA has established arrangements with regulatory bodies in other countries to facilitate the sharing of information about drug inspections. FDA has entered into arrangements related to GMP inspections with Canada, Japan, the European Union, and others. The scope of such arrangements can vary. Some arrangements may allow FDA to obtain reports of inspections conducted by other countries, for informational purposes. Other arrangements may involve more than the exchange of information. For example, FDA and another country may enter into an arrangement to work towards the mutual recognition of each other’s inspection standards or the acceptance of one another’s inspections, in lieu of their own. CDER uses a risk-based process to select some foreign and domestic establishments for postapproval GMP surveillance inspections. The process uses a risk-based model to identify those establishments that, based on characteristics of the establishment and of the product being manufactured, have the greatest public health risk potential should they experience a manufacturing defect. For example, FDA considers the risk to public health from poor-quality over-the-counter drugs to be lower than for prescription drugs. Consequently, establishments manufacturing only over-the-counter drugs receive a lower score on this factor in the risk-based process than other manufacturers. Through this process, CDER annually prepares a prioritized list of domestic establishments and a separate, prioritized list of foreign establishments. FDA uses multiple databases to manage its foreign drug inspection program. DRLS contains information on foreign and domestic drug establishments that have registered with FDA to market their drugs in the United States. These establishments must also list any drugs they market in the United States.
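The risk-based selection logic described above can be sketched in miniature. This is a hypothetical illustration only: FDA's actual model, factors, and weights are not described in this statement, so everything below (the scoring function, the weights, and the establishments) is invented. The only details taken from the statement are that OTC-only manufacturers score lower on the product factor and that CDER prepares separate prioritized lists for domestic and foreign establishments.

```python
# Hypothetical sketch of a risk-based prioritization like the one CDER uses to
# rank establishments for GMP surveillance inspections. All factors and
# weights here are invented for illustration; FDA's real model is not public.

def risk_score(est):
    score = 0
    # Product factor: FDA considers poor-quality OTC drugs a lower public
    # health risk than prescription drugs, so OTC-only manufacturers score
    # lower on this factor (hypothetical weights of 1 vs. 3).
    score += 1 if est["otc_only"] else 3
    # Establishment factor, e.g. time since the last inspection
    # (hypothetical: longer gaps raise the score, capped at 10).
    score += min(est["years_since_inspection"], 10)
    return score

establishments = [
    {"name": "A", "otc_only": True,  "years_since_inspection": 2, "foreign": False},
    {"name": "B", "otc_only": False, "years_since_inspection": 6, "foreign": True},
    {"name": "C", "otc_only": False, "years_since_inspection": 1, "foreign": True},
]

# CDER prepares a separate prioritized list for foreign establishments.
foreign_list = sorted((e for e in establishments if e["foreign"]),
                      key=risk_score, reverse=True)
print([e["name"] for e in foreign_list])  # ['B', 'C']
```

Under these invented weights, the long-uninspected prescription-drug manufacturer B outranks the recently inspected C, which is the qualitative behavior the statement attributes to the process.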
These establishments provide information, such as company name and address and the drug products they manufacture for commercial distribution in the United States, on paper forms, which are entered into DRLS by FDA staff. OASIS contains information on drugs and other FDA-regulated products offered for entry into the United States, including information on the establishment that manufactured the drug. The information in OASIS is automatically generated from data managed by Customs and Border Protection (CBP). The data are originally entered by customs brokers based on the information available from the importer. CBP specifies an algorithm by which customs brokers generate a manufacturer identification number from information about an establishment’s name, address, and location. FACTS contains information on FDA’s inspections of foreign and domestic drug establishments. FDA investigators and laboratory analysts enter information into FACTS following completion of an inspection. According to DRLS, in fiscal year 2007, foreign countries that had the largest number of registered establishments were Canada, China, France, Germany, India, Italy, Japan, and the United Kingdom. These countries are also listed in OASIS as having the largest number of manufacturers offering drugs for entry into the United States. Specifically, according to OASIS, China had more establishments manufacturing drugs that were offered for entry into the United States than any other country. According to OASIS, in fiscal year 2007, a wide variety of prescription and over-the-counter drug products manufactured in China were offered for entry into the United States, including painkillers, antibiotics, blood thinners, and hormones. In November 2007, we testified on preliminary findings that identified weaknesses in FDA’s program for inspecting foreign establishments manufacturing drugs for the U.S. market.
Specifically, we found that, as in 1998, FDA’s effectiveness in managing the foreign drug inspection program continued to be hindered by weaknesses in its data on foreign establishments. FDA did not know how many foreign establishments were subject to inspection. FDA relied on databases that were designed for purposes other than managing the foreign drug inspection program. Further, these databases contained inaccuracies that FDA could not easily reconcile. DRLS indicated there were about 3,000 foreign establishments registered with FDA in fiscal year 2007, while OASIS indicated that about 6,800 foreign establishments actually offered drugs for entry in that year. FDA recognized these inconsistencies, but could not easily correct them partly because the databases could not exchange information. Any comparisons of the data had to be performed manually, on a case-by-case basis. We also testified that FDA inspected relatively few foreign establishments. Data from FDA suggested that the agency may inspect about 8 percent of foreign establishments in a given year. At this rate, it would take FDA more than 13 years to inspect each foreign establishment once, assuming that no additional establishments require inspection. However, FDA could not provide an exact number of foreign establishments that had never been inspected. From fiscal year 2002 through fiscal year 2007, FDA conducted 1,479 inspections of foreign establishments, and three-quarters of these inspections were concentrated in 10 countries. (See table 1.) Because some establishments were inspected more than once during this time period, FDA actually inspected 1,119 unique establishments. For example, of the 94 inspections that FDA conducted of Chinese establishments, it inspected 80 unique establishments across this six-year period.
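As a rough cross-check, the rate and cost figures quoted in this statement can be reproduced from these counts. One assumption is needed: the statement does not say which establishment count underlies the "about 8 percent" estimate, so the sketch below uses the 3,249 establishments on FDA's fiscal year 2007 prioritized list and FDA's estimate of $41,000 to $44,000 per foreign inspection, both figures cited later in this testimony.

```python
# Back-of-the-envelope checks of the inspection-rate and cost figures in this
# statement. The 3,249-establishment list and the $41,000-$44,000 per-inspection
# cost are cited later in the testimony; pairing them with the "8 percent"
# estimate here is our assumption.

inspections_fy02_fy07 = 1479           # foreign inspections, FY2002-FY2007
years = 6
foreign_establishments = 3249          # FY2007 prioritized list (cited later)

per_year = inspections_fy02_fy07 / years                   # ~246 per year
annual_rate = per_year / foreign_establishments            # ~7.6%, "about 8 percent"
years_to_inspect_all = foreign_establishments / per_year   # ~13.2, "more than 13 years"
china_per_year = 80 / years                                # ~13.3, "fewer than 14 per year"

# Cost of biennial coverage (inspecting half the establishments each year), at
# FDA's FY2007 estimate of $41,000 to $44,000 per foreign inspection:
annual_cost_m = [foreign_establishments / 2 * c / 1e6 for c in (41_000, 44_000)]
china_cost_m = [714 / 2 * c / 1e6 for c in (41_000, 44_000)]

print(round(annual_rate * 100, 1), round(years_to_inspect_all, 1))  # 7.6 13.2
print([round(x) for x in annual_cost_m])   # [67, 71] -> "$67 million to $71 million"
print([round(x) for x in china_cost_m])    # [15, 16] -> "$15 million to $16 million"
```

Under this assumption, the arithmetic is internally consistent with every figure the statement quotes.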
The lowest rate of inspections in these 10 countries was in China, for which FDA inspected 80 of its estimated 714 establishments, or fewer than 14 establishments per year, on average. We testified that, while enforcing GMP compliance through surveillance inspections was FDA’s most comprehensive program for monitoring the quality of marketed drugs, most of FDA’s inspections of foreign manufacturers occurred when they were listed in an NDA or ANDA. The majority of these preapproval inspections were combined with a GMP surveillance inspection. Although FDA used a risk-based process to develop a prioritized list of foreign establishments for GMP surveillance inspections, few were completed in a given year—about 30 in fiscal year 2007. The usefulness of the process was weakened by the incomplete and possibly inaccurate information on those foreign establishments that FDA had not inspected recently, as well as those that had never been the subject of a GMP surveillance inspection. We also testified that FDA’s foreign inspection process involves unique circumstances that are not encountered domestically. For example, FDA relies on staff who inspect domestic establishments to volunteer for foreign inspections. Unlike at domestic establishments, FDA does not arrive unannounced at foreign establishments. It also lacks the flexibility to easily extend foreign inspections if problems are encountered. Finally, language barriers can make foreign inspections more difficult than domestic ones. FDA does not generally provide translators to its inspection teams. Instead, they may have to rely on an English-speaking representative of the foreign establishment being inspected, rather than an independent translator. FDA has initiated several recent changes to its foreign drug inspection program, but the changes do not fully address the weaknesses that we previously identified.
FDA has initiatives underway to reduce the inaccuracies in its registration and import databases that make it difficult to determine the number of foreign establishments subject to inspection, although to date these databases still do not provide an accurate count of such establishments. FDA has taken steps that could help it select foreign establishments for inspection by obtaining information from foreign regulatory bodies. However, the agency has not fully utilized arrangements with foreign regulatory bodies in the past that would allow it to obtain such information. FDA has made progress in conducting more foreign inspections, but it still inspects relatively few establishments. FDA is also pursuing initiatives that could address some of the challenges that we identified as being unique to foreign inspections, but implementation details and timeframes associated with these initiatives are unclear. FDA has initiatives underway to reduce inaccuracies in its databases, but actions taken thus far will not ensure that the agency has an accurate count of establishments subject to inspection. As we previously testified, DRLS does not provide FDA with an accurate count of foreign establishments manufacturing drugs for the U.S. market. For example, foreign establishments may register with FDA, whether or not they actually manufacture drugs for the U.S. market, and the agency does not routinely verify the information provided by the establishment. Beginning in late 2008, CDER plans to implement an electronic registration and listing system that could improve the accuracy of information the agency maintains on registered establishments. The new system will allow drug manufacturing establishments to submit registration and listing information electronically, rather than submitting it on paper forms. FDA hopes that electronic registration will result in efficiencies allowing the agency to shift resources from data entry to assuring the quality of the databases. 
However, electronic registration alone will not prevent foreign establishments that do not manufacture drugs for the U.S. market from registering, thus still presenting the problem of an inaccurate count. Recently, another FDA center implemented changes affecting the registration of medical device manufacturers, an activity for which we previously identified problems similar to those found in CDER. In fiscal year 2008, CDRH implemented, in addition to electronic registration, an annual user fee of $1,706 per registration for certain medical device establishments and an active re-registration process. According to CDRH, as of early April 2008, about half of the previously registered establishments have reregistered using the new system. While CDRH officials expect that this number will increase, they expect that the elimination of establishments that do not manufacture medical devices for the U.S. market—and thus should not be registered—will result in a smaller, more accurate database of medical device establishments. CDRH officials indicated that implementation of electronic registration and the annual user fee seems to have improved the data so CDRH can more accurately identify the type of establishment registered, the devices manufactured at an establishment, and whether or not an establishment should be registered. According to CDRH officials, the revenue from device registration user fees is applied to the process for the review of device applications, including establishment inspections undertaken as part of the application review process. CDER does not currently have the authority to assess a user fee for registration of drug establishments, but officials indicated that such a fee could discourage registrations of foreign manufacturers that are not ready, are not actively importing, or have not been approved to market drug products in the United States. 
Officials also suggested that such fees could be used to supplement the resources available for conducting inspections. FDA has proposed, but not yet implemented, the Foreign Vendor Registration Verification Program, which could help improve the accuracy of information FDA maintains on registered establishments. Through this program, FDA plans to contract with an external organization to conduct on-site verification of the registration data and product listing information of foreign establishments shipping drugs and other FDA-regulated products to the United States. As of April 2008, FDA had solicited proposals for this contract but was still developing the specifics of the program. For example, the agency had not yet established the criteria it would use to determine which establishments would be visited for verification purposes or determined how many establishments it would verify annually. FDA currently plans to award this contract in May 2008. Given the early stages of this process, it is too soon to determine whether this program will improve the accuracy of the data FDA maintains on foreign drug establishments. In addition to changes to improve DRLS, FDA has supported a proposal that has the potential to address weaknesses in OASIS, but FDA does not control the implementation of this change. As we previously testified, OASIS contains an inaccurate count of foreign establishments manufacturing drugs imported into the United States as a result of unreliable identification numbers generated by customs brokers when the product is offered for entry. FDA officials told us that these errors result in the creation of multiple records for a single establishment, which inflates counts of establishments offering drugs for entry into the U.S. market. FDA is pursuing the creation of a governmentwide unique establishment identifier, as part of the Shared Establishment Data Service (SEDS), to address these inaccuracies.
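The mechanics of this inflation are easy to reproduce. The sketch below uses an invented, simplified identifier rule (the real CBP algorithm differs in its details) and a fictional establishment; it shows how two brokers keying the same facility slightly differently create two OASIS records, and how a SEDS-style central registry would collapse them to one.

```python
# Toy illustration of how broker-derived manufacturer IDs can inflate OASIS
# establishment counts. This invented rule just combines country, the first
# three letters of the name's first word, the street number, and the city;
# the real CBP algorithm is different, but fails in the same way: small
# keying differences for one establishment yield distinct identifiers.

def manufacturer_id(country, name, street, city):
    first_word = name.split()[0].upper()[:3]
    street_num = "".join(ch for ch in street if ch.isdigit())
    return country.upper() + first_word + street_num + city.upper()[:3]

# The same (fictional) Chinese establishment, entered two ways by two brokers:
id_a = manufacturer_id("CN", "Huadong Pharmaceutical Co.", "12 Jianguo Rd", "Shanghai")
id_b = manufacturer_id("CN", "Shanghai Huadong Pharmaceutical", "12 Jianguo Road", "Shanghai")

print(id_a, id_b, len({id_a, id_b}))  # CNHUA12SHA CNSHA12SHA 2 (counted twice)

# A SEDS-style service would instead map both records to one centrally
# assigned, commercially verified identifier (hypothetical value below),
# collapsing the duplicate:
registry = {id_a: "EST-000123", id_b: "EST-000123"}
print(len(set(registry.values())))    # 1 unique establishment
```

The point of the sketch is only the failure mode: any identifier derived at entry time from free-text name and address fields will fragment one establishment into several records, which is exactly the inaccuracy SEDS is meant to remove.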
Rather than relying on the creation and entry of an identifier at the time of import, SEDS would provide a unique establishment identifier and a centralized service to provide commercially verified information about establishments. The standard identifier would be submitted as part of import entry data where required by FDA or other government agencies. SEDS could thus eliminate the problem of having multiple identifiers associated with an individual establishment. The implementation of SEDS is dependent on action from multiple federal agencies, including the integration of the concept into a CBP import and export system currently under development and scheduled for implementation in 2010. In addition, once implemented by CBP, participating federal agencies would be responsible for bearing the cost of integrating SEDS with their own operations and systems. FDA officials are not aware of a specific timeline for the implementation of SEDS. Developing an implementation plan for SEDS is a recommendation of the Interagency Working Group on Import Safety’s Action Plan for Import Safety: A Roadmap for Continual Improvement. Finally, FDA is in the process of implementing additional initiatives to improve the integration of its current data systems, which could make it easier for the agency to establish an accurate count of foreign drug manufacturing establishments subject to inspection. The agency’s Mission Accomplishments and Regulatory Compliance Services (MARCS) is intended to help FDA electronically integrate data from multiple systems. It is specifically designed to give individual users a more complete picture of establishments. FDA officials estimate that MARCS, which is being implemented in stages, could be fully implemented by 2011 or 2012. 
However, FDA officials told us that implementation has been slow because the agency has been forced to shift resources away from MARCS and toward the maintenance of current systems that are still heavily used, such as FACTS and OASIS. Taken together, electronic registration, the Foreign Vendor Registration Verification Program, SEDS, and MARCS could provide the agency with more accurate information on the number of establishments subject to inspection. However, it is too early to tell. FDA has taken steps to help it select establishments for inspection by obtaining information on foreign establishments from regulatory bodies in other countries, despite encountering difficulties in fully utilizing these arrangements in the past. FDA has recognized the importance of receiving information about foreign establishments from other countries and has taken steps to develop new, or strengthen existing, information-sharing arrangements to do so. For example, according to FDA, the agency is enhancing an arrangement to exchange information with the Swiss drug regulatory agency. FDA officials have highlighted such arrangements as a means of improving the agency’s oversight of drugs manufactured in foreign countries. For example, they told us that in selecting establishments for GMP surveillance inspections, they sometimes use the results of an establishment inspection conducted by a foreign government to determine whether to inspect an establishment. FDA told us that it received drug inspection information from foreign regulatory bodies six times in 2007. FDA has previously encountered difficulties which prevented it from taking full advantage of information-sharing arrangements with other countries. Obtaining inspection reports from other countries and using this information has proved challenging. 
In order for FDA to determine the value of inspection reports from a particular country, it must consider whether the scope of that country’s inspections is sufficient for FDA’s needs. Evaluation of inspections conducted by foreign regulatory bodies can be complex and may include on-site review of regulatory systems and audit inspections. Further, to obtain results of inspections conducted by its foreign counterparts, FDA must specifically request them—they are not automatically provided. While FDA has provided certain foreign regulatory bodies access to its Compliance Status Information System—which provides information from the results of FDA’s inspections—foreign regulatory bodies have not established similar systems to provide FDA access to data about their inspections. FDA indicated that such systems are under development in some countries and FDA has been promised access when they are available. However, currently, FDA cannot routinely incorporate the results of inspections conducted by foreign regulatory authorities into its risk-based selection process. FDA officials stated that, in the past, they encountered difficulties using inspection reports from other countries that were not readily available in English. Consequently, the existence of such information-sharing arrangements alone may not help FDA systematically address identified weaknesses in its foreign inspection program. Arrangements that have the potential to allow FDA to formally accept the results of inspections conducted by other countries have been prohibitively challenging to implement. Although these arrangements allow countries to leverage their own inspection resources, according to FDA officials, assessing the equivalence of other countries’ inspections and the relevance of the information available is difficult. They added that complete reliance on another country’s inspection results is risky. 
The activities associated with establishing these agreements may be resource intensive, which may slow FDA’s implementation of them. For example, FDA told us that a lack of funding for establishing such an arrangement with the European Union effectively stopped progress. Although FDA has completed preliminary work associated with this arrangement, the agency has concluded that it will be more beneficial to pursue other methods of cooperating with the European Union. The agency has no plans at this time to enter into other such arrangements. FDA’s current efforts to obtain more information from foreign regulatory bodies may help it better assess the risk of foreign establishments when prioritizing establishments for GMP surveillance inspections. However, most foreign inspections are conducted to examine an establishment referenced in an NDA or ANDA. The agency conducts relatively few foreign GMP surveillance inspections selected through its risk-based process. Therefore, these efforts may be of limited value to the foreign inspection program if the agency does not increase the number of such inspections. FDA has made progress in conducting more foreign inspections, but it still inspects relatively few establishments. FDA conducted more foreign establishment inspections in fiscal year 2007 than it had in each of the 5 previous fiscal years. However, the agency still inspected less than 11 percent of the foreign establishments on the prioritized list that it used to plan its fiscal year 2007 GMP surveillance inspections. The agency also still conducts far fewer inspections of foreign establishments than domestic establishments. Its budget calls for incremental increases in funding for foreign inspections. FDA officials told us that, for fiscal year 2008, the agency plans to conduct more GMP surveillance inspections based on its prioritized list of foreign establishments. 
FDA officials estimated that the agency conducted about 30 such inspections in fiscal year 2007 and plans to conduct at least 50 in fiscal year 2008. If FDA were to inspect foreign establishments biennially, as is required for domestic establishments, this would require FDA to dedicate substantially more funding than it has dedicated to such inspections in the past. In fiscal year 2007, FDA dedicated about $10 million to inspections of foreign establishments. FDA estimates that, based on the time spent conducting inspections of foreign drug manufacturing establishments in fiscal year 2007, the average cost of such an inspection ranges from approximately $41,000 to $44,000. Our analysis suggests that it could cost the agency $67 million to $71 million each year to biennially inspect each of the 3,249 foreign drug establishments on the list that FDA used to plan its fiscal year 2007 GMP surveillance inspections. Based on these same estimates, it would take the agency $15 million to $16 million each year to inspect the estimated 714 drug manufacturing establishments in China every 2 years. According to FDA budget documents, the agency estimates that it will dedicate a total of about $11 million in fiscal year 2008 and $13 million in fiscal year 2009 to all foreign inspections. In its fiscal year 2009 budget, FDA proposed instituting a reinspection user fee. Reinspections are conducted to verify that corrective actions the agency has required establishments to take in response to previously identified violations have been implemented. FDA’s proposal to institute a reinspection user fee would allow it to charge establishments a fee when the agency determines a reinspection is warranted. However, as proposed, the reinspection user fee would be budget neutral, meaning that the other appropriated funds the agency receives would be offset by the amount of collected reinspection fees. 
As a result, this proposal would not provide the agency with an increase in funds that could be used to pay for additional foreign inspections. FDA has recently announced proposals to address some of the challenges unique to conducting foreign inspections, but specific implementation steps and associated time frames are unclear. We previously identified the lack of staff dedicated to conducting foreign inspections as a challenge for the agency. FDA noted in its report on the revitalization of ORA that it is exploring the creation of a cadre of investigators who would be dedicated to conducting foreign inspections. However, the report does not provide any additional details or time frames for this proposal. In addition, FDA recently announced plans to establish a permanent presence overseas, although little information about these plans is available. Through an initiative known as “Beyond our Borders,” FDA intends that its foreign offices will improve cooperation and information exchange with foreign regulatory bodies, improve procedures for expanded inspections, allow it to inspect facilities quickly in an emergency, and facilitate work with private and government agencies to assure standards for quality. FDA’s proposed foreign offices are intended to expand the agency’s capacity for regulating, among other things, drugs, medical devices, and food. The extent to which the activities conducted by foreign offices are relevant to FDA’s foreign drug inspection program is uncertain. Initially, FDA plans to establish a foreign office in China with three locations—Beijing, Shanghai, and Guangzhou—consisting of a total of eight FDA employees and five Chinese nationals. The Beijing office, which the agency expects will be partially staffed by the end of 2008, will be responsible for coordination between FDA and the Chinese regulatory agencies.
FDA staff located in Shanghai and Guangzhou, who will be hired in 2009, will be focused on conducting inspections and working with Chinese inspectors to provide training as necessary. FDA has noted that the Chinese nationals will primarily provide support to FDA staff, including translation and interpretation. The agency is also considering setting up offices in other locations, such as India, the Middle East, Latin America, and Europe, but no dates have been specified. While the establishment of both a foreign inspection cadre and offices overseas has the potential to improve FDA’s oversight of foreign establishments and to provide the agency with better data on foreign establishments, it is too early to tell whether these steps will be effective or will increase the number of foreign drug inspections. Agreements with foreign governments, such as one recently reached with China’s State Food and Drug Administration, may help the agency address certain logistical issues unique to conducting inspections of foreign establishments. We previously testified that one challenge faced by FDA involved the need for its staff to obtain a visa or letter of invitation to enter a foreign country to conduct an inspection. However, FDA officials told us that their agreement with China recently helped FDA expedite this process when it learned of the adverse events associated with a Chinese heparin manufacturer. According to these officials, the agreement with China greatly facilitated its inspection of this manufacturer by helping FDA send investigators much more quickly than was previously possible. Americans depend on FDA to ensure the safety and effectiveness of the drugs they take. The recent incident involving heparin underscores the importance of FDA’s initiatives and its steps to obtain more information about foreign drug establishments, conduct more inspections overseas, and improve its overall management of its foreign drug inspection program.
FDA has identified actions that, if fully implemented, could address some, but not all, of the concerns we first identified 10 years ago and reiterated 5 months ago in our testimony before this subcommittee. Given the growth in foreign drug manufacturing for the U.S. market and the current large gaps in FDA’s foreign drug inspections, FDA will need to devote considerable resources to this area if it is to increase the rate of inspections. However, FDA’s plans currently call for incremental increases that will have little impact in the near future to reduce the interval between inspections for these establishments. In addition, many of FDA’s initiatives will take several years to implement and require funding and certain interagency or intergovernmental agreements that are not yet in place. Taken together, FDA’s plans represent a step forward in filling the large gaps in FDA’s foreign drug inspection program, but do little to accomplish short-term change. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the subcommittee may have at this time. For further information about this testimony, please contact Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Geraldine Redican-Bigott, Assistant Director; Katherine Clark; William Hadley; Cathleen Hamann; Julian Klazkin; Lisa Motley; Daniel Ries; and Monique B. Williams made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Food and Drug Administration (FDA) is responsible for overseeing the safety and effectiveness of human drugs that are marketed in the United States, whether they are manufactured in foreign or domestic establishments. FDA inspects foreign establishments to ensure that they meet the same standards required of domestic establishments. Ongoing concerns regarding FDA's foreign drug inspection program recently were heightened when FDA learned that contaminated doses of a common blood thinner had been manufactured at a Chinese establishment that the agency had never inspected. FDA has announced initiatives to improve its foreign drug inspection program. In November 2007, GAO testified on weaknesses in FDA's foreign drug inspection program (GAO-08-224T). This statement presents preliminary findings on how FDA's initiatives address the weaknesses GAO identified. GAO interviewed FDA officials and analyzed FDA's initiatives. GAO examined reports and proposals prepared by the agency, as well as its plans to improve databases it uses to manage its foreign drug inspection program. Recent FDA initiatives--some of which have been implemented and others proposed--could strengthen FDA's foreign drug inspection program, but these initiatives do not fully address the weaknesses that GAO previously identified. GAO testified in November 2007 that FDA's databases do not provide an accurate count of foreign establishments subject to inspection and instead provide widely divergent counts. Through one recent initiative, FDA has taken steps to improve its database intended to include foreign establishments registered to market drugs in the United States. This initiative may reduce inaccuracies in FDA's count of foreign establishments. However, these steps will not prevent foreign establishments that do not manufacture drugs for the U.S. market from erroneously registering with FDA. 
Further, to reduce duplication in its import database, FDA has supported a proposal that would change the data it receives on products entering the United States. However, the implementation of this proposal is not certain and would require action from multiple federal agencies, in addition to FDA. Efforts to integrate these databases have the potential to provide FDA with a more accurate count of establishments subject to inspection, but it is too early to tell. GAO testified that gaps in information weaken FDA's processes for prioritizing the inspection of foreign establishments that pose the greatest risk to public health. While FDA recently expressed interest in obtaining useful information from foreign regulatory bodies that could help it prioritize foreign establishments for inspections, the agency has faced difficulties fully utilizing these arrangements in the past. For example, FDA had difficulties in determining whether the scope of other countries' inspection reports met its needs, and these reports were not always readily available in English. GAO also testified that FDA inspected relatively few foreign establishments each year. FDA made progress in inspecting more foreign establishments in fiscal year 2007, but the agency still inspects far fewer of them than domestic establishments. FDA dedicated about $10 million to foreign drug inspections in fiscal year 2007 and plans to dedicate about $11 million to such inspections in fiscal year 2008. Finally, GAO testified that FDA faced certain logistical and staffing challenges unique to conducting foreign inspections. FDA is pursuing initiatives that could address some of the challenges that we identified as being unique to foreign inspections, such as its reliance on volunteer inspection staff and a lack of translators. FDA has proposed establishing a dedicated cadre of staff to conduct foreign inspections, but the time frame associated with this initiative is unclear. 
FDA plans to open an office in China and is considering establishing offices in other countries, but the impact that this will have on the foreign drug inspection program is unknown.
FACE was enacted on May 26, 1994. The act gave the federal government a new tool for investigating and prosecuting abortion-related violence and disruptions. It established federal criminal penalties and civil remedies for “certain violent, threatening, obstructive and destructive conduct that is intended to injure, intimidate or interfere with persons seeking to obtain or provide reproductive health services.” FACE also prohibited the damage or destruction of property that belongs to a reproductive health care facility. Appendix I contains the text of FACE in its entirety. FACE not only sought to protect the rights of those seeking or providing abortion services, it also sought to protect the rights of anti-abortion protestors in expressing their views. The act states that it must not be construed to prohibit any expressive conduct, including peaceful picketing or other peaceful demonstration, protected by the First Amendment. Thus, circumstances dictate whether such actions as picketing violate the law. For example, peaceful, nonobstructive picketing on public property would not violate FACE; obstructive or threatening picketing on clinic property could be found to violate the act. Criminal and civil FACE cases can be initiated in different ways and result in different penalties. Criminal FACE prosecutions may be brought only by the Attorney General of the United States, and only if an alleged FACE violation has already occurred. The act sets out criminal penalties of fines, imprisonment, or both, depending on the nature of the violation and whether it is a first or subsequent offense. For example, in cases of nonviolent physical obstructions, first-time offenders can receive a maximum sentence of 6 months’ imprisonment and a $10,000 fine. If bodily injury results from an offense, regardless of whether it is a first or subsequent offense, the maximum sentence is 10 years; and if death results, the maximum sentence is any term of years or life imprisonment. 
Civil actions may be brought by the Attorney General of the United States; the Attorney General of any state on behalf of anyone who is injured or may be injured by a violation of FACE; or any aggrieved person involved in providing, obtaining, or seeking to provide or obtain reproductive health services. Courts have the discretion to award appropriate relief, including injunctions, damages, attorneys’ fees, and costs of suit. The Attorney General has vested in DOJ’s Civil Rights Division the federal government’s civil and criminal enforcement authority to bring cases in court under FACE and other federal statutes that can be applied to abortion clinic violence. According to a high-level DOJ official, the Civil Rights Division has often investigated and brought cases in collaboration with U.S. Attorneys’ offices in the field, and occasionally U.S. Attorneys’ offices have brought cases on their own. However, all civil and criminal charging decisions must be approved by the Assistant Attorney General for Civil Rights. In August 1994, shortly after two murders took place outside an abortion clinic in Florida, the Attorney General formed a federal task force to investigate the possible existence of a national conspiracy against reproductive health care providers and to coordinate federal enforcement activities. The task force consisted of representatives from DOJ’s Criminal Division, Civil Rights Division, Federal Bureau of Investigation (FBI), and United States Marshals Service; and ATF. This task force played an important role in the early implementation of FACE, according to a high-level DOJ official. The task force prompted the first criminal prosecution under FACE and made federal prosecutors across the country more aware of the applicability of other preexisting federal criminal statutes to clinic violence. 
The Attorney General’s task force went out of existence in early 1997 after its lead prosecutors concluded that it lacked sufficient evidence to prove the existence of a national conspiracy beyond a reasonable doubt. In January 1995, following shootings at clinics in Massachusetts and Virginia, the President instructed DOJ to take certain steps to address clinic violence. He instructed DOJ to direct (1) each United States Attorney to immediately head an abortion violence task force of federal, state, and local law enforcement officials; and (2) each U.S. Marshal to consult with clinics regarding communications with law enforcement agencies. The task force was to formulate plans to address clinic security. The Attorney General followed the President’s order with a memorandum outlining the task forces’ responsibilities, including developing plans to address abortion clinic security, coordinating law enforcement efforts relating to abortion violence, assisting local law enforcement in responding to abortion clinic incidents, and ensuring that cases that could be tried under FACE are filed appropriately. The Attorney General’s memo also lays out in general terms federal and local law enforcement responsibilities regarding abortion clinic violence. It states that “violence against abortion providers is, in the first instance, a violation of state and local law and the duty to prevent such crime and investigate and prosecute it when it occurs falls primarily to state and local officials, where they are able to deal effectively with it. However, the federal government has an important role in assisting state and local authorities and bringing to bear federal tools and resources . . . .” The memo states that clinics in need of assistance should be advised to first contact their local police departments. 
In mid-1997, due to concerns that clinic violence had increased and federal coordination had declined, the Acting Assistant Attorney General for Civil Rights established an Abortion Violence Working Group. The group was formed to promote communication and coordination among federal law enforcement components and agencies involved in investigating and prosecuting abortion violence cases. Comprising senior representatives of DOJ’s Civil Rights Division, Executive Office for United States Attorneys (EOUSA), the FBI, the Marshals Service, and senior representatives of ATF, the group reportedly meets every 5 to 6 weeks to share information, coordinate law enforcement and litigation strategies, and formulate plans to respond to perceived threats to providers of reproductive health services. Our report objectives were to provide information on (1) the occurrence of abortion clinic incidents before and after FACE, as reported to us by representatives of clinics that abortion rights groups identified as having experienced relatively high levels of incidents before the enactment of FACE; (2) views regarding FACE and its effectiveness from representatives of these clinics, selected police departments and U.S. Attorney offices, and other representatives from DOJ, ATF, three national abortion rights organizations, and two national anti-abortion organizations; (3) efforts by local and federal law enforcement agencies following the enactment of FACE and clinic, U.S. Attorney office, police department, abortion rights group, and anti-abortion group representatives’ satisfaction with these agencies; and (4) any court cases pertaining to FACE and the courts’ rulings in those cases. 
To obtain information from clinics on changes in the occurrence of clinic incidents, we conducted structured telephone interviews with a judgmental sample of representatives of 42 abortion clinics identified as having experienced relatively high levels of violence during the 2-year period prior to the passage of FACE. Three national abortion rights groups used data they had collected during the 1993 and 1994 time frame to identify these clinics for us. During the interviews with clinic representatives, we listed types of incidents that the clinics might have experienced, and we asked whether each type of incident had occurred at the clinic in either of the two time periods we were studying. We asked only whether each incident had occurred, not how many times it occurred. Furthermore, we did not ask respondents whether each type of incident they experienced was a violation of FACE, for this would have required a legal assessment. (See app. II for the questionnaire we used to interview clinic representatives and their aggregate responses to these and other questions.) To obtain the views of representatives from abortion clinics, police departments, U.S. Attorney offices, and others regarding the effectiveness of FACE and the efforts of federal and local law enforcement following the enactment of FACE, we conducted structured telephone interviews with representatives of 42 abortion clinics and 15 police departments selected from a stratified sample of the 40 departments that serve the locations of the clinics we contacted. In addition, we sent questionnaires to the 36 U.S. Attorneys who serve the federal judicial districts in which the clinics we contacted were located. We also interviewed other officials at DOJ, ATF, and representatives of three national abortion rights organizations and two anti-abortion organizations. We reviewed studies and documents related to FACE and abortion clinic violence. (See app. II through app. IV for copies of our survey instruments. 
See app. V for a listing of the organizations we contacted during our review.) Because we did not draw a representative sample of abortion clinics, police departments, and U.S. Attorney offices, the results of our structured surveys represent only the clinics, police departments, and U.S. Attorney offices we contacted. Nevertheless, these survey results are useful for better understanding the implications of FACE. In responding to open-ended survey questions, respondents provided narrative answers that sometimes involved more than one theme or topic. Consequently, when we categorize and discuss responses in this report, the number may appear to exceed the total number of respondents. (See app. VI for further information on our survey methodology.) We took the following steps to identify cases in which FACE was litigated and the courts’ rulings in those cases: We obtained from DOJ a summary of all the criminal prosecutions and civil lawsuits it had initiated or completed pertaining to FACE as of September 11, 1998. We conducted a search of WESTLAW and LEXIS databases to identify any reported decisions under FACE. We consulted the National Abortion Federation’s Quarterly Report on legal issues relating to abortion and a listing of FACE cases compiled by the National Organization for Women Legal Defense and Education Fund. We conducted our work from December 1997 through August 1998 in accordance with generally accepted government auditing standards. We received comments on a draft of this report from DOJ and the Department of the Treasury. These comments are summarized at the end of this letter. Most of the 42 clinic representatives we interviewed reported experiencing fewer types of incidents during the 2-year period before we began our interviews (April 1996 through March 1998) than they had during the 2-year period before the May 1994 enactment of FACE (June 1992 through May 1994). 
Many respondents also indicated that the frequency of incidents had decreased, as had their severity, particularly the severity of picketing. Although clinics’ experiences with anti-abortion incidents varied, representatives of almost all these clinics told us they experienced picketing and hate mail or harassing phone calls during both time periods. In addition, their responses indicated declines in the number of clinics experiencing blockades, vandalism, invasions, bomb threats, death threats, assaults, and stalking of clinic staff or family members. For example, 27 of the 42 respondents told us their clinics had experienced blockades during the 2 years prior to FACE; 6 said they had experienced blockades during the most recent 2 years. Thirty-six respondents told us of vandalism at their clinics during the first time period; 19 reported vandalism in the more recent period. Table 1 identifies the number of clinic respondents who reported specific types of incidents as occurring in each of the two time periods we studied. We also asked clinic respondents whether any other incidents had occurred at their clinics during these two time periods. Twenty-four of the 42 respondents described additional incidents, including butyric acid attacks and cases of suspicious packages. In addition to asking representatives whether their clinics had experienced each type of incident during the two time periods, we asked whether, in general, incidents were more or less frequent in the 2 years preceding our survey compared to the 2 years preceding the passage of FACE. Thirty-four respondents indicated that overall, they had seen a pronounced change in the frequency of incidents at their clinics, noting decreases in the following types of incidents: hate mail or harassing phone calls (19), picketing (17), blockades (16), invasions of their clinics (14), vandalism (13), and stalking (11). At a few clinics, however, respondents thought that these types of incidents had become more frequent. 
We also asked representatives of these clinics whether incidents were more or less severe in the 2 years preceding our survey compared to the 2 years preceding the passage of FACE. Of 35 respondents who indicated that overall, they had seen a pronounced change in the severity of incidents at their clinics, 26 said that picketing was less severe in the more recent 2-year period than it had been during the 2 years prior to FACE; 3 said that it was more severe. Of those explaining how picketing had become less severe, one respondent told us that picketers were very aggressive and verbally abusive prior to FACE, but that events are now quieter and take the form of “prayer vigils.” Another said that protestors are still picketing, but they do so in fewer numbers and they follow the rules. Ten respondents said that hate mail or threatening phone calls had become less severe, but 1 thought that this type of incident had increased in severity. For the most part, representatives of the clinics and U.S. Attorney offices we surveyed thought that FACE had made a difference, often citing its deterrent effect. Representatives of the three national abortion rights groups and one of the anti-abortion groups we contacted expressed similar views. A representative of the other anti-abortion group said that his group was not affected by FACE because its members do not engage in activities that would violate the act. Clinic respondents, DOJ officials, and representatives of national abortion rights groups said that other factors in addition to FACE, such as local injunctions, other federal laws, and improved clinic security measures, also may have reduced incidents. Police department respondents expressed divergent views about FACE’s effectiveness. A small number of respondents from the U.S. Attorney offices, police departments, and clinics we contacted during our review believed that FACE did not have an effect on clinic incidents. 
Some said that incidents had already declined before FACE was enacted, and others mentioned specific weaknesses of the act. Most clinic and U.S. Attorney office respondents believed that FACE had a positive effect on clinic incidents, often citing its deterrent effect. Others we contacted, including DOJ and ATF officials and representatives of three national abortion rights and one anti-abortion group, expressed the same belief, particularly regarding blockades. In addition, U.S. Attorney office respondents and DOJ officials described strengths of FACE, such as the additional tools it provides federal agents for investigating and prosecuting clinic incidents. Thirty-seven of the clinic representatives we interviewed believed that FACE had an effect on violent or disruptive incidents at their clinics, and 35 described that effect as reducing or deterring incidents. For example, one respondent said that violent demonstrators were given only a “slap on the wrist” before FACE, but FACE made them realize that consequences can be more severe. Another respondent whose clinic had been involved in a FACE prosecution credited FACE with ending what he described as a cycle in which some protestors endlessly engaged—blockading, being arrested, spending a few days in jail, and then blockading again. Five clinic respondents credited FACE with increasing support or awareness of local law enforcement. For example, one respondent explained that as a result of FACE, local authorities more seriously enforced clinic-related violations of the law. Of the 36 U.S. Attorney office representatives surveyed, 21 believed that FACE had an effect on clinic violence or disruptions in their districts. In describing the effect of the act, 15 respondents said that FACE, or federal actions taken as a result of FACE, had reduced or deterred incidents. A representative from 1 U.S. 
Attorney office stated that his district’s prosecutions of 12 individuals in 3 separate physical obstruction cases resulted in removing the violators from the streets and appeared to have deterred similar illegal conduct by others. In his view, FACE, in conjunction with state and local enforcement efforts, appeared to have reduced the number of illegal protestors to a core group of offenders who were unlikely to be easily deterred. Twenty-three of the 36 U.S. Attorney office respondents believed that FACE enhanced local law enforcement’s ability to protect clinics from violence, and 27 believed it enhanced federal law enforcement’s ability to do the same. Representatives of 27 U.S. Attorney offices cited strengths they saw in FACE. Eight respondents focused on the flexibility FACE provides or the additional federal tools it offers. These respondents cited federal restraining orders and injunctions and the law’s flexibility that allows for bringing either civil or criminal causes of action. Seven respondents saw the act’s strength in its establishment of federal authority. For example, one respondent explained that FACE allows for intervention in an area that was previously outside federal jurisdiction. To a lesser extent, respondents cited other strengths of FACE, including the additional attention it has brought to the issue of clinic incidents, its harsher penalties, and the communication it promotes among law enforcement agencies. DOJ officials we contacted said that FACE has been a significant tool that has allowed the federal government to undertake investigations, prosecutions, and civil actions in an area in which it previously had limited criminal authority and no authority to pursue civil remedies. FACE, in effect, gave the FBI jurisdiction to investigate abortion clinic violence and, in so doing, allowed the use of FBI resources to augment ATF’s continued role in investigating clinic arsons and bombings. 
Also, according to DOJ officials, the existence of FACE, as well as the prosecutions that result from it, deters such incidents as massive disruptions and blockades. They said that civil actions brought under FACE can result in civil remedies, including injunctive relief, damages, and penalties. An official in the Special Litigation Section of DOJ’s Civil Rights Division stated that civil cases often lead to an injunction and offered the opinion that federal court injunctions have been effective in reducing incidents. Civil injunctions can provide relief, such as establishing a “buffer zone” around a clinic entrance that demonstrators may not enter, or banning excessive noise during a clinic’s hours of operation. ATF officials with whom we spoke indicated that FACE has made a difference by making a large dent in “more minor” incidents, such as blockades. They believe FACE has deterred more minor crimes because it attacks violators in the pocketbook by imposing substantial federal penalties. For example, prior to FACE, it would not have been a federal offense to throw a “stink bomb” or acid into a clinic; FACE made this a federal violation with a potential fine of up to $10,000. ATF officials added, however, that if a clinic incident involves an arson or bombing with no injuries or fatalities, FACE penalties are weak compared to those provided under the federal statute governing arson and explosives. In such cases, prosecutors may choose not to prosecute under FACE. Representatives of two of the national abortion rights organizations we contacted viewed FACE as a major factor in deterring anti-abortion violence and saving lives, and a representative of a third abortion rights organization said that FACE had deterred blockades. The head of one group expressed the view that without FACE, violence would have continued to skyrocket. 
The head of another group stated that the penalties of FACE and the threat that federal law enforcement could show up at any time have deterred blockades. She noted that even though the number of blockades was falling before FACE was passed, it declined even further because of FACE. According to these organizations, blockades that occur now involve fewer people and fewer clinics. A representative from one of the two national anti-abortion groups we contacted said that FACE has had a “chilling effect” on the number of people willing to be involved in anti-abortion activities. He added that FACE has “raised the price tag for participation” because many are not willing to risk federal charges and prosecution despite their commitment to the anti-abortion cause. He attributed a decline in his group’s anti-abortion actions to FACE. A representative from the other anti-abortion organization we contacted said that FACE had not affected his group’s activities because the group does not organize protests, nor does it support activities that would violate FACE. Nevertheless, this organization is opposed to FACE because it believes FACE targets pro-life activists’ free speech. Representatives of clinics we surveyed, DOJ officials, police department representatives, and a national abortion rights study acknowledged that factors other than FACE also played a role in reducing anti-abortion incidents at clinics. They identified a variety of factors, including local injunctions, other federal statutes, strong local law enforcement, and clinic security measures, as having an effect on clinic incidents. When asked whether factors other than FACE had an effect on incidents, most clinic respondents indicated that there were factors in addition to FACE that had an effect on incidents at their clinics. Among the 31 respondents who expressed this view, 11 cited other legal actions, such as local injunctions that were in place prior to the passage of FACE. 
Ten respondents thought that negative reactions to violence had caused a reduction in clinic incidents. As examples, they said that people with anti-abortion views do not want to be associated with the violent actions of extremists and that press coverage of the more heinous acts decreased public support for the anti-abortion position. Seven respondents credited their local law enforcement agencies with reducing clinic incidents, and four said that other state or federal laws decreased clinic violence. Alleged perpetrators of clinic incidents have been prosecuted under federal statutes other than FACE. Although DOJ officials believed FACE had reduced clinic incidents, they found it difficult to isolate the effects of FACE on convictions because FACE has been used in conjunction with other statutes. One official described FACE as “one of an arsenal of statutory weapons available.” Depending on the case, federal prosecutors may find it more effective to use statutes that carry heavier penalties, such as the federal statute governing arson and explosives, in addition to or in place of FACE. Although we did not ask representatives of police departments to identify factors other than FACE that affected clinic incidents, several nevertheless provided information on this subject. Three respondents believed that their departments’ strong response to clinic incidents before FACE was enacted prevented future incidents. Three others explained that injunctions or other court orders prior to FACE curtailed incidents in their communities. In a report of its 1997 clinic violence survey, the Feminist Majority Foundation credited several factors, in addition to FACE, with reducing clinic violence. The report stated that FACE, increased clinic security, better law enforcement, and community mobilization all worked toward mitigating violence at abortion clinics. Police department respondents’ views regarding the effectiveness of FACE did not exhibit a clear pattern. 
Of 15 respondents, 10 said they were knowledgeable about FACE and, therefore, could answer questions about it. Three said they believed FACE had the effect of ending blockades, reducing protest activity, or “calming things down.” Three of the four respondents who said FACE had not affected incidents explained that serious incidents at clinics in their jurisdictions had already stopped before FACE was passed. Three respondents told us they did not know whether FACE affected incidents in their jurisdictions. Nine of these 10 police department respondents described a variety of strengths they saw in the act. Among the strengths they cited were that FACE provides a level of consistency, recognizes a nationwide problem, and deters illegal activity by most protestors. One respondent told us that the greatest effect of FACE is that it deters illegal activity by most protestors because “reasonable people” are afraid of violating federal laws. Another respondent explained that being able to prosecute cases at a federal level under FACE was the tool they needed because the local district attorney had not been willing to prosecute anti-abortion activists. Although many of the people we contacted shared positive views of FACE, some did not think FACE had reduced clinic incidents, in part because incidents were already down prior to May 1994, when the act was passed. In addition, some described weaknesses in the act, despite their belief that it had reduced clinic incidents. Representatives of 9 of the 36 U.S. Attorney offices we surveyed believed that FACE had not affected violent or disruptive incidents in their districts for a variety of reasons. Three respondents, for example, explained that FACE had no effect because incidents in their districts were either rare or nonexistent. Another explained that incidents had already declined prior to FACE because of civil judgments and a tough state law covering interference at clinics. Twenty of the 36 U.S. 
Attorney office respondents cited weaknesses of FACE, although most of the 20 had also identified strengths of the act. Of those citing weaknesses, nine noted that penalties available under FACE are relatively weak. Some commented on weak misdemeanor penalties, and others compared FACE’s penalties to stiffer treatment available under other applicable statutes. Four respondents cited weaknesses related to FACE’s failure to clearly delineate the roles of law enforcement agencies. As one U.S. Attorney office representative stated, “Local law enforcement agencies get confused over when to call federal law enforcement and who retains what jurisdiction.” Another expressed the view that it would be helpful to have a state law to complement federal jurisdiction and give more alternatives to state and local police. All 10 of the police department respondents who said they were knowledgeable about FACE noted weaknesses in the act, with half focusing on weaknesses in its enforcement. One respondent explained that the day-to-day workings of FACE are left up to the local police, although, in the respondent’s view, the police have no authority to enforce it. Four of the 10 respondents expressed the view that FACE had not affected clinic incidents in their jurisdictions, although 3 explained that serious problems had already stopped before FACE was passed. Only 5 of the 42 clinic respondents indicated that FACE did not affect clinic incidents, and 3 expressed the belief that FACE actually led to more threatening and violent incidents. One respondent explained that the extreme fringe, frustrated by a decline in overall numbers of protests and protestors, has taken more threatening and violent actions. This view was shared by a representative of one of the national anti-abortion organizations we contacted, who believed that FACE has actually caused more extreme clinic violence because it has driven peaceful protestors away.
At the local level, representatives of most of the police departments we surveyed said their departments had taken steps to better prepare officers or clinics in their jurisdictions to respond to incidents since FACE was enacted. At the federal level, most judicial districts we surveyed had established abortion violence task forces and achieved positive results, according to representatives of U.S. Attorney offices. Representatives of most of the 15 police departments we surveyed reported their departments had been involved in a variety of efforts to respond to and reduce clinic incidents since FACE was passed. Most reported their officers received training about clinic incidents and that their departments conducted outreach or education efforts with clinics in their jurisdictions. In addition, close to half also told us of steps their departments had taken to prevent incidents from occurring at clinics. According to police department respondents, officers in nine of the departments we surveyed received training regarding abortion clinic violence, although the content and participants varied. The type of training differed by department and included such topics as a review of constitutional rights and applicable local ordinances, civil disobedience arrests, managing blockades, and dealing with protestors who go limp. Departments also differed regarding which officers were trained. For example, one respondent told us all sworn officers were trained, but another reported training only supervisors. Others said only officers in units that respond to clinic incidents received training. Departments also differed in when their training was provided. For example, one police department respondent told us that most patrol officers in his department were trained “five or six years ago.” On the other hand, another respondent told us uniformed officers and detectives in his department were trained annually. 
Twelve police department respondents said that since FACE was passed, they had conducted outreach or education with clinic staff about what to do in the event of violence or disruptions. Nine described efforts designed to improve communication with clinic staff, including five who said their departments had assigned specific officers to serve as liaisons with clinics. Seven respondents told us of their departments’ efforts to improve clinic security and described a variety of formal and informal measures. For example, one respondent told us that officers from his department go to clinics and offer physical and personal security advice. Another said that officers trained clinic staff on how to spot suspicious packages. In addition to the outreach and education efforts reported by police department respondents, seven told us their departments had taken special steps to prevent violence or disruption at clinics since the passage of FACE. Five reported they increased patrols at clinics during high-risk times, such as the anniversary of Roe v. Wade or on Saturdays, when the most demonstrators are present. One respondent told us that as a proactive measure, his captain recently began assigning two officers to every clinic in the jurisdiction during their hours of operation, posting one officer at each clinic’s front door and one at the back. Most of the U.S. Attorney offices we contacted said that their districts had an abortion violence task force, and almost all believed that these groups had produced positive results, most frequently in the area of increased communication and coordination with other agencies. Most districts formed their task forces around the same time, but the number of meetings held varied by district. According to most U.S. Attorney respondents, these meetings were typically attended by representatives of both federal and local law enforcement agencies. Representatives of 31 of the 36 U.S. 
Attorney offices we surveyed reported that their districts had an abortion violence task force. Eighteen of these respondents indicated their task forces were established in January 1995—the month in which the President instructed DOJ to direct U.S. Attorneys to immediately head such a task force. Ten respondents told us their districts had established task forces prior to 1995. Of the three respondents reporting that their task forces were established after January 1995, one said the task force was established in March 1995. The other two respondents said their task forces were established in early 1998. Twenty-nine of the 31 U.S. Attorney office respondents who said their districts had task forces told us they had seen particularly positive results from these groups. According to 25 of these respondents, their task forces resulted in increased communication or coordination within the law enforcement community or with clinics. For example, one respondent told us that in addition to significantly increasing all levels of communication, the task force also helped to coordinate federal enforcement efforts with state and local prosecutors to identify and implement the most effective response to a given situation. Nine respondents reported that their task forces established procedures for responding to clinic incidents. Four told us that a greater awareness of FACE or abortion clinics’ problems resulted from their task forces, and six said that fewer incidents occurred as a result of these groups. Responses from the U.S. Attorney offices we surveyed showed a wide range in the frequency of task force meetings. One respondent said that although her district had established a task force, it had not held any formal meetings. In contrast, 2 respondents said their task forces had met 15 times. The median number of times that task forces had reportedly met was four. U.S. 
Attorney representatives from the 31 districts with task forces said that task force meetings were typically attended by representatives of the U.S. Attorney’s office, the FBI, and the U.S. Marshals Service. Twenty-seven said that ATF representatives typically attend task force meetings, 29 said that local law enforcement representatives typically attend, and 14 said that clinic representatives typically attend. Twenty-nine of the U.S. Attorney respondents indicated their task forces have procedures for sharing information or coordinating efforts with federal law enforcement agencies, and 30 indicated they have procedures for doing the same with local law enforcement agencies. Clinic and U.S. Attorney office respondents generally viewed both local and federal law enforcement agencies favorably. Most clinic respondents were satisfied with the protection provided by local law enforcement, with their relationship with local law enforcement, and with the arrests that were made. Most were also satisfied with federal law enforcement regarding anti-abortion activities that took place at their clinics. However, several clinic respondents and representatives of three national abortion rights groups expressed some dissatisfaction with local and federal authorities. Most representatives of the clinics and U.S. Attorney offices we surveyed were satisfied with local law enforcement efforts regarding FACE during the 1996 to 1998 time period. All 42 clinic respondents told us that local law enforcement had been contacted regarding at least 1 type of incident. Most respondents were generally satisfied when we asked about their overall impressions of local law enforcement in terms of clinic protection, relationship with the clinic, and arrests, as well as their experiences regarding specific types of incidents. Most U.S. Attorney office representatives also reported general satisfaction with the local law enforcement agencies in their districts. 
Thirty-three of the 42 clinic respondents said they had been generally satisfied with the effectiveness of local law enforcement in protecting their clinics over the past 2 years. Most often, they pointed to their satisfaction with how local law enforcement responded to calls or incidents. Nineteen of the respondents’ comments fell into this category, which included statements about local law enforcement’s quick response, willingness to define limits for protestors, and strong presence during blockades and picketing. Sixteen noted law enforcement’s good or serious attitude, including remarks about understanding the severity of the problem or understanding clinics’ needs. Twelve respondents highlighted the proactive aspect of local law enforcement’s protection. For example, one respondent told us that the police had provided personal and physical plant security training to clinic staff, and another said that local law enforcement had participated in the clinic’s security planning. One respondent explained that a police officer is present at the clinic on the days that abortions are performed and when picketers are present. Another told us of a panic button that links the clinic to the police department. Ten respondents spoke of good communications or relationships with local law enforcement. Clinic respondents also reported general satisfaction with their relationship with local law enforcement and the appropriateness of arrests made over the past 2 years. Thirty-two of the 42 clinic respondents told us they were generally satisfied with their clinics’ relationships with local law enforcement. Of 35 respondents who answered our question about the appropriateness of arrests made by local law enforcement, 19 indicated that they were satisfied. Seven of the 42 respondents indicated that the question about arrests was not applicable to their clinics. 
Thirty-four clinic respondents said they had observed particularly positive aspects of local law enforcement actions, with half describing positive responses to calls or incidents. For example, one respondent told us that the police had intervened between clinic staff and demonstrators and helped bring about peaceful resolutions to incidents. Another explained that the police department had created a strong presence during blockades and picketing and sent a strong message that breaking the law would not be tolerated. Nine respondents described proactive steps local authorities had taken. For example, one respondent said that the police randomly drop by and check in with clinic staff. Another reported that the police had scheduled a meeting with representatives of both sides of the issue. When the anti-abortion faction did not attend, the police scheduled separate meetings with each side. Nine respondents described a good attitude on the part of local law enforcement, and eight talked about good communication or relationships between clinics and their local law enforcement agencies. For the most part, clinic respondents told us they were also satisfied when local law enforcement was contacted about specific types of incidents that occurred at their clinics during the past 2 years, although satisfaction varied by incident. For example, 8 of the 10 respondents who said local law enforcement had been contacted about invasions at their clinics in the past 2 years said they were generally satisfied with local law enforcement’s response; only 5 of the 10 who reported local law enforcement was contacted about stalking were generally satisfied. See table 2 for information on clinic respondents’ satisfaction when local law enforcement was contacted about different types of incidents. All of the police department representatives we surveyed indicated that their departments would respond to all of the types of incidents we asked about in our survey. 
One respondent noted that his department generally would not respond to picketing if it simply involved someone holding a sign on public property. All the representatives of the U.S. Attorney offices we surveyed expressed general satisfaction with the way in which at least some of the local law enforcement agencies protected clinics over the past 2 years. Thirty-two respondents said they were generally satisfied with the effectiveness of all or most local law enforcement agencies in protecting clinics in their districts. Four reported general satisfaction with some of their districts’ local law enforcement agencies. Although most representatives of clinics we surveyed told us they were generally satisfied with several dimensions of local law enforcement, some indicated they were dissatisfied with local law enforcement responses, including arrests. Representatives of two of the national abortion rights groups we contacted expressed concern over the uneven enforcement of FACE at the local level, as did a representative of one national anti-abortion group. Most of the 42 clinic respondents said they were generally satisfied with the effectiveness of local law enforcement in protecting their clinics over the past 2 years, yet 7 respondents told us that they were generally dissatisfied with this aspect of local law enforcement. Five of these respondents explained they were dissatisfied because of poor response on the part of local law enforcement, including slow response or even a lack of response. Respondents also cited other reasons for their dissatisfaction, such as response or enforcement varying by individual officer. Five clinic respondents said they were generally dissatisfied with their clinics’ relationships with local law enforcement. Again, most of these respondents (four) pointed to local law enforcement’s poor response as the cause of their dissatisfaction.
Of 35 respondents who answered our question about the appropriateness of arrests made by local law enforcement, 9 reported being dissatisfied. Reasons included local law enforcement failing to strictly enforce local ordinances or failing to make arrests when respondents believed arrests were warranted. Fourteen clinic respondents said they had observed particularly negative aspects of local law enforcement’s actions over the past 2 years, although 8 of these respondents also told us of particularly positive actions they had observed. Eight of the respondents providing negative comments described poor response or enforcement on the part of local law enforcement. For example, one respondent described an incident in which fuel was spread on the clinic’s floor, but no fire was set. Although the clinic had experienced arson twice before, the respondent said it took 2 calls and 25 minutes before the police, located 4 blocks away, responded. Another respondent told us that upon responding to incidents, officers had suggested the clinic would not have these problems if it closed. A representative of one of the national abortion rights organizations we contacted expressed the view that local authorities’ enforcement of FACE was relatively poor until the past year. In her opinion, as local authorities have witnessed the same anti-abortion activists repeatedly breaking the law, they have taken a more serious and professional approach to clinic violence. Representatives of the other two national abortion rights organizations with whom we spoke expressed concerns that local law enforcement has been uneven in its enforcement of FACE. The head of one of these groups stated that in some cities local law enforcement has done an excellent job, but in other locations local enforcement of FACE has been a problem. She believed that there are communities where local law enforcement lacks the commitment to enforce FACE.
A representative of one of the national anti-abortion organizations with whom we spoke also believed that FACE has not been evenly enforced. In his view, some police do not charge protestors or call in federal authorities if the police officer’s ideology is pro-life. In other cases, he believed that pro-life advocates are sometimes unfairly arrested for actions such as kneeling on a public sidewalk and praying. According to representatives of the police departments we surveyed, their departments enforce the law regardless of the personal views of their officers. Eleven of the 15 respondents indicated that police officers’ personal ideologies or religious beliefs about abortion did not interfere with their carrying out their duties when violent or disruptive incidents occurred at abortion clinics. Of the four who indicated that they had encountered problems, all said they had found ways to avoid these officers’ involvement with clinics. For example, one respondent told us of an officer who had refused to intervene at a clinic because of religious beliefs. The officer was subsequently disciplined and not allowed to respond to future clinic incidents. A respondent from another police department said that officers who have alerted the department that their religious beliefs would make it difficult to respond to a clinic have been assigned other duties. Most of the representatives of the clinics and U.S. Attorney offices we surveyed said that, overall, they were satisfied with federal law enforcement efforts. To a lesser extent, so were the representatives of police departments we surveyed. Most clinic respondents had observed particularly positive federal efforts that often involved good communication. Also, for the most part, clinic respondents said they were satisfied with their experiences with federal law enforcement regarding specific incidents at their clinics. 
Representatives of the three national abortion rights groups we contacted expressed positive views of federal authorities, but they also voiced some concerns. Thirty of the 42 clinic respondents said that they were generally satisfied with federal law enforcement regarding anti-abortion activities, and 30 said they had observed particularly positive aspects of federal law enforcement. Most often, respondents described good communication efforts, with 16 providing responses that fell into this category. For example, one respondent told us that federal law enforcement kept the lines of communication open by just calling to see how things were going at the clinic. Thirteen respondents made positive observations about proactive steps, such as federal agents helping the clinic establish its security system. Ten respondents cited responses to and investigations of incidents as positive aspects of federal law enforcement. For most types of incidents about which clinics contacted federal law enforcement during the past 2 years, clinic respondents said they were generally satisfied with federal law enforcement. Thirty of the 42 respondents told us that federal law enforcement had been contacted about at least 1 type of incident during the past 2 years. The relative number of clinics that were satisfied varied depending on the type of incident. For example, 9 out of the 12 who reported that federal law enforcement had been contacted about picketing said they were generally satisfied. However, only two of the five respondents who said federal authorities had been contacted about assaults said they were generally satisfied. Table 3 shows clinic respondents’ satisfaction with federal law enforcement when contacted about different types of incidents. Representatives of the police departments and U.S. Attorney offices we surveyed were generally satisfied with federal law enforcement.
Eight of the 15 police department respondents said their departments had called federal authorities regarding incidents at clinics in their jurisdictions, and 5 of these 8 told us they were generally satisfied with the support they received from federal law enforcement. The remaining three respondents said they were neither satisfied nor dissatisfied. Thirty of the 36 U.S. Attorney office respondents reported being generally satisfied with all or most federal law enforcement agencies regarding anti-abortion activities directed at clinics in their districts. Of the remainder, five said they were generally satisfied with some federal law enforcement agencies, and one did not know. Representatives of the three national abortion rights organizations we contacted voiced praise for, as well as some concerns about, federal authorities. According to the head of one group, federal efforts to identify and protect clinics and doctors most at risk have saved lives. She further pointed to improved federal law enforcement reaction to clinic violence since the 1995 Oklahoma City bombing. However, she expressed concern about some federal agents and her belief that some are on the anti-abortion side. She believes this situation has improved since she reported specific concerns to DOJ. The head of another group stated that FACE has saved lives and reduced violence where aggressively enforced, but it has not been uniformly and appropriately enforced. She believes this problem exists at all levels of enforcement, including federal law enforcement agencies and U.S. Attorney offices. She further criticized DOJ for not pursuing more FACE cases. A representative of the third group we contacted said that federal law enforcement did not aggressively enforce FACE in the first years following its enactment. However, as federal law enforcement’s experience with clinic violence grew, so did its effectiveness.
She observed that some federal agents have a strong commitment to enforce FACE regardless of their beliefs about abortion. Constitutional arguments have been raised in most of the reported FACE cases we identified, but they have ultimately proven unsuccessful. Constitutional challenges have included charges that FACE violates the freedom of speech and religion protections in the First Amendment. FACE creates both criminal penalties and civil remedies against those who use force, threats of force, or physical obstruction to interfere with persons obtaining or providing reproductive health services. We identified a total of 46 criminal and civil cases that were either completed or pending as of September 11, 1998. In 15 of the 17 criminal cases we identified, defendants pled guilty or were found guilty of FACE violations. Many of the 29 civil cases we identified resulted in civil remedies, including injunctive relief. Appendix VII contains summaries of these cases. The constitutionality of FACE has been challenged in various courts and on many grounds beginning on the day of its enactment. Constitutional arguments were raised in 24 of the 28 reported cases we identified, but the courts ultimately found FACE to be constitutional in every case. Although two district court decisions did hold FACE unconstitutional, they were reversed on appeal. To date, the U.S. Supreme Court has declined to review any of the U.S. courts of appeals’ decisions upholding FACE. Constitutional challenges were raised in all 8 of the reported criminal cases we identified and in 16 of the 20 reported civil lawsuits we identified. We could not identify whether constitutional challenges were raised in the unreported or pending cases because we had limited information on these cases. Opponents have challenged the constitutionality of FACE on a number of grounds. For example, they have argued that Congress lacked the authority under the Commerce Clause to pass such a statute.
This provision of the Constitution gives Congress the power to regulate interstate commerce. The courts have consistently held that the enactment of FACE was a valid exercise of the commerce power. The courts reasoned that because Congress rationally determined that violence at reproductive health facilities affects interstate commerce, Congress had the authority to regulate that activity. Some First Amendment challenges to FACE have been based on freedom of speech. Courts have held that FACE was “content neutral” because it did not outlaw conduct for expressing an idea but rather sought to protect safety and interstate commerce. Furthermore, the act explicitly states that nothing in it shall be construed to interfere with the exercise of protected First Amendment rights. Courts have also held that FACE was “viewpoint neutral,” as it sought to protect access to all reproductive health services, including both abortions and services connected to carrying a fetus to term. Also, arguments that FACE was unconstitutionally overbroad and vague and, thus, had a “chilling effect” on peaceful activities have been unsuccessful. Other First Amendment challenges have been based on the “Free Exercise of Religion” clause. Courts have determined that FACE has been applied neutrally towards all religions, as it sought only to punish violent, forceful, or threatening conduct without regard to expressive content or viewpoint. FACE has been challenged on grounds that it violated other constitutional amendments, too. Opponents have argued that the penalties imposed by FACE were excessive and, thus, violated the Eighth Amendment proscription against excessive fines. These challenges have consistently been dismissed because the claims were not “ripe,” that is, not ready for the courts to address. Courts have also rejected arguments that by enacting FACE, Congress exceeded its authority under the Fourteenth Amendment and, thus, usurped powers reserved to the states by the Tenth Amendment.
In 15 of 17 criminal cases, defendants pled guilty or were found guilty of FACE violations. Of the remaining two cases, one resulted in the defendant receiving pretrial diversion, and one case is ongoing. Criminal FACE cases have involved prosecution for activities ranging from nonviolent physical obstruction of clinic entrances to the use of force or threatening conduct. The criminal prosecutions we identified generally resulted in fines, incarceration, or both. The nature of the activity prosecuted and the sentence received varied considerably. For example, in one case a defendant was found guilty of throwing a bottle at a doctor’s car when the doctor attempted to enter the clinic property. The defendant was sentenced to 1 year in prison followed by 1 year of supervised release with the special condition that he stay at least 1,000 feet from any abortion clinic. The defendant was also ordered to pay restitution to the doctor for damage to the car. In another case, a defendant found guilty of fatally shooting a doctor and shooting two escorts—one fatally—was sentenced to life in prison without parole. Criminal FACE charges may be brought in conjunction with charges of a violation of another federal statute. For example, other federal statutes DOJ has used in conjunction with FACE include the arson and explosives statute, which, among other things, prohibits threatening to use fire and explosives to damage a building (18 U.S.C. 844); the statute prohibiting solicitation to commit a crime (18 U.S.C. 373); and the statute prohibiting the use of interstate commerce to communicate a threat (18 U.S.C. 875). In addition to charging FACE violations, one of the eight reported cases also included a federal charge for knowingly using and carrying a firearm during a crime of violence (in violation of 18 U.S.C. 924(c)). 
According to summary case information provided by DOJ, four of the nine unreported or pending cases included additional federal charges, such as a violation of the arson and explosives statute. We identified 29 civil lawsuits involving FACE—17 brought by DOJ against alleged FACE violators and 12 brought by private parties. In 14 of the 17 lawsuits DOJ brought against alleged violators, the courts awarded injunctive relief, damages, and/or civil penalties; in the remaining 3 lawsuits, no decision had been rendered as of September 11, 1998. The other 12 lawsuits were brought by private parties, including anti-abortion activists challenging the constitutionality of FACE and abortion clinics filing civil actions against alleged FACE violators. In one case, Greenhut v. Hand, the court noted that FACE was being invoked to penalize threats against an anti-abortion volunteer. Civil lawsuits initiated by DOJ have involved a range of offenses, including clinic obstruction, the use of physical force outside abortion clinics, and verbal threats to clinic staff and to physicians. Relief has included preliminary and permanent injunctions, damages, and civil penalties. Various remedies have been imposed depending on the nature of the activity litigated. For example, in 1 lawsuit where 35 defendants were charged with blocking the entrances to an abortion clinic for several hours, the court granted a preliminary injunction prohibiting the defendants from entering clinic property and later granted a motion for summary judgment and a permanent injunction. In another lawsuit, defendants stalked an abortion clinic doctor and his wife and gathered on a weekly basis near their home and chanted, shouted, and displayed signs protesting abortion. 
In this case, the court granted a preliminary injunction, which became permanent, prohibiting the defendants from demonstrating, congregating, or picketing within 45 feet of the intersection near the doctor’s home, coming closer than 15 feet of the doctor or his wife, or driving within 3 car lengths of their cars. We requested comments on a draft of this report from the Attorney General and the Secretary of the Treasury or their designees. DOJ provided us suggested clarifications and technical comments, which we incorporated into the report where appropriate. The Department of the Treasury provided written comments stating that it was unaware of any evidence that FACE has supplemented ATF’s role in investigating arson and bombing cases by giving the FBI jurisdiction to investigate abortion clinic violence, and that ATF’s response to arson and bombing incidents has not changed since the enactment of FACE. We did not intend to suggest that FACE changed ATF’s jurisdiction in bombing and arson incidents at abortion clinics. Nevertheless, FACE did give the FBI a role in these types of incidents, and we revised the report to clarify this point. Treasury’s comments also stated that ATF sees no advantage to the FACE statute when it applies to arson and explosives cases because a violation of FACE is usually a misdemeanor charge. This reiterates a point we addressed in the report that ATF views FACE penalties as weak for arson and explosives incidents that do not involve injury or death. We are sending copies of this report to the Chairman of your Subcommittee, the Chairmen and Ranking Minority Members of the Senate and House Committees on the Judiciary, the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight, the Attorney General of the United States, and the Secretary of the Treasury. Copies also will be made available to others upon request. 
The major contributors to this report are listed in appendix XIII. If you or your staff have any questions concerning this report, please contact me or Evi Rezmovic, Assistant Director, on (202) 512-8777. In order to obtain information on the first three research objectives, we surveyed representatives from three groups: abortion clinics, local police departments, and U.S. Attorney offices. We conducted structured telephone interviews with the first two groups and arranged for the Executive Office for United States Attorneys (EOUSA) to send a survey to the third group. To obtain views of clinic representatives, we contacted clinics that reportedly had experienced relatively high levels of violence or disruption prior to the passage of FACE. We believed that such clinics were in a position to be particularly affected by the act. To get a better perspective on how key parties viewed the effects and enforcement of FACE, we also surveyed representatives from local police departments and U.S. Attorney offices whose jurisdictions covered the locations of the clinics included in our study. Our staff, who had been trained in telephone interviewing skills, conducted the structured telephone interviews. The training covered general telephone interviewing techniques, as well as information specific to the surveys. To determine whether clinic and police department respondents believed that abortion clinic incidents had increased, decreased, or stayed the same since the passage of FACE in May 1994, we chose two specific time periods of equal length for respondents to compare. The first time period was the 2 years preceding FACE (June 1992 through May 1994), and the second was the most recent 2-year period at the time we began our interviews (April 1996 through March 1998). All survey questions that asked the respondent to characterize his or her response along a continuum utilized a three-point scale. 
For example, in question 3 on the clinic survey, the respondent was asked whether he or she was “very knowledgeable,” “moderately knowledgeable,” or “not knowledgeable” about what activities are legal or illegal under FACE. In question 6A1 on the same survey, the respondent was asked whether he or she was “generally satisfied,” “neither satisfied nor dissatisfied,” or “generally dissatisfied” with law enforcement’s response to picketing. We asked three national abortion rights groups to use data they had collected prior to FACE in order to help us identify clinics that had experienced relatively high levels of violence and/or disruption during the 2 years prior to the act’s passage. (See app. V for the names of these national groups.) In identifying clinics, representatives of these groups considered their 1993 and 1994 incident data on clinics that had provided such data to them, as well as their knowledge of incidents at other clinics. They agreed on 45 clinics as having “high violence” before FACE. Most of these clinics had experienced at least three different types of incidents in the 1- to 2-year period prior to FACE. At our request, they included reproductive health service facilities that perform abortions, but not doctors’ offices or hospitals. According to a national reproductive health organization and one of the national abortion rights groups we contacted, there are roughly 900 clinics in the country, but these estimates include some doctors’ offices. Because the clinics in our sample were selected judgmentally—and therefore subject to potential selection bias—and not selected using probability sampling from a known universe, our clinic survey results cannot be generalized either to all abortion clinics nationwide or to all abortion clinics that experienced high violence during the June 1992 through May 1994 period. The three national abortion rights groups faxed a joint letter to all abortion clinics in our sample. 
The letter alerted the clinics to the upcoming study and encouraged them to participate. We followed up with our own letter and then called clinic representatives to schedule a telephone interview. To ensure the anonymity of the clinic respondents, we discarded the cover page of each survey form upon completion of our study. Because the cover page was the only page that contained identifying information on respondents, its removal ensured that a link could not be made between respondents’ identities and their survey responses. We sought to interview the person who had the most knowledge of incidents that occurred at the clinic during the time periods of interest. We interviewed 15 clinic directors; 10 administrators; and 17 other representatives, including owners, presidents, managers, and security directors. On average, representatives with whom we spoke had been with the clinic for 12 years. We used a structured interview format to interview abortion clinic representatives. The interview included both close-ended and open-ended questions, and each interview lasted about 1 hour. Two of the 45 clinics selected had closed; our resulting sampling frame consisted of 43 clinics. We completed interviews with representatives from 42 of these 43 clinics, for a response rate of 98 percent. (The one nonrespondent clinic could not provide a staff member who had been there long enough to answer our questions.) We identified 40 local police departments that served the 42 clinics we surveyed. We developed 3 strata from which we selected our sample of 15: (1) departments with which clinic respondents were satisfied; (2) departments with which clinic respondents were dissatisfied; and (3) departments whose jurisdiction covered multiple clinics in our sample, where those clinics differed in their level of satisfaction with the department. 
We determined satisfaction on the basis of the clinic respondent’s answers to clinic survey questions regarding effectiveness in protecting the clinic, making appropriate arrests, and the clinic’s relationship with the department (questions 48, 49, and 50 from the clinic interview instrument). We selected all five of the police departments where clinic respondents reported dissatisfaction with local law enforcement, the two departments with jurisdiction over multiple clinics in our sample where local law enforcement received mixed ratings, and eight randomly selected departments where clinic respondents said they were satisfied with local law enforcement on all of the applicable questions. We completed interviews with all 15 of the local law enforcement agencies we selected, for a 100 percent response rate. As with the clinic interviews, we ensured local law enforcement respondents’ anonymity by discarding information that could be used to identify them, thereby severing the link between respondents’ identities and their survey responses. On average, the respondents we spoke with had been involved with handling abortion clinic violence and disruption for 9 years. We identified 36 U.S. Attorney offices with judicial districts that included the clinics we surveyed. EOUSA reviewed our instrument for appropriate language and clarity and e-mailed the survey to all 36 U.S. Attorney offices in our sample. It included a cover memorandum explaining the study and requesting that the appropriate person complete and return the survey to us. We telephoned nonrespondents to encourage participation and obtained a 100 percent response rate. Table VI.1 summarizes the selection of potential respondents and response rates obtained for all three surveys. In analyzing the three surveys, we computed descriptive statistics on the close-ended survey responses, and we conducted a systematic content analysis of the open-ended survey responses. 
For the content analysis of the open-ended responses, two staff members reviewed all the narrative responses to a particular question and mutually agreed on response categories. Then, two staff members, at least one of whom had not worked on developing the categories, independently placed responses into the appropriate response categories. Any discrepancies were discussed and resolved. The number of narrative response categories varied by question, as did the number of responses in each of the categories. In general, we have reported the response categories that were the most frequent or common. Because of the way we selected our samples, the results of our structured surveys are not generalizable to the universes of clinics, police departments, or U.S. Attorneys in the country. Reported responses to the surveys are illustrative rather than representative; statements represent only the views of the individual respondents. We took steps to minimize nonsampling errors. Draft questionnaires were designed by social science survey specialists and reviewed by representatives of organizations who were knowledgeable about both the subject matter and the terms used by the respondents. The three national abortion rights groups reviewed the clinic questionnaire, and EOUSA reviewed the instrument for the U.S. Attorneys. The clinic interview was pretested with two abortion clinic representatives, and the local law enforcement interview was pretested with two police departments. We kept in mind that the national abortion rights groups hold a position on the issue of abortion, and their input did not cause us to make any substantive changes to our instrument. However, the groups were in a position to offer useful advice on words and phrases that would be best understood by the abortion clinics. The groups also encouraged the participation of the abortion clinic representatives in the study, which reduced nonresponse bias. 
We took steps to minimize nonresponse bias by following up with potential respondents to encourage them to participate. We obtained a 100 percent response rate from the police department interviews and U.S. Attorney surveys and a 98 percent response rate from the clinic interviews. All data were double-keyed and verified after data entry, and computer analyses were double-checked against hand-tallies of key information. All computer programs were also checked by a second independent programmer. This appendix contains summaries of reported and unreported or pending cases brought under the Freedom of Access to Clinic Entrances Act of 1994 (FACE). The summaries of reported cases are based on the text of written decisions and additional information from the Department of Justice (DOJ) that updated the status of these cases. We based our summaries of unreported or pending cases on information provided by DOJ. We were not able to identify cases relating to actions brought by private parties in unreported or pending status because no central databases exist for identifying this information. 1. United States v. Bird, 124 F.3d 667 (5th Cir. 1997), cert. denied, 118 S. Ct. 1189 (March 9, 1998). The defendant, an abortion protestor, was charged under FACE with use of force and threat of force for throwing a bottle through a window of a car being driven by an abortion provider and making death threats. The jury returned a guilty verdict, and the district court sentenced the defendant to imprisonment for 1 year, followed by 1 year of supervised release with the special condition that he stay at least 1,000 feet from any abortion clinic. The defendant was also required to pay restitution to the doctor for damage to the car. On appeal, the defendant challenged the constitutionality of FACE; however, he did not contest his guilt under FACE. The court of appeals affirmed the district court’s opinion. 
It held that FACE was a legitimate regulation of interstate activity having substantial effect on interstate commerce; that the defendant lacked standing to advance a claim that FACE was unconstitutional because it protected certain relationships, but failed to protect others; and that FACE was neither overbroad nor vague. The court also held that the special condition that the defendant stay at least 1,000 feet away from abortion clinics did not violate the First Amendment. The Supreme Court denied a petition to review the court of appeals’ decision. 2. United States v. Brock, et al., 863 F. Supp. 851 (E.D. Wis. 1994), mandamus denied sub nom., Hatch v. Stadtmuller, 41 F.3d 1510 (7th Cir. 1994) (table) (unpublished order), aff’d sub nom., United States v. Soderna, 82 F.3d 1370 (7th Cir. 1996) cert. denied sub nom., Hatch v. United States, 117 S. Ct. 507 (1996). Six defendants were charged with violating FACE for physical obstruction of a Milwaukee, WI, clinic. The complaint was based on an affidavit that stated that the defendants blockaded both doors to the clinic with automobiles, to which they secured themselves using cement and steel devices. The defendants argued that FACE was constitutionally infirm because it was a content-based regulation of expressive activity and because it was vague and overbroad. They requested a jury trial. The district court held that FACE was not unconstitutional and that it was neither a content-based restriction of speech nor vague or overbroad. Although FACE itself was silent on the issue of whether a jury trial was required, the court determined that the defendants were not entitled to a jury trial because the maximum possible sentence constituted a “petty offense.” The defendants were convicted of violating FACE. Fines and incarceration terms of various lengths were imposed, the maximum being 6 months. The defendants appealed their convictions, raising a variety of constitutional questions. 
The court of appeals affirmed, holding that FACE did not exceed Congress’ constitutional authority to regulate interstate commerce; that FACE did not violate the defendants’ First Amendment rights; that FACE’s proscription against obstruction of facilities was not unconstitutionally vague, so as to violate the First Amendment; and that the defendants were not entitled to a jury trial. The Supreme Court denied a petition to review the court of appeals’ decision. 3. United States v. Unterburger and Olson, 97 F.3d 1413 (11th Cir. 1996), cert. denied, 117 S. Ct. 2517 (1997). The defendants were charged under FACE with physical obstruction of an abortion clinic in Lake Clark Shores, FL, for chaining themselves to the main entrance of the clinic. Because the defendants had no prior convictions under FACE and the alleged offense involved “exclusively a nonviolent physical obstruction,” the defendants faced a maximum prison term of 6 months and a maximum fine of $10,000. The defendants requested a jury trial, but the district court agreed with the magistrate judge that the charged offense was not sufficiently serious to trigger the constitutional right to a jury trial. Both defendants were convicted and sentenced to time served during pretrial detention and supervised release. The defendants appealed. The court of appeals affirmed, holding that FACE did not violate the First or Tenth Amendments, as FACE was both content and viewpoint neutral and was not unconstitutionally vague or overbroad. The court also held that a sentence of 6 months and a fine of $10,000 constituted a “petty offense,” and thus the defendants were not entitled to a jury trial. The Supreme Court denied a petition to review the court of appeals’ decision. 4. United States v. Weslin, et al., 964 F. Supp. 93 (W.D. N.Y. 1997), __ F.3d __, 1998 WL 537941 (2nd Cir. N.Y. Aug. 25, 1998). 
The 11 defendants, anti-abortion activists, were charged with violating FACE for blocking the entrances to a reproductive health facility in Rochester, NY. One of the defendants moved to dismiss the charges on the grounds that FACE violated the First Amendment. The defendant argued that FACE was an impermissible content-based regulation because it was aimed at speech and expressive conduct intended to prevent persons from providing or obtaining reproductive health services. The district court held that FACE did not violate the free speech or free exercise clause of the First Amendment and that FACE did not exceed Congress’ authority to regulate interstate commerce. Two of the 11 defendants were sentenced to 4 months in prison, and 2 other defendants were sentenced to 2 months in prison. The remaining seven defendants were sentenced to time served, supervised release, and community service. All of the defendants were ordered to pay $105 restitution for the damage to the clinic doors. The defendants filed an appeal. The court of appeals affirmed, holding that FACE was constitutional under the Free Speech clause of the First Amendment and the Commerce Clause. 5. United States v. Wilson, 880 F. Supp. 621 (E.D. Wis. 1995), rev’d 73 F.3d 675 (7th Cir. 1995), reh’g en banc denied, 1996 U.S. App. LEXIS 2870 (7th Cir. Feb. 21, 1996), cert. denied, 117 S. Ct. 47 (1996). Six defendants were charged under FACE with blockading the doors of a Milwaukee, WI, clinic using a method similar to the one used in Brock. The district court held that FACE exceeded Congress’ power to legislate under the Commerce Clause. The court also held that because FACE was invalid under the Commerce Clause, it violated the Fourteenth Amendment because it was an impermissible regulation of private conduct. The court of appeals reversed, holding that FACE was constitutional under the Commerce Clause as a regulation that substantially affected interstate commerce. 
The Supreme Court denied a petition to review the court of appeals’ decision. The bench trial concluded on May 27, 1997. On April 30, 1998, the court found the defendants guilty of violating FACE as charged. One of the six defendants was sentenced to 167 days’ confinement. No jail time was imposed at sentencing for the other five defendants. All the defendants were ordered to pay restitution to the clinic in the total amount of $1,759.04. Two of the defendants filed appeals. 6. United States v. Wilson and Hudson, __ F.3d __, 1998 WL 452342 (7th Cir. Wis. Aug. 6, 1998). The defendants were convicted on April 24, 1997, under FACE and conspiracy to commit a violation of FACE, for positioning themselves inside vehicles and blocking the front and rear entrances to the Wisconsin Women’s Health Care Center. This was the second obstruction at the same clinic, see United States v. Wilson, 73 F.3d 675 (7th Cir. 1995). One defendant was sentenced to 120 days in prison and ordered to pay a fine of $1,500 and restitution of $454.97. The other defendant was sentenced to 24 months in prison and ordered to pay a fine of $3,000 and restitution of $454.97. Additionally, he was ordered to serve 3 years’ supervised release following incarceration. As a special condition of his supervised release, he was also required to participate in a mental health treatment program. The defendants appealed. The court of appeals affirmed the district court’s opinion. It held that FACE did not violate the First Amendment rights to freedom of speech and freedom of association. The court also held that the defendants’ conspiracy convictions did not violate the First Amendment and that the district court did not abuse its discretion by requiring one defendant to participate in a mental health program as a condition of supervised release. 1. United States v. Hill, 893 F. Supp. 1034 (N.D. Fla. 1994), 893 F. Supp. 1039 (N.D. Fla. 1994), 893 F. Supp. 1044 (N.D. Fla. 1994), 893 F. Supp. 1048 (N.D. Fla. 
1994). On July 29, 1994, a doctor and two escorts were shot while outside the Ladies Center clinic in Pensacola, FL. The doctor and one escort were killed, and the other escort was wounded. The defendant was charged with intentionally injuring and interfering with individuals who had been providing reproductive health services. He was also charged with knowingly using and carrying a firearm during a crime of violence for which he may be prosecuted in federal court, in violation of 18 U.S.C. 924(c). The defendant moved to dismiss the indictment, alleging that FACE was unconstitutional and that its vagueness precluded escorts from being considered “providers of reproductive services.” The district court held that Congress had the power under the Commerce Clause to enact FACE. The court also held that FACE, in light of its purpose and legislative history, included a doctor’s escort in the definition of “provider,” at least where, as here, the escort was performing his or her duties at the time of the alleged violation of the act. In a subsequent decision, the district court entered an order granting the government’s motion to exclude evidence offered by the defendant on the “necessity” or “justification” defense (which excuses criminal conduct committed in order to prevent an imminent greater harm). The court held that the defense could not be applied to justify averting acts that have expressly been declared by the Supreme Court to be constitutional and legally protected. The defendant was convicted of violating FACE with death resulting and was sentenced to life without parole. A local murder prosecution resulted in imposition of the death penalty. The defendant withdrew his federal appeal. 2. United States v. Lucero and Lacroix, 895 F. Supp. 1419 (D. Kan. 1995), 895 F. Supp. 1421 (D. Kan. 1995). 
The defendants were charged with interfering by physical obstruction with persons obtaining or providing reproductive health services in violation of FACE after blocking the entrances to a clinic in Wichita, KS, where abortions were performed. The defendants’ conduct amounted to “exclusively a nonviolent physical obstruction,” subjecting the defendants to a maximum term of imprisonment of 6 months and a maximum fine of $10,000 for the first offense. The United States moved for a nonjury trial of the defendants. The district court held that the maximum penalty that could be imposed on the defendants exceeded the statutory definition of “petty offense”—one that carries a maximum penalty of no more than 6 months’ imprisonment and a $5,000 fine—and thus, the defendants were entitled to a jury trial. The defendants moved for dismissal of the charges on the ground that FACE was unconstitutional under the Commerce Clause and the First Amendment. The district court held that FACE was not unconstitutional, as it was content and viewpoint neutral and Congress acted within its power to regulate interstate commerce. Both defendants were found guilty after a jury trial, and each was sentenced to 6 months’ incarceration and 1 year’s supervised release. The nine summaries in this section were prepared by DOJ. 1. United States v. Blackburn (D. Mont.). The defendant was indicted on May 19, 1995, for making threatening telephone calls to numerous clinics that provided abortion services. The defendant was charged with six counts of violating FACE and six counts of violating 18 U.S.C. 844(e), threatening to use fire and explosives to damage a building. On October 26, 1995, the defendant pled guilty to one count of FACE and one count of 844(e). The defendant was sentenced on February 21, 1996, to 5 years’ probation with mandatory psychological treatment. 2. United States v. Cabanies (W.D. Okla.). 
The defendant pled guilty to one FACE violation for entering a clinic in Warr Acres, OK, on January 24, 1998, and physically assaulting the clinic’s only doctor. Prior to entering the clinic, the defendant had been protesting outside the building. The defendant was sentenced to 3 months in prison to be followed by 3 years’ supervised release with a special condition of 90 days’ home detention. The defendant was also ordered to pay $700 restitution to the doctor for medical expenses. 3. United States v. Embry (W.D. Ky.). The defendant pled guilty to telephoning a bomb threat to a Women’s Choice Clinic in Indianapolis, IN, on January 4, 1994, in violation of FACE. The defendant was sentenced to 2 years’ probation and ordered to perform 100 hours of community service. 4. United States v. Hart (E.D. Ark.). The defendant was charged with two FACE violations for abandoning two Ryder trucks in front of the Little Rock Family Planning Services and Women’s Community Health Center clinics on September 25, 1997, in such a manner as to communicate a credible bomb threat to the clinics’ staff. Each truck obstructed vehicular access to the respective clinic’s parking areas. Several businesses and residences near the clinics’ locations were evacuated for several hours while bomb and arson experts investigated the trucks. 5. United States v. Lang (N.D. Ala.). The defendant was charged with a FACE violation after threatening to kill a doctor during a telephone call to a TV reporter on January 8, 1995, in Huntsville, AL. The defendant received pretrial diversion on February 24, 1995. 6. United States v. Mathison (E.D. Wash.). The defendant was indicted in Yakima, WA, for making a series of threatening calls, some interstate, to an anti-abortion counseling and referral service on December 31, 1994. The defendant was charged with a violation of FACE and a violation of 18 U.S.C. 875, use of interstate commerce to communicate a threat. 
In these calls, the defendant stated he had a gun and threatened to kill as many office workers as he could find. The defendant pled guilty to the FACE count on June 6, 1995. Sentencing on August 31, 1995, resulted in 5 years’ probation with 30 days’ home detention and 10 weekends’ confinement, as well as mandatory substance abuse treatment. The defendant did not appeal his conviction. 7. United States v. McDonald (D. N.M.). The defendant pled guilty on June 24, 1996, to chaining clinic doors shut on January 2, 1995, and setting fire to the same clinic on February 24, 1995, in violation of FACE and arson statutes. The defendant was sentenced to 30 months in prison on October 22, 1996. 8. United States v. Priestley (D. Or.). The defendant pled guilty on September 27, 1995, to an unrelated arson charge in Eugene, OR, as well as a threat to commit arson at a clinic in Grants Pass, OR, on January 19, 1995, in violation of FACE. The defendant was sentenced to 58 months in prison on April 9, 1996. 9. United States v. McManus (D. Mass.). The defendant pled guilty to two counts of FACE and two counts of 18 U.S.C. 844(e), threatening to use fire and explosives to damage a building, for making threatening telephone calls on May 21, 1996, to the Planned Parenthood in Worcester, MA, and to the Repro Associates in Brookline, MA. On March 24, 1997, the defendant was sentenced to 27 months in prison and 2 years’ supervised release. 1. American Life League v. Reno, 855 F. Supp. 137 (E.D. Va. 1994), aff’d, 47 F.3d 642 (4th Cir. 1995), cert. denied, 116 S. Ct. 55 (1995). The plaintiffs brought an action challenging the constitutionality of FACE. They argued that Congress lacked the authority to enact FACE. They also argued that FACE violated the Free Exercise of Religion clause of the First Amendment, was unconstitutionally vague, and was overbroad because it prohibited protected First Amendment expression. The district court dismissed the case. The plaintiffs appealed. 
The court of appeals affirmed the dismissal. It concluded that FACE was within Congress’ authority to regulate commerce because Congress rationally concluded that reproductive health services affect interstate commerce and that FACE was reasonably adapted to permissible ends. The court also concluded that FACE did not violate the First Amendment’s Free Speech Clause because FACE was content and viewpoint neutral and targeted unprotected expression. The court ruled that the liquidated damages provision did not subject anyone to damages caused by protected expression and was therefore constitutionally valid. It also concluded that FACE was neither overbroad nor vague and did not violate the Free Exercise clause of the First Amendment. The Supreme Court of the United States denied a petition to review the court of appeals’ decision. 2. Cheffer v. Reno, No. 94-611-CIV-ORL-18 (M.D. Fla. July 26, 1994), aff’d, 55 F.3d 1517 (11th Cir. 1995). Anti-abortion activists brought suit challenging the constitutionality of FACE. The district court dismissed the plaintiffs’ claims. The court of appeals affirmed, finding that FACE withstood the plaintiffs’ constitutional challenges. Specifically, the court found that FACE constituted a valid exercise of Congress’ power under the Commerce Clause and did not infringe on state sovereignty under the Tenth Amendment. The court also found that FACE was not content or viewpoint based, was not unconstitutionally vague or overbroad, did not violate the appellants’ First Amendment rights, and did not threaten any of their lawful expressive activities. The court declined to review the plaintiffs’ claim that the act violated the Eighth Amendment by imposing excessive fines on the basis that the claim was not ripe, that is, not ready for the court to address. 3. Cook v. Reno, 859 F. Supp. 1008 (W.D. La. 1994), vacated, 74 F.3d 97 (5th Cir. 1996). 
The plaintiffs were anti-abortion demonstrators who sought to enjoin the use and implementation of FACE. The district court denied the plaintiffs’ request for a preliminary injunction, finding that they did not have a substantial likelihood of success on the merits. In its ruling, the district court rejected all of the plaintiffs’ constitutional challenges and found FACE narrowly tailored to its purpose of curbing violence without burdening freedom of speech. The government moved to dismiss the plaintiffs’ suit for lack of standing, the jurisdictional requirement that determines whether plaintiffs are entitled to have a court decide the merits of their case. According to the government, the plaintiffs’ complaint was carefully worded to refer only to peaceful, nonconfrontational activities. Thus, the government asserted that the plaintiffs failed to allege that they intended to participate in any activity that would violate FACE. The district court, concurring with the government’s reading of the plaintiffs’ complaint and finding that FACE was constitutional, dismissed the plaintiffs’ suit for lack of standing. The plaintiffs appealed this ruling. The court of appeals held that the district court improperly considered the merits of the demonstrators’ claim when deciding the issue of standing and rejected the plaintiffs’ request that the matter be remanded to a different trial judge. The court of appeals vacated the district court’s judgment and remanded the suit for further proceedings, with the plaintiffs to be provided an opportunity to amend their complaint. 4. Hoffman v. Hunt, 923 F. Supp. 791 (W.D. N.C. 1996), rev’d 126 F.3d 575 (4th Cir. 1997), cert. denied, 118 S. Ct. 1838 (May 26, 1998). Anti-abortion activists brought an action seeking a judgment that a North Carolina statute prohibiting obstruction of health care facilities violated their First Amendment rights. 
The district court determined that North Carolina law enforcement officers threatened the plaintiffs with arrest for attempting to distribute literature to persons entering clinics and for merely being present at clinics. The plaintiffs later amended their complaint to add a claim challenging the constitutionality of FACE. The district court held that the North Carolina law violated the First Amendment because it was unconstitutionally vague and overbroad, both on its face and as applied. Similarly, it held that FACE was impermissibly vague and overbroad and that Congress lacked the authority to enact FACE under the Commerce Clause, as not all forms of reproductive health services affect interstate commerce. The court of appeals reversed the district court’s decision. It held that although the North Carolina statute, on its face, was neither vague nor overbroad, law enforcement officers exceeded their authority in threatening the plaintiffs with arrest for attempting to distribute literature to persons entering clinics and merely being present at clinics. The court of appeals also held that Congress acted within its authority under the Commerce Clause in enacting FACE and that the statute did not violate the First Amendment. The Supreme Court of the United States denied a petition to review the court of appeals’ decision. 5. Terry v. Reno, No. 94-1154 (D.D.C. Nov. 21, 1995), aff’d, 101 F.3d 1412 (D.C. Cir. 1996), cert. denied, 117 S. Ct. 2431 (1997). The plaintiffs were anti-abortion activists who filed suit challenging the constitutionality of FACE both on its face and “as applied or threatened to be applied” to them. The district court granted the government’s motion for judgment on the pleadings. The court ruled that Congress had the power to enact the statute under the Commerce Clause and that it did not violate the First Amendment. 
The district court also ruled that FACE did not violate principles of due process or equal protection and that the plaintiffs’ Eighth Amendment claims were not ripe. The court of appeals affirmed the judgment of the district court. It held that in enacting FACE, Congress did not exceed its Commerce Clause power, that the statute was compatible with freedom of speech under the First Amendment, and that FACE was not overbroad or unconstitutionally vague. The Supreme Court of the United States denied a petition to review the court of appeals’ decision. 6. United States v. Dinwiddie, 885 F. Supp. 1286 (W.D. Mo. 1995), 885 F. Supp. 1299 (W.D. Mo. 1995), aff’d in part, remanded in part, 76 F.3d 913 (8th Cir. 1996), cert. denied, 117 S. Ct. 613 (1996). The Attorney General brought a civil action seeking a temporary restraining order and permanent injunction alleging that an abortion protestor’s conduct directed at a Kansas City, MO, abortion clinic violated FACE. The district court found that the defendant violated FACE by obstructing, using physical force against, and threatening to use physical force against a number of Planned Parenthood’s patients and members of its staff. The court issued a permanent injunction prohibiting the protestor from being within 500 feet of an entrance of any facility in the United States that provides reproductive health services except for the purposes of engaging in legitimate personal activity that could not be remotely construed to violate the statute. On appeal, the defendant argued that the “motive requirement,” which limits the statute’s application to those who obstruct, threaten, or use force “because [that person] is or has been, or in order to intimidate [such person] from, obtaining or providing reproductive health services,” transformed FACE into a content-based statute, as it punished only abortion-related expressive conduct.
The court ruled that this type of restriction was quite common and prevented random crimes committed in the vicinity of abortion clinics from being federalized. The court of appeals held that FACE was within the commerce power of Congress, was not inconsistent with the First Amendment, and was neither overbroad nor vague. In ruling that FACE was not vague, the court articulated definitions for several terms in the statute. It said the following nonexhaustive and nonconclusive factors can be used to determine whether a statement constitutes a threat: the reaction of the recipient and other listeners to the statement; whether the statement was communicated directly to the victim; whether similar statements had previously been made to the victim; and whether the victim had a reason to believe the speaker had a propensity to engage in violence. It also upheld the permanent injunction, with some modifications, ruling that portions of the injunction were inconsistent with the First Amendment, such as the prohibition of certain types of nonthreatening speech and other forms of expression. However, it said a permanent injunction that is more limited in scope would be constitutional. The Supreme Court of the United States denied a petition to review the court of appeals’ decision. 7. Woodall v. Reno, 47 F.3d 656 (4th Cir. 1995), cert. denied, 115 S. Ct. 2577 (1995). The plaintiffs, a demonstrator and an anti-abortion women’s organization, alleged that they pray peacefully in front of abortion clinic entrances and nonviolently discourage access to the entrances. The plaintiffs raised a challenge on constitutional grounds to FACE. The district court dismissed their complaint and the plaintiffs appealed. The court of appeals rejected the plaintiffs’ claim that FACE violated the First Amendment or was vague and overbroad and affirmed on the reasoning of the opinion in American Life League, a decision the court handed down on the same day as Woodall.
The plaintiffs also argued that FACE was unconstitutional because it allowed the Attorney General to seek injunctive relief if he/she had reasonable cause to believe that a person might be injured by conduct violating FACE, and thus it constituted prior restraint. Because they were not subject to an injunction under FACE at the time, however, the court ruled that their claim was being raised prematurely. The Supreme Court of the United States denied a petition to review the court of appeals’ decision. 1. Council for Life Coalition v. Reno, 856 F. Supp. 422 (S.D. Cal. 1994). The plaintiffs brought an action for declaratory and injunctive relief seeking to enjoin the enforcement of FACE on a variety of constitutional and statutory grounds. The court held that FACE did not infringe the plaintiffs’ rights under the First and Fifth Amendments, and Congress had full authority to enact FACE under the Commerce Clause. The defendant’s motion to dismiss the complaint was granted. The plaintiffs’ motion for a preliminary injunction was denied because the plaintiffs failed to state a claim upon which relief could be granted. 2. Greenhut v. Hand, 996 F. Supp. 372 (D. N.J. 1998). The plaintiff was a volunteer for an anti-abortion organization. The defendant left a telephone message at the plaintiff’s residence that stated “Hello, Janet. Get your murderers away from abortion clinics now or you will be killed.” About 1 hour and 15 minutes later, the defendant left a second message that stated, “Janet, get your pro-lifers away from our clinics or we will kill you.” Criminal charges were brought against the defendant and on December 11, 1995, she pled guilty to one count of making terroristic threats in violation of a New Jersey statute. Subsequently, the plaintiff filed this civil action seeking relief against the defendant under FACE. The district court noted that FACE was being invoked to penalize threats directed against an anti-abortion volunteer. 
The defendant contended that the plaintiff had not satisfied two elements under FACE; namely, the plaintiff was not providing “reproductive health services” and the defendant did not act with the requisite intent. The district court held that FACE covered the plaintiff’s activities, since her organization provided emotional support and guidance to pregnant women, and other courts ruled that FACE was not limited to medical services. The court also held that the defendant had the requisite intent to impede, interfere with, or intimidate the plaintiff from furnishing reproductive health services. The court awarded the plaintiff $10,000 in statutory damages under FACE. 3. Lucero v. Trosch, 904 F. Supp. 1336 (S.D. Ala. 1995), 928 F. Supp. 1124 (S.D. Ala. 1996), aff’d in part, vacated in part, and remanded, 121 F.3d 591 (11th Cir. 1997). A physician and a health care clinic sued an anti-abortion activist for violation of FACE and for private nuisance based on statements the defendant made to the physician on a television show at which they appeared together as guests. The defendant moved to dismiss the complaint on the grounds that it failed to state a claim upon which relief could be granted. The court held that a reasonable jury could have found that the anti-abortion activist’s statements to the physician that he “should be dead” and that the activist would kill the abortion doctor if he had a gun in his hand were threats of force for purposes of FACE even though the speaker did not expressly tell the physician that he was going to kill him at some future time. The court also held that FACE was not unconstitutional and that statements made during the television show did not constitute actionable private nuisance to the physician’s clinic under Alabama law. In a subsequent decision, the district court held that the activist’s statements did not constitute “threats of force” that were violative of FACE. 
The physician and health care clinic also sued abortion protestors for their protest activities held outside the clinic. The court of appeals held that (1) the provisions of a preliminary injunction enjoining the defendants from congregating, picketing, praying, loitering, patrolling, demonstrating, or communicating with others orally, by signs, or otherwise, within 25 feet of the clinic did not seem unreasonable and did not burden speech more than necessary to preserve the patients’, doctors’, and staff’s right to enter the clinic; (2) a provision of a preliminary injunction enjoining the defendants from approaching, congregating, picketing, patrolling, demonstrating, or using bullhorns or other sound amplification equipment within 200 feet of the residences of the clinic’s staff operated as a generalized restriction on protesting and thus was unconstitutional under the First Amendment; (3) a provision enjoining the defendants from blocking or attempting to block, barricade, or obstruct the entrances, exits, or driveways of the residences of the clinic staff, and inhibiting or impeding or attempting to impede the free ingress and egress of persons to any street providing the sole access to the residences of clinic staff, did not burden speech more than was necessary to serve the state’s significant interest in promoting the free flow of traffic on public streets; and (4) a provision enjoining the defendants from knowingly being within 20 feet of any person seeking to obtain or provide clinic services was unconstitutional because it burdened speech more than was necessary to serve the significant government interests. On the basis of these rulings, the case was remanded so that the district court could revise the preliminary injunction. 4. Milwaukee Women’s Medical Center, Inc., and United States v. Brock et al., 1998 WL 228158 (E.D. Wis. April 30, 1998).
This civil lawsuit arose out of the Milwaukee Clinic blockade for which the defendants were criminally prosecuted in United States v. Brock, 863 F. Supp. 851 (E.D. Wis. 1994), mandamus denied sub nom., Hatch v. Stadtmuller, 41 F.3d 1510 (7th Cir. 1994) (unpublished order), aff’d sub nom., United States v. Soderna, 82 F.3d 1370 (7th Cir. 1996), cert. denied sub nom., Hatch v. United States, 117 S. Ct. 507 (1996). This was DOJ’s only FACE lawsuit prosecuted both criminally and civilly. The clinic filed a civil action against the defendants seeking (1) a declaration that the defendants violated FACE, (2) injunctive relief, and (3) statutory damages. The parties agreed to stay the matter pending resolution of the criminal proceedings. On December 20, 1994, approximately 1 month after the criminal convictions were obtained, DOJ intervened in the civil FACE lawsuit. The suit sought a declaratory judgment, a permanent injunction enjoining the defendants from blocking access to the clinic, and $5,000 in statutory damages against each defendant as well as separate awards for punitive damages. The presiding district court judge stayed this case for almost a year pending the outcome of appellate and Supreme Court review of his decision in United States v. Wilson, 880 F. Supp. 621 (E.D. Wis. 1995), a criminal FACE prosecution in which he had declared the statute unconstitutional. The Seventh Circuit ultimately reversed the district court’s decision, and the Supreme Court denied a petition to review the court of appeals’ decision. United States v. Wilson, 73 F.3d 675 (7th Cir. 1995), cert. denied, Wilson v. United States, 117 S. Ct. 47 (1996).
The court (1) granted summary judgment and issued a declaratory judgment in the clinic’s and the government’s favor against six of eight defendants stating that the defendants violated FACE; (2) awarded compensatory damages in the total amount of $5,000, for which the defendants were each jointly and severally liable; and (3) issued a permanent injunction enjoining the defendants from rendering impassable the entry to and exit from the clinic or rendering passage to or from the clinic unreasonably difficult or hazardous. The court rejected the claim for punitive damages, holding that the peaceful obstruction of entrances did not warrant the imposition of punitive damages. At the time of our review, the case remained pending against the other two defendants. 5. Planned Parenthood of the Columbia/Willamette, Inc. et al. v. American Coalition of Life Activists, et al., 945 F. Supp. 1355 (D. Or. 1996). The plaintiffs filed suit against the defendants alleging violations of FACE and the Racketeer Influenced and Corrupt Organizations (RICO) Act and a similar provision of Oregon law, the Oregon Racketeer Influenced and Corrupt Organizations Act (ORICO). The individual plaintiffs were doctors who performed abortions; the two corporate plaintiffs operated clinics and provided health services, including abortions. The defendants included associations that oppose abortions and individuals from the associations. The plaintiffs alleged that the defendants conspired to violate FACE by intending to injure, threaten, and intimidate the plaintiffs through the dissemination of posters that accused individual abortion providers of murder and provided their descriptions, addresses, and phone numbers. The plaintiffs alleged that the defendants violated FACE by threatening, injuring, and intimidating them because they provided reproductive health services.
The district court held that (1) the defendants were subject to personal jurisdiction in Oregon, (2) FACE was within Congress’ power under the Commerce Clause, (3) FACE did not violate the First Amendment, and (4) the plaintiffs adequately stated RICO and ORICO claims against all but one of the defendants. 6. Planned Parenthood of Southeastern Pennsylvania v. Walton, 949 F. Supp. 290 (E.D. Pa. 1996), 1997 WL 734012 (E.D. Pa. Nov. 14, 1997), 1998 WL 88373 (E.D. Pa. Feb. 12, 1998). The plaintiff, a reproductive counseling association, brought action under FACE against a number of anti-abortion activists for obstructing access to the plaintiff’s Philadelphia, PA, clinic. The district court held FACE to be constitutional. It concluded that (1) FACE did not violate the First Amendment, (2) Eighth Amendment Cruel and Unusual Punishment claims were not ripe, and (3) enactment of FACE was within Congress’ authority under the Fourteenth Amendment and the Commerce Clause. In a subsequent action, the defendants challenged the plaintiff’s ability to bring suit under FACE, claiming that they lacked standing. The district court held that under the plain language of FACE, the plaintiff corporation qualified as a “person involved in providing or seeking to provide . . . services” within the meaning of the statute and thus had standing to bring an action under FACE. On February 12, 1998, the court granted the plaintiff’s motion for summary judgment, granted a permanent injunction, and awarded statutory damages. 7. Riely v. Reno, 860 F. Supp. 693 (D. Ariz. 1994). The plaintiffs filed suit to challenge the constitutionality of FACE. The defendants moved for dismissal on the grounds that the plaintiffs’ claims were not ripe for review and on the alternative grounds that their complaint failed to state a claim upon which relief could be granted. The district court found that the plaintiffs failed to state a claim upon which relief could be granted. 
The court found (1) that Congress acted within its authority under the Commerce Clause when it enacted FACE, (2) that FACE did not impermissibly regulate protected expression or burden religion, (3) the plaintiffs failed to show that FACE was vague or overbroad, (4) the punishments imposed and statutory damages allowed by FACE did not violate the Eighth Amendment prohibition against cruel and unusual punishment and excessive fines, and (5) finally, having found that FACE did not violate the First, Fourth, Fifth, Eighth, or Tenth Amendments, that the enforcement of FACE by state officials did not violate the Fourteenth Amendment. 8. United States v. Lindgren, et al., 883 F. Supp. 1321 (D. N.D. 1995). The Attorney General brought a civil action against abortion protestors alleging that the defendants violated FACE during their anti-abortion efforts relating to a clinic in Fargo, ND. The district court found that the protestors blockaded the clinic using an immobilized car with people attached to it and made verbal threats to clinic staff members on several occasions. The court issued a preliminary injunction in light of the substantial probability of success on claims of FACE violations. The injunction prohibited, among other things, one defendant from coming within 100 feet of the clinic, its staff, and their homes, and the other defendants from blocking the clinic or entering onto the clinic’s property. The preliminary injunction was made permanent by agreement of the parties. 9. United States v. Lynch and Moscinski, No. 95 Civ. 9223 (S.D. N.Y. Feb. 26, 1996) (issuing permanent injunction), aff’d 1996 U.S. App. LEXIS 32729 (2d Cir. Dec. 11, 1996), cert. denied, 117 S. Ct. 1436 (1997), 952 F. Supp. 167 (S.D. N.Y. 1996) (dismissing criminal contempt charges). The Attorney General filed a lawsuit alleging that the defendants violated FACE by blocking access to an abortion clinic in Dobbs Ferry, NY. 
The court issued a permanent injunction prohibiting the defendants and anyone acting in concert with them from impeding or obstructing access to the clinic. The defendants contended on appeal that the district court should have accepted a defense to the injunction based upon “natural law.” Specifically, the defendants argued that the FACE statute protected the taking of innocent human life and was therefore contrary to natural law. The court of appeals affirmed the judgment of the district court, declining to invalidate FACE on the basis of natural law principles. The Supreme Court denied a petition to review the court of appeals’ decision. DOJ subsequently secured a civil contempt finding and sought a criminal contempt finding, which was rejected by the district court. DOJ appealed the district court’s decision on the criminal contempt motion. That appeal was pending before the Second Circuit. 10. United States v. McMillan, 946 F. Supp. 1254 (S.D. Miss. 1995). The Attorney General filed a civil action against the defendant, the founder and executive director of Christian Action Group, an anti-abortion organization, alleging three instances of threats and obstruction. The court denied the defendant’s motion to dismiss the lawsuit on the grounds that FACE was constitutionally infirm. It found that Congress validly enacted FACE pursuant to its powers under the Commerce Clause and Section 5 of the Fourteenth Amendment. The court found that the defendant endorsed the use of force and violence as a means to protest against abortion. The court further found that the plaintiff had a substantial likelihood of showing that the defendant has committed three violations of FACE. The court granted a preliminary injunction prohibiting the defendant from, among other things, being within 25 feet of the Jackson Women’s Health Organization. 
The parties agreed to a permanent injunction incorporating the terms of the preliminary injunction and adding a 15-foot buffer zone at a second clinic. The Attorney General moved for civil contempt for a violation of the injunction at a second clinic. No decision had yet been issued on this matter. 11. United States v. Roach, et al., 947 F. Supp. 872 (E.D. Pa. 1996). The Attorney General filed a civil action against 35 defendants, alleging that they blocked clinic entrances at the Reproductive Health and Counseling Center in Upland, PA, in violation of FACE. The district court held that Congress enacted FACE pursuant to its authority under the Commerce Clause and the Fourteenth Amendment and that the statute did not chill First Amendment freedom of speech or religion as it was enacted by Congress or as applied in this case. The court granted a preliminary injunction prohibiting the defendants and anyone acting in concert with them from, among other things, entering or remaining on the private property of the clinic. On May 5, 1998, the court granted the motion for summary judgment and granted a permanent injunction. 12. United States v. Scott, 919 F. Supp. 76 (D. Conn. 1996), 958 F. Supp. 761 (D. Conn. 1997), 975 F. Supp. 428 (D. Conn. 1997). The Attorney General and the State of Connecticut filed a civil action alleging that the defendants repeatedly used force, threats of force, and physical obstruction against the staff, escorts, clients, and companions of clients at a reproductive health facility located in Bridgeport, CT. The defendants moved to dismiss on grounds that FACE was unconstitutional. The district court held that FACE was a constitutional exercise of Congress’ authority under the Commerce Clause. 
The court subsequently ruled that FACE was constitutional under the First Amendment; that the United States and Connecticut did not violate the dual sovereignty doctrine by jointly filing action; that FACE was not overbroad or vague; that injunctive relief was constitutional; and that one of the defendants, Stanley G. Scott, had violated FACE. The court issued a permanent injunction prohibiting Scott from, among other things, approaching within 15 feet of the clinic’s front entrance, coming within 5 feet of any person providing or receiving reproductive health services who indicated that he/she wished to be left alone, or coming within 5 feet of any vehicle containing such a person. The United States obtained one finding of civil contempt against Scott for which he was assessed $200. In subsequent rulings, the court ordered a modification of the injunction to expand the buffer zone around the clinic entrance in light of Scott’s repeated violations of the injunction and issued a finding of contempt against Scott, assessing him a fine of $300. 13. United States v. White, et al., 893 F. Supp. 1423 (C.D. Cal. 1995). The Attorney General filed a motion for a preliminary injunction pursuant to FACE. The complaint requested that the court enjoin the defendants and all individuals acting in concert with them from, among other things, using force or threats of force in violation of the statute to interfere with or intimidate the physician who was the target of the defendants’ activities or the physician’s wife. The defendants moved to dismiss the complaint on constitutional grounds. The court held that Congress had the authority under the Commerce Clause to enact FACE, that the Fourteenth Amendment did not preclude Congress from legislating in the area of clinic violence, and that FACE did not violate the defendants’ First Amendment rights. The court granted the motion for a preliminary injunction. 
The preliminary injunction, which became permanent by agreement of the parties, prohibited the defendants from, among other things, demonstrating, congregating, or picketing within 45 feet of the intersection near the doctor’s home; coming closer than 15 feet of the doctor or his wife; or driving within 3 car lengths of their cars. We prepared the following summaries for United States v. Alaw, United States v. Burke, and United States v. Operation Rescue National, et al. based on information contained in complaints or court orders that DOJ provided to us. DOJ prepared the other six summaries in this section. 1. United States v. Alaw, No. 1:98 CV01446 (D.D.C.). Groups of individuals blocked and physically obstructed access to all the entrances of a clinic that provides comprehensive reproductive health services in Washington, D.C., on January 24, 1998. The Attorney General filed a civil FACE lawsuit against 17 defendants on June 9, 1998, seeking both preliminary and permanent injunctions, statutory damages, and civil penalties. On August 19, 1998, a motion was filed to have the court approve consent decrees with two of the defendants. 2. United States v. Brown, No. 3-97CV1423-R (N.D. Tex.). A Texas man communicated a threat to a staff person of a Dallas, TX, abortion clinic: “I’ve been in Oklahoma, Atlanta and Washington, D.C. taking care of business, and now I’m here to take care of business.” The Attorney General filed a civil FACE lawsuit against the defendant on June 13, 1997. The Department of Justice obtained a permanent injunction, by agreement with the defendant, prohibiting the defendant from coming within 50 feet of the clinic. 3. United States v. Burke, No. 98-2319-JWL (D. Kan.). The Attorney General filed a civil action against the defendant on July 14, 1998, alleging that the defendant, an abortion protestor, violated FACE during his anti-abortion efforts at a reproductive health care center located in Overland Park, KS. 
The district court first issued a preliminary injunction on July 17, 1998, and then on July 31, 1998, a permanent injunction, prohibiting the defendant from committing criminal trespass and engaging in conduct that violates FACE. Specifically, the defendant was enjoined from, among other things, obstructing access to the clinic and physically abusing persons working at or using services at the clinic. The Attorney General moved for civil contempt for the defendant’s alleged violation of the preliminary injunction. 4. United States v. Gregg, et al., 97 Civ. 2020 (JCL) (D. N.J.). On three different dates, groups of individuals blockaded an Englewood, NJ, abortion clinic by, among other things, sitting and lying in front of the clinic’s front entrance. The Attorney General filed a civil FACE lawsuit against 30 defendants on April 18, 1997, alleging obstruction. On December 22, 1997, DOJ obtained a preliminary injunction prohibiting the defendants from obstructing access. Discovery was concluded, and the parties were preparing pre-trial motions. 5. United States v. McDaniel, et al., 96 Civ. 9202 (JES) (S.D. N.Y.). A group of individuals blockaded a New York City abortion clinic by pushing their way into the clinic and locking themselves together in front of the clinic’s doors and elevators. On December 6, 1996, the Attorney General filed a civil FACE lawsuit against the 10 blockaders, alleging obstruction. A jury found all defendants liable for violating the statute. (This was DOJ’s first civil FACE lawsuit to go before a jury.) The court issued a permanent injunction prohibiting the defendants and those acting in concert with them from impeding or obstructing access to the clinic. The judge assessed civil penalties ranging from $1,000 to $22,000 against seven defendants and found that three defendants did not have sufficient assets to permit the imposition of civil penalties. 6. United States v. Menchacha, et al., No. 96 Civ. 5305 (SS) (S.D. N.Y.).
A group of individuals conducted a blockade of a Dobbs Ferry, NY, abortion clinic by sitting in the driveway and entrance to the clinic’s parking lot. DOJ filed a civil FACE lawsuit against four blockaders on July 17, 1996, alleging obstruction. The court issued a permanent injunction prohibiting the defendants and anyone acting in concert with them from, among other things, coming within 15 feet of the clinic’s property. 7. United States v. Operation Rescue National, et al., No. C3-98-113 (S.D. Ohio). Between July 13 and July 19, 1997, Operation Rescue organized and directed a week-long campaign protesting abortion in the Cincinnati/Dayton areas. During this campaign, groups of individuals blocked entrances and physically obstructed access to three clinics in the Cincinnati/Dayton area. The Attorney General filed a civil FACE lawsuit against Operation Rescue National and individual defendants on March 23, 1998, seeking both permanent and preliminary injunctions, statutory damages, and civil penalties. 8. United States v. Smith, No. 4:95-CV-0025 (N.D. Ohio) (6th Cir. 1997). Beginning in the summer of 1994, an Ohio man engaged in a series of unlawful activities directed at a reproductive health doctor and his family. These included attempting to run the doctor off the road with his truck, pantomiming the act of shooting the doctor outside his home, telling the doctor’s teenage stepdaughter that the doctor “was dead,” and, along with other anti-abortion demonstrators, surrounding the car of the doctor’s wife, who worked as his receptionist, outside one of the doctor’s offices. The Attorney General filed a civil FACE lawsuit on January 4, 1995, alleging use of force, threats, and obstruction by the defendant. DOJ obtained, by agreement with the defendant, a temporary restraining order and a preliminary injunction, respectively, in January and February 1995. 
In August 1996, DOJ obtained a finding of criminal contempt against the defendant for violating the preliminary injunction by verbally threatening the doctor outside a clinic in Youngstown, Ohio. The court imposed a fine of $1,500. The defendant appealed the conviction; the Court of Appeals for the Sixth Circuit denied that appeal in December 1997. This case went to trial between January 21 and February 3, 1997. DOJ was awaiting a decision. 9. United States v. Tomanek, No. 3:95-CV-0881-T (N.D. TX). A Dallas, TX, man engaged in a pattern of conduct in which he made intimidating phone calls and statements to the staff of an abortion clinic at their homes and outside the clinic, and, on one occasion, chased the clinic doctor in his car. The Attorney General filed a FACE lawsuit against the defendant on May 11, 1995, alleging threats and obstruction. The trial took place between February 24 and February 28, 1997. DOJ was awaiting a decision. Lori A. Weiss, Evaluator-in-Charge; Marco Gomez, Evaluator; Leslie Clayton, Intern
Pursuant to a congressional request, GAO provided information on: (1) the occurrence of abortion clinic incidents before and after the enactment of the Freedom of Access to Clinic Entrances Act of 1994 (FACE); (2) views regarding FACE and its effectiveness from representatives of these clinics, selected police departments and U.S. Attorney offices, and other representatives from the Department of Justice (DOJ), the Bureau of Alcohol, Tobacco and Firearms, and national anti-abortion organizations; (3) efforts by local and federal law enforcement agencies following the enactment of FACE; and (4) any court cases pertaining to FACE and the courts' rulings in those cases. GAO noted that: (1) clinic survey responses indicated that most of the clinics experienced fewer types of incidents during the 2 years preceding GAO's survey than they had in the 2 years prior to the passage of FACE; (2) respondents from 35 of the 42 clinics GAO surveyed credited FACE with deterring or reducing abortion clinic incidents; (3) respondents from 21 of the 36 U.S. Attorney offices GAO surveyed thought that FACE had positively affected incidents, including deterring or reducing their occurrence; (4) most of the other officials whom GAO interviewed from DOJ and national abortion rights organizations also felt that FACE has been a deterrent to clinic violence; (5) representatives of the police departments and anti-abortion organizations that GAO contacted were less consistent in their views; (6) representatives of 9 of the 15 police departments GAO contacted said their officers had received training pertaining to abortion clinics, and 12 said their departments had conducted outreach and education with clinics since FACE became law; (7) about half reported engaging in prevention activities; (8) representatives of 31 of the 36 U.S. 
Attorney offices GAO surveyed reported that their districts had established abortion violence task forces, and 29 reported accomplishments that included improved coordination and communication; (9) nearly all the U.S. Attorney respondents whose districts had task forces reported that meetings were typically attended by representatives of federal and local law enforcement agencies; (10) most clinic respondents were satisfied with both local and federal law enforcement; (11) clinic respondents who observed negative aspects of local law enforcement most often cited poor response to incidents and poor enforcement of laws; (12) 30 of the 42 clinic respondents were generally satisfied with federal law enforcement, often citing good communication, proactive steps, and good response to and investigation of incidents; (13) GAO identified 46 criminal and civil cases pertaining to FACE that were either completed or pending as of September 11, 1998; (14) many of these cases raised constitutional challenges to FACE; (15) these challenges were all ultimately unsuccessful; and (16) convictions were obtained in most of the reported criminal FACE prosecutions, and civil remedies were obtained in most of the civil lawsuits in which a FACE violation was alleged.
Many federal statutes, and the regulations that implement them, impose requirements on state, local, and tribal governments or private sector parties in order to achieve certain legislative goals. Such statutes and their regulations can provide substantial benefits, as well as imposing costs. OMB’s 2003 final report on the costs and benefits of federal regulations estimated that the total annual quantified benefits of major rules issued from October 1, 1992, to September 30, 2002, ranged from $146 billion to $230 billion, while the total annual quantified costs ranged from $36 billion to $42 billion. Title I of UMRA focuses on the legislative process, and title II focuses on the regulatory process. For both legislation and regulations, UMRA was intended to provide more information on and prompt more careful consideration of the costs and benefits of federal mandates that affect nonfederal parties. UMRA generally defines a federal mandate as any provision in legislation, statute, or regulation that would impose an enforceable duty on state, local, or tribal governments or the private sector or that would reduce or eliminate the amount of funding authorized to cover the costs of existing mandates. However, as discussed in the body of this report, some other definitions, exclusions, and thresholds in the act vary according to whether the mandate is in legislation or a rule and whether a provision imposes an intergovernmental or private sector mandate. If legislation or a rule contains a federal mandate, as defined by UMRA, a major consequence is that other requirements in the act are triggered. Under title I, when a committee of authorization of the Senate or the House of Representatives reports a bill or joint resolution that contains any federal mandates to the full legislative body, the committee is required to provide the bill or resolution to the Director of CBO and identify the mandates it contains. 
UMRA then requires CBO to analyze each of these bills and resolutions—and, on request, other legislative proposals—for the presence of such mandates and to estimate their associated costs. CBO prepares UMRA statements that are to be included in the authorizing committees’ reports. The CBO mandate statements must also include an assessment of whether the legislation authorizes or otherwise provides funding to cover the costs of any new federal mandates. UMRA’s specific enforcement mechanism for the requirements of title I is a point of order, which a member of Congress may raise to indicate that a rule of procedure has been or will be violated. Generally, a point of order is available under UMRA if there is no CBO UMRA statement for the legislation, if the legislation contains an unfunded intergovernmental mandate with costs over UMRA’s threshold, or if it was not feasible to estimate the costs of the intergovernmental mandate. However, points of order are not available under UMRA for private sector mandates that exceed the cost threshold or if the private sector mandates’ costs are not feasible to estimate. UMRA’s rules are not self-enforcing, and a point of order must be actively raised to hinder the passage of unfunded federal mandates. Specifically, raising an UMRA point of order may serve to heighten the profile of “unfunded mandate” implications in the challenged legislation. As of March 2004, 13 points of order had been raised in the House of Representatives and no points of order had been raised in the Senate under UMRA. Only 1 of these 13, regarding the minimum wage in the Contract with America Advancement Act in 1996, resulted in the House voting to reject consideration of a proposed provision. For rules that contain federal mandates, title II of UMRA requires the agencies to prepare written statements containing specific descriptions and estimates, including a qualitative and quantitative assessment of the anticipated costs and benefits of the mandate.
For such rules, agencies are to “identify and consider a reasonable number of regulatory alternatives and from those alternatives select the least costly, most cost-effective, or least burdensome alternative that achieves the objectives of the rule” or explain why that alternative was not selected. UMRA requires OMB to collect the written statements prepared by the agencies on federal mandates in rules and periodically forward them to CBO. UMRA also requires OMB to submit annual reports to Congress detailing agencies’ compliance with title II. OMB’s Office of Information and Regulatory Affairs (OIRA) has the primary responsibility for monitoring agencies’ compliance with this title. CBO and OMB regularly produce reports on, respectively, activities under titles I and II of UMRA. CBO has prepared an annual report on its activities under title I each year since UMRA’s enactment. Included in these reports is information on two requirements placed on CBO by title I, identifying (1) proposed legislation that would have imposed federal mandates on another level of government or the private sector and (2) the subset of the legislation examined by CBO that was found to contain mandates with costs at or above the relevant thresholds. Although not required by UMRA to do so, CBO also reviews all statutes enacted to identify mandates enacted into law for its annual reports. Since 2001, OMB has fulfilled its requirement to report to Congress on compliance with title II in the same document used to fulfill a statutory requirement for reporting to Congress on the costs and benefits of federal regulations. OMB’s reports provide information on the rules that agencies have identified as containing federal mandates and also discuss agencies’ efforts to consult with state, local, and tribal governments in the development of significant rules. 
To describe the applicable procedures, definitions, and exclusions for identifying federal mandates in statutes and rules under UMRA, we reviewed the act, other related guidance documents, and CBO and OMB reports on the implementation of UMRA. We also interviewed persons knowledgeable about the implementation of UMRA in OMB, CBO, and other congressional offices. To identify statutes and final rules that were and were not identified as containing federal mandates under UMRA and analyze the reasons for those determinations, we focused our review on statutes enacted and final rules published during 2001 and 2002, as agreed with your staff. For our review and analysis of the implementation of title I, we relied on information provided to us by the CBO officials responsible for preparing UMRA statements on proposed legislation and the annual CBO reports on UMRA. At our request, CBO identified from that 2-year period the 5 statutes that contained federal mandates at or above UMRA’s cost thresholds and 43 examples of statutes that were not so identified but nevertheless contained provisions having impacts on nonfederal parties. We did not ask CBO to compile a comprehensive list of all statutes in those years that might have impacts on nonfederal parties. For our review and analysis of the implementation of title II, we reviewed all 122 major and/or economically significant final rules—generally, those that would have an annual effect on the economy of $100 million or more or raise other significant economic or policy issues—that federal agencies issued during 2001 and 2002.
Parallel to the information on statutes provided by CBO, we focused on identifying two sets of final rules—those that were identified as containing federal mandates at or above UMRA’s threshold and those that were not but included provisions affecting nonfederal parties that might be perceived by those parties as potential “unfunded mandates.” To determine whether the statutes and final rules we examined were perceived by affected parties as potentially having “unfunded mandate” implications, we shared them with the following national organizations representing nonfederal levels of government: National Association of Counties, National Conference of State Legislatures (NCSL), National Governors Association, the National League of Cities, and the U.S. Conference of Mayors. We then analyzed the statutes and rules to identify how they had been treated under UMRA, in particular identifying the application of procedural, definitional, and other provisions of UMRA that guide the identification of federal mandates. The scope of our review was limited to a 2-year period and, within that period, only to examples of legislation enacted and rules that were finalized (i.e., we did not include all legislation considered by Congress or rules that were proposed but not finalized). Therefore, the examples we reviewed might not illustrate all possible ways that a statute or rule with a perceived mandate could be enacted or issued without being identified as a federal mandate under UMRA. However, the representatives from external public sector organizations who reviewed the statutes and rules we examined generally concurred that they were perceived as potential “unfunded mandates” and that we did not exclude any major examples that they believed should have been included. 
It is also important to recognize that perceived “unfunded mandates” could result from nonstatutory, nonregulatory federal actions, such as Homeland Security threat level adjustments, which are not covered by UMRA and therefore were outside the scope of our specific objectives. (See app. I for a more detailed description of our objectives, scope, and methodology.) Statutory provisions that impose requirements on nonfederal parties might not be identified as federal mandates under UMRA because some legislative actions do not trigger a review and, even if the provisions are subject to review, UMRA circumscribes the definition of a federal mandate. When legislation containing “mandates” does undergo UMRA’s formal scrutiny, it has to meet three definitional requirements, not fall into any of seven exclusions, and impose costs at or above certain thresholds to be identified as containing federal mandates exceeding the cost thresholds under UMRA. In 2001 and 2002, 5 of the 377 statutes enacted were identified as containing provisions that were federal mandates exceeding the thresholds. From the remaining statutes, CBO identified 43 examples that had some kind of impact on nonfederal parties but were not identified during the legislative process as containing federal mandates at or above UMRA’s thresholds. For 24 of those examples, this was because their estimated direct costs were below the thresholds. There is some evidence that in the past the existence of UMRA hindered the introduction of intergovernmental mandates or led to their modification before enactment. There is also evidence suggesting that some of CBO’s cost estimates under UMRA may have led lawmakers to reduce the cost of some mandates before enactment. The type of legislation in which a provision is contained and how the legislation is considered determine whether it is subject to automatic review by CBO.
If provisions are subject to automatic CBO review, they are analyzed based on UMRA’s definitional requirements and exclusions. CBO then determines whether a cost estimate is feasible and, if so, compares the estimated costs to the applicable thresholds. Figure 1 depicts this general sequence of conditions that must be met before a statutory provision would be identified as a federal mandate at or above UMRA’s cost thresholds. The following sections discuss these procedures, exclusions, definitions, and cost thresholds in more detail. Provisions that are (1) not contained in authorizing bills, or (2) not reported by an authorizing committee are not automatically subject to CBO review before going to the floor (see fig. 1), and thus a CBO UMRA statement may not be issued. For example, appropriations bills are not automatically subject to CBO review under UMRA. In addition, even if a provision is contained in an authorizing bill, it still must be “reported” by that committee—as opposed to going directly to the full House or Senate or “discharged” by the committee without a vote to send it to the full House or Senate—to be subject to automatic CBO review. CBO was not required to review seven bills that contained federal mandates during 2001 and 2002 that ultimately became law because they either were appropriations bills or were authorizing bills not reported by authorizing committees. For example, a provision prohibiting states from issuing a permit or lease for certain oil and gas drilling in the Great Lakes was not reviewed by CBO prior to enactment because it was contained in the Energy and Water Development Appropriations Act of 2002.
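The sequence of conditions just described can be sketched as a simple decision chain. This is an illustrative model only; the function name, parameters, and labels are hypothetical and greatly simplify CBO's actual review process:

```python
# Illustrative sketch (not CBO's actual process) of the title I screening
# sequence: a provision is flagged as a federal mandate at or above UMRA's
# cost threshold only if every condition in the chain is met.

def screen_provision(in_authorizing_bill: bool,
                     reported_by_committee: bool,
                     meets_mandate_definition: bool,
                     excluded: bool,
                     cost_estimate_feasible: bool,
                     estimated_direct_cost: float,
                     threshold: float) -> str:
    """Return a label for how a provision would fare under title I."""
    # Appropriations bills and unreported bills get no automatic review.
    if not (in_authorizing_bill and reported_by_committee):
        return "no automatic CBO review"
    # Excluded provisions (e.g., national security) are set aside first.
    if excluded:
        return "excluded from UMRA"
    if not meets_mandate_definition:
        return "not a federal mandate"
    # A mandate whose costs cannot be estimated is still identified as such.
    if not cost_estimate_feasible:
        return "mandate; cost estimate not feasible"
    if estimated_direct_cost >= threshold:
        return "mandate at or above threshold"
    return "mandate below threshold"
```

For example, a reported authorizing-bill provision with $120 million in estimated direct costs against a $50 million threshold would be labeled a mandate at or above the threshold, while the same provision in an appropriations bill would receive no automatic review.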
Although UMRA does not require an automatic CBO review of provisions not contained in authorizing bills or bills not reported by authorizing committees, CBO told us that it initiates an informal review of provisions in appropriations bills and the results of these informal reviews are communicated to appropriations committee clerks when CBO finds potential mandates in these bills. During these informal reviews, CBO does not estimate costs unless CBO already has cost data from an earlier review or unless Congress requests it. CBO will also review any legislation on request. UMRA does not require automatic CBO review of provisions added after CBO’s initial review. Amendments containing mandates may be added to legislation after CBO issues its statement about whether the legislation contains any federal mandates. UMRA states, however, that “the committee of conference shall insure to the greatest extent practicable” that CBO prepare statements on amendments offered subsequent to its initial review that contain federal mandates. According to CBO’s annual report for 2002, three laws were enacted in 2002 that contained federal mandates not reviewed by CBO prior to enactment because they were added after CBO reviewed the legislation. For example, a provision requiring insurers of commercial property to offer terrorism insurance was added to the Terrorism Risk Insurance Act of 2002 after CBO’s UMRA review, and thus not identified as a private sector mandate under UMRA prior to enactment. There is one other important caveat regarding legislative provisions for which a CBO UMRA review is not required. The Joint Committee on Taxation (JCT), rather than CBO, has jurisdiction over proposed tax legislation and produces revenue estimates for all such legislation considered by either the House or the Senate. In addition, JCT examines legislative provisions that affect the tax code for federal mandates and estimates their costs. 
According to a JCT legislative counsel, a statement regarding the existence of federal mandates should be included in the House or Senate committee report. Also, according to CBO, JCT estimates of revenue impacts are included in CBO cost estimates for legislation. A provision must meet the formal definition of a mandate and not be classified as an “exception” to be identified as a federal mandate. UMRA defines a federal mandate as a provision that would impose an enforceable duty upon state, local, or tribal governments (intergovernmental mandate) or upon the private sector (private sector mandate). Exceptions are defined as enforceable duties that are conditions of federal financial assistance or arise from participation in a voluntary federal program. UMRA does include as intergovernmental mandates certain conditions on federal assistance programs and reductions in the authorization of appropriations for federal financial assistance and the control of borders under certain conditions. A provision would also meet the definition of an intergovernmental mandate if it relates to an existing federal program of $500 million or more (annually) to state, local, and tribal governments, if the provision would increase the stringency of conditions of funding or place caps on or reduce the funding, and the state, local, or tribal governments cannot modify their financial or programmatic responsibilities regarding the federal program. A private sector mandate is also a provision that would reduce or eliminate the amount of authorization of appropriations for federal financial assistance that would be provided to the private sector for the purposes of ensuring compliance with such an enforceable duty. UMRA also excludes certain provisions from its application. Specifically, UMRA does not apply to any provision in legislation that: 1. enforces Constitutional rights of individuals; 2.
establishes or enforces any statutory rights that prohibit discrimination on the basis of race, color, religion, sex, national origin, age, handicap, or disability; 3. requires compliance with accounting and auditing procedures with respect to grants or other money or property provided by the federal government; 4. provides for emergency assistance or relief at the request of any state, local, or tribal government or any official of a state, local, or tribal government; 5. is necessary for the national security or the ratification or implementation of international treaty obligations; 6. the President designates as emergency legislation and that Congress so designates in statute; or 7. relates to the old age, survivors, and disability insurance program under title II of the Social Security Act (including taxes imposed by sections 3101(a) and 3111(a) of the Internal Revenue Code of 1986 relating to old-age, survivors, and disability insurance). If provisions are excluded, CBO will state the reason for the exclusion and make no statement regarding mandates in those provisions. If a provision is not excluded and meets the definition of a federal mandate without exception under UMRA, CBO identifies the provision as a federal mandate under UMRA, and then determines if a cost estimate is feasible. For intergovernmental mandates, if a cost estimate is feasible, the direct costs (to state, local, or tribal governments) of all mandates contained within the legislation must equal or exceed $50 million (in 1996 dollars) in any of the first 5 fiscal years that the relevant mandates would be effective for CBO to determine that the mandate meets or exceeds UMRA’s cost threshold. The same requirements apply for private sector mandates, except that the cost threshold is $100 million (in 1996 dollars) or more. CBO adjusts both the intergovernmental and private sector cost thresholds annually for inflation. 
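The threshold arithmetic above can be illustrated directly. The code below is a sketch only: the inflation factor used in the example is hypothetical, and CBO publishes the actual annually adjusted figures.

```python
# Illustrative arithmetic: UMRA's title I thresholds are $50 million
# (intergovernmental) and $100 million (private sector) in 1996 dollars,
# adjusted annually for inflation.

BASE_INTERGOVERNMENTAL = 50_000_000   # 1996 dollars
BASE_PRIVATE_SECTOR = 100_000_000     # 1996 dollars

def adjusted_threshold(base_1996_dollars: float,
                       cumulative_inflation: float) -> float:
    """Scale a 1996-dollar threshold by cumulative inflation since 1996."""
    return base_1996_dollars * (1 + cumulative_inflation)

# With a hypothetical 15 percent cumulative inflation since 1996, the
# intergovernmental threshold would rise to about $57.5 million:
print(round(adjusted_threshold(BASE_INTERGOVERNMENTAL, 0.15)))  # 57500000
```

Under UMRA, the comparison against this threshold uses the aggregate direct costs of all mandates in the legislation in any of the first five fiscal years the mandates would be effective.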
If an intergovernmental mandate exceeds the cost threshold, a point of order is available under UMRA. However, if a private sector mandate exceeds the cost threshold, a point of order is not available. If an intergovernmental or private sector mandate is below the applicable threshold, CBO states that a mandate (intergovernmental or private) exists with costs estimated to be below the threshold. Although this highlights the provision as a mandate, it does not provide for a point of order under UMRA. Developing a cost estimate for federal mandates must be feasible, and their direct costs must meet or exceed applicable cost thresholds for CBO to identify them as such under UMRA. However, in some instances, it is not feasible to develop a cost estimate. CBO indicated in its annual report for 2002 that common reasons why a cost estimate may not be feasible include (1) the costs depend on future regulations, (2) essential information to determine the scope and impact of the mandate is lacking, (3) it is unclear whom the bill’s provisions would affect, and (4) language in UMRA is ambiguous about how to treat extensions of existing mandates. If a cost estimate for legislation is not feasible, CBO specifies the kind of mandate the legislation contains but states that the agency cannot estimate its costs. This does not prevent the legislation from moving through the legislative process, but in the case of an intergovernmental mandate, UMRA would still allow a member of Congress to raise a point of order. CBO reported that it could not estimate the costs of mandates in nine bills that ultimately were enacted during 2001 and 2002. Of these nine laws, seven contained intergovernmental mandates and two contained both private sector and intergovernmental mandates.
For example, CBO could not estimate the costs of provisions requiring manufacturers of medical devices to comply with certain labeling and notification conventions and to submit their registrations electronically contained in the Medical Device User Fee and Modernization Act of 2002. CBO stated that since many of the requirements in the act would depend on the future actions of the Secretary of Health and Human Services, CBO could not determine whether their direct costs would exceed UMRA’s threshold. Even if costs can be estimated, UMRA focuses only on the direct costs imposed by federal mandates in legislation. According to UMRA, such costs are limited to spending that results directly from the mandates imposed by the legislation, rather than from the legislation’s broad effects on the economy. The direct costs of a federal mandate also include any new revenues that state and local governments are prohibited from raising. While CBO has estimated the indirect costs of some federal mandates, CBO is limited to including only direct costs when determining if the aggregate total costs of federal mandates in legislation meet or exceed the applicable cost thresholds under UMRA. CBO testified in July 2003 that, “federal mandates often have secondary effects, including the effects on prices and wages when the costs of a mandate imposed on one party are passed along to other parties, such as customers or employees.” CBO told us that if it determined that indirect costs (including secondary effects) would be significant, it would include the estimate in its UMRA statement, but that its determination of whether a mandate meets or exceeds the applicable thresholds is based only on direct costs. Therefore, although information on indirect costs may be available, legislation with significant total costs (direct and indirect) on nonfederal parties may not be identified as exceeding the cost thresholds under UMRA. 
CBO may conclude that legislation contains a federal mandate and is funded because the legislation authorizes funds to be appropriated to carry out or comply with the mandates. However, if the appropriation subsequently provided is less than the amount authorized, the federal mandate’s costs may be at or above the threshold. UMRA contains a mechanism designed to help curtail mandates with insufficient appropriations, but it has never been utilized. UMRA provides language that could be included in legislation that would allow agencies tasked with administering funded mandates to report back to Congress on the sufficiency of those funds. Congress would then have a certain time period to decide whether to continue to enforce the mandate, adopt an alternate plan, or let it expire, meaning the provision comprising the mandate would no longer be enforceable. A CBO official did not recall any legislation ever containing this provision, and our database search also found no legislation containing it. Although few laws have been identified as containing federal mandates at or above applicable cost thresholds, there is some evidence that UMRA has a discouraging effect on the enactment of intergovernmental mandates and the magnitude of costs to nonfederal parties in proposed legislation. Of 377 laws enacted in 2001 and 2002, CBO identified at least 44 containing a federal mandate under UMRA. Of these 44, CBO identified 5 containing mandates at or above the cost thresholds, and all were private sector mandates (see tables 1 and 2 below). From 1996 to 2000, CBO identified 18 mandates (2 intergovernmental and 16 private sector) with costs at or above cost thresholds that became law. UMRA may have indirectly discouraged the passage of legislation identified as containing intergovernmental mandates at or above UMRA’s cost threshold.
Since 1996 only three proposed intergovernmental mandates with annual costs above the applicable threshold had become law (an increase in the minimum wage in 1996, a reduction in federal funding for Food Stamps in 1997, and a preemption of state laws on premiums for prescription drug coverage in 2003). Between 1996 and 2002, CBO reported that 21 private sector mandates with costs over the applicable threshold were enacted. Of these, 8 involved taxes, 4 concerned health insurance, 4 dealt with regulation of industries, 2 affected workers’ take home pay, 1 imposed new requirements on sponsors of immigrants, 1 changed procedures for the collection and use of campaign contributions, and 1 imposed fees on airline travel to fund aviation security. UMRA may have also aided in lessening the costs of some mandates. From 1996 through 2000, CBO identified 59 proposed federal mandates with costs above applicable thresholds. Subsequent to CBO identification, 9 were amended before enactment to reduce their costs below the applicable thresholds, while 18 mandates were enacted with costs above the threshold, and 32 were never enacted. Although CBO has not done an analysis to determine the role of UMRA in reducing the costs of mandates ultimately enacted, it did state in its report that “it was clear that information provided by CBO played a role in the Congress’s decision to lower costs.” There is also some testimonial evidence regarding the effectiveness of UMRA on legislation. CBO stated in its July 2003 congressional testimony that “both the amount of information about the cost of federal mandates and Congressional interest in that information have increased considerably. In that respect, title I of UMRA has proved to be effective.” The Chairman of the House Rules Committee was quoted in 1998 as saying that UMRA “has changed the way that prospective legislation is drafted... 
Anytime there is a markup, this always comes up.” Although points of order are rarely used, they may be perceived as an unattractive consequence of including a mandate above cost thresholds in proposed legislation. The director of policy and federal relations at the National League of Cities stated, “This is like a shoal out in the water. You know it is there, so you steer clear of it.” Overall, CBO’s annual reports from 2001 and 2002 showed that most proposed legislation did not contain federal mandates as defined by UMRA. Further, most of the proposed legislation with mandates would not have imposed costs exceeding the thresholds set by UMRA. We asked CBO to compile a list of examples from among those laws enacted in 2001 and 2002 that it perceived as having impacts on nonfederal parties but were not identified as containing federal mandates meeting or exceeding UMRA’s cost thresholds. We then analyzed these 43 examples to illustrate the application of UMRA’s procedures, definitions, and exclusions on legislation that was not identified as containing mandates at or above UMRA’s threshold, but might be perceived to have “unfunded mandates” implications. We shared CBO’s list of 43 examples with national organizations representing nonfederal levels of government, and they generally agreed that those laws contained provisions perceived by their members as mandates. For 12 of the 43 examples, an automatic UMRA review was not required of at least some provisions prior to enactment because of the legislative process used to enact the bill, for example, not being reported by an authorizing committee.
Out of the remaining 31 laws that did undergo a cost estimate, 24 were found to contain mandates with costs below applicable thresholds, 3 contained provisions that were excluded, 2 contained provisions with direct costs that were not feasible to estimate, 1 contained a provision that did not meet UMRA’s definition of a mandate, and 1 was reviewed by JCT and found not to contain any federal mandates (see fig. 2). It should be noted that the number of laws in any of the categories listed does not necessarily correlate with the magnitude of perceived or actual impact on affected nonfederal parties. Of the 12 examples of laws with provisions that CBO was not required to review prior to enactment, CBO later determined how they would have been characterized under UMRA: 5 laws contained mandates with direct costs below UMRA’s thresholds, 4 laws contained mandates with direct costs that could not be estimated, 1 was excluded under UMRA for national security and so would not be reviewed for the presence of mandates, 1 did not meet the definition of a mandate, and 1 had some provisions with costs below the threshold and some provisions excluded (again, for national security). (See app. II for more detailed information on the 43 examples.)
Some Legislation Had Potentially Significant Impacts on Nonfederal Parties
Although cost estimates of the full impact (including direct and indirect costs) are not available for all 43 examples discussed previously, table 3 describes 10 laws among the 43 that we consider important to highlight and/or that have multiple uncertainties surrounding the magnitude of their potential impacts on nonfederal parties. Procedurally, the identification of federal mandates under title II of UMRA is simpler than under title I.
Although regulatory agencies generally are to assess the intergovernmental and private sector effects of all their actions, under UMRA title II they only need to publicly identify and prepare UMRA “written statements” on those rules that the agencies believe include a federal intergovernmental or private sector mandate that may result in expenditures of $100 million or more (adjusted for inflation) in any year. However, there are 14 definitional exceptions, exclusions, or other restrictions applicable to the identification of federal mandates in rules, compared to 10 that are applicable to identifying mandates in legislation. Agencies identified 9 of the 122 major and economically significant final rules published in 2001 and 2002 as containing federal mandates as defined by UMRA. However, based on our review of the published rules, we determined that 65 of the remaining rules imposed new requirements on nonfederal parties. Agencies cited, or could have cited, a variety of reasons that these 65 rules did not contain federal mandates under UMRA. Nevertheless, at least 29 of the 65 rules appeared to have significant financial impacts on affected nonfederal parties of $100 million or more in any year. UMRA’s process of identifying and reporting on rules with federal mandates is more straightforward than that for legislation. UMRA generally directs agencies to assess the effects of their regulatory actions on other levels of government and the private sector. However, the agencies only need to identify and prepare written UMRA statements on those rules that the agencies have determined include a federal mandate that may result in expenditures by nonfederal parties of $100 million or more (adjusted for inflation) in any year. 
Thus, unlike CBO’s reviews of proposed legislation, one cost threshold applies to both intergovernmental and private sector mandates in rules, and there is no public identification of potential federal mandates in rules before agencies determine whether such mandates exceed the threshold. As is the case for legislation, UMRA contains many definitions and exclusions that affect the extent to which agencies’ rules are considered to have federal mandates at or above the threshold. The three definitional provisions and seven general exclusions from UMRA that we previously identified as applicable to legislation also apply to federal rules. However, there are four additional restrictions on the identification of federal mandates in rules (i.e., in an UMRA statement):

- UMRA’s requirements do not apply to provisions in rules issued by independent regulatory agencies.

- Preparation of an UMRA statement, and related estimate or analysis of the costs and benefits of the rule, is not required if the agency is “otherwise prohibited by law” from considering such an estimate or analysis in adopting the rule.

- The requirement to prepare an UMRA statement does not apply to any rule for which the agency does not publish a general notice of proposed rulemaking in the Federal Register. This means that UMRA does not cover interim final rules and any rules for which the agency claimed a “good cause” or other exemption available under the Administrative Procedure Act of 1946 to issue a final rule without first having to issue a notice of proposed rulemaking.

- UMRA’s threshold for federal mandates in rules is limited to expenditures, in contrast to title I, which refers more broadly to direct costs. Thus, a rule’s estimated annual effect might be equal to or greater than $100 million in any year—for example, by reducing revenues or incomes in a particular industry—but not trigger UMRA if the rule does not compel nonfederal parties to spend that amount.
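To make the expenditure-based threshold test concrete, the following sketch is illustrative only: the function names and the 1.13 inflation factor (implied by the $113 million adjusted threshold for 2001 cited later in this report) are our own assumptions, not part of the act.

```python
# Illustrative sketch of UMRA title II's screening test for rules,
# as described above. Names and the inflation factor are assumptions
# for illustration, not drawn from the statute.

BASE_THRESHOLD = 100_000_000  # dollars, in the act's base-year terms

def adjusted_threshold(inflation_factor: float) -> float:
    """Threshold after adjustment for inflation (e.g., ~1.13 by 2001)."""
    return BASE_THRESHOLD * inflation_factor

def triggers_umra_statement(compelled_expenditures: float,
                            inflation_factor: float) -> bool:
    """A written UMRA statement is required only when a rule may compel
    nonfederal expenditures at or above the adjusted threshold in any
    year.  Indirect costs and reduced revenues or incomes do not count."""
    return compelled_expenditures >= adjusted_threshold(inflation_factor)

# A $110 million expenditure estimate falls just under the roughly
# $113 million adjusted threshold for 2001.
print(triggers_umra_statement(110_000_000, 1.13))  # prints False
```

This simplifies the statutory test, omitting the definitional exclusions discussed above (independent agencies, absence of a proposed rule, and so on), which can keep a rule out of UMRA regardless of its cost.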
Under title I, though, the direct costs of a mandate in legislation also include any amounts that state and local governments are prohibited from raising in revenues to comply with the mandate. However, as in reviews of legislation, indirect costs of rules are not considered when determining whether a mandate meets or exceeds UMRA’s threshold. Two of these restrictions on UMRA’s scope in the regulatory process are essentially procedural. If a rule’s path to issuance was through an independent regulatory agency or a final rule with no prior proposed rule, any “mandate” included in the rule would not be subject to identification and review under UMRA. OIRA is responsible for the centralized review of significant regulatory actions published by executive branch agencies, other than independent regulatory agencies. Under Executive Order 12866, which was issued in September 1993, agencies are generally required to submit their significant draft rules to OIRA for review before publishing the rules. As part of this regulatory review process, OIRA monitors agencies’ compliance with UMRA. In the submission packages for their draft rules, federal agencies are to designate whether they believe the rule may constitute an unfunded mandate under UMRA. According to OIRA representatives, consideration of UMRA is then incorporated as part of these regulatory reviews, and draft rules are expected to contain appropriate UMRA certification statements. OIRA’s guidance to agencies notes that the analytical requirements under Executive Order 12866 are similar to the analytical requirements under UMRA, and thus the same analysis may permit agencies to comply with both analytical requirements. However, OIRA representatives pointed out that UMRA might also require agency consultations with state and local governments on certain rules, and OIRA looks for evidence of such consultations during its regulatory reviews.
The officials also pointed out that UMRA provides OIRA a statutory basis for requiring agencies to do an analysis similar to that required by the executive order (which can be rescinded or amended at the discretion of the President). Federal agencies identified 9 of the 122 major and/or economically significant final rules that they published in 2001 or 2002 as containing federal mandates under UMRA (see fig. 3). Only one of the nine rules that agencies identified as containing federal mandates under UMRA—EPA’s enforceable standard for the level of arsenic in drinking water systems—included an intergovernmental mandate. The remaining rules imposed private sector mandates: four Department of Energy rules that amended energy conservation standards for several categories of consumer products, including clothes washers and heat pumps; three EPA rules that adopted emission standards to reduce air pollution from various sources, including paper and pulp mills and heavy-duty highway engines and vehicles; and a Department of Transportation (DOT) rule that established a new federal motor vehicle safety standard that required tire pressure monitoring systems, controls, and displays. In each of these final rules, the agencies addressed the applicable UMRA analytical and reporting requirements. (See app. III for more detailed information on these rules.) The limited number of rules identified as federal mandates during 2001 and 2002 is consistent with the findings in our 1998 report on UMRA and in OMB’s annual reports on agencies’ compliance with title II. Of the 113 major and/or economically significant rules not identified as including federal mandates under UMRA, we determined that 48 contained no new requirements that would impose costs or have a negative financial effect on state, local, and tribal governments or the private sector.
Often, these were economically significant or major rules because they involved substantial transfer payments from the federal government to nonfederal parties. For example, the Department of Agriculture (USDA) published a final rule that expanded loans, loan deficiency payments, and marketing assistance loans for certain agricultural commodities, such as cotton and honey, and was expected to increase federal outlays by about $1.1 billion annually. The Department of Health and Human Services (HHS) published a notice updating the Medicare payment system for home health agencies that was estimated to increase federal expenditures to those agencies by $350 million in fiscal year 2002. However, we determined that 65 of the 113 rules contained new requirements that would impose costs or result in other negative financial effects on state, local, and tribal governments or the private sector. We shared this list of rules with national organizations representing other levels of government affected by these rules. Representatives of those organizations generally confirmed that all of the 65 rules were perceived by their members to have at least some “unfunded mandates” implications. In 41 of the 65 published rules, the agencies cited a variety of reasons for determining that these rules did not trigger UMRA’s requirements (see fig. 4). There were 26 rules in which the agencies stated that the rule would not compel expenditures at or above the UMRA threshold and 10 rules in which the agencies stated that the rules imposed no enforceable duty. For 24 of the 65 rules, the agency did not provide a reason. However, independent regulatory agencies, which are not covered by UMRA, published 12 of these 24 rules, and there is no UMRA requirement for covered agencies to identify the reasons that their rules do not contain federal mandates.
Our review of the 65 rules indicated that agencies did not cite all of the applicable reasons they could have for determining that the rules did not trigger UMRA’s requirements (see fig. 5). For example, although in only 3 of the 65 rules did the agencies identify the absence of a notice of proposed rulemaking as the reason the rule did not trigger UMRA, this reason applied to another 25. Similarly, although 5 rules cited the exclusion that any enforceable duties would occur as a consequence of participation in a voluntary federal program, another 21 rules could have claimed this exclusion. Counting both the reasons agencies cited and those they could have cited, 47 of the 65 rules (72 percent) had more than one applicable reason. (For each of the 65 rules, app. IV identifies the reasons that agencies cited or could have cited for their rules not triggering UMRA.) At least 29 of the 65 rules with new requirements appeared to result in significant costs or other negative financial effects on state, local, and tribal governments or the private sector. In these 29 rules, the agencies either explicitly stated that they expected the rule could impose significant costs or published information indicating that the rule could result, directly or indirectly, in financial effects on nonfederal parties at or above the UMRA threshold. (Appendix V provides more detailed information on each of the 29 rules that were not identified as federal mandates under UMRA, but that could impose significant costs or have other negative financial effects on state, local, and tribal governments or the private sector.)
These 29 rules not identified as federal mandates under UMRA, but with significant financial impacts on nonfederal parties, can be roughly categorized as follows:

- 9 that imposed costs on individuals—a category included in UMRA’s definition of the private sector—exceeding $100 million in any year;

- 5 that reduced the level of federal payments to nonfederal parties by more than $100 million in any year;

- 4 with substantial indirect costs or economic effects on nonfederal parties;

- 4 from independent regulatory agencies that imposed substantial fees or other costs on regulated entities;

- 3 published by DOT on aviation security in the aftermath of the September 11, 2001, terrorist attacks, which the agency noted “may impose significant costs,” although it did not prepare quantified estimates;

- 2 with voluntary options that might increase Medicaid costs to states by over $125 million in some years;

- 1 amending the Federal Acquisition Regulations that could result in nonfederal costs ranging from $92 million to $377 million annually, depending on the “uncertainty of manufacturers to distribute these costs over the general population;” and

- 1 USDA rule imposing private-sector costs to limit retained water in raw meat and poultry products.

Table 4 provides more detailed information about selected examples from among the 29 rules. We determined that 1 of the 29 rules, a USDA rule on retained water in raw meat and poultry products, probably was a federal mandate under UMRA. The rule establishes a requirement of zero retained water, unless the water retention is unavoidable in processes necessary to meet food safety requirements.
USDA did not mention UMRA in the rule but estimated that, if extensive modifications to chilling systems were needed throughout the poultry industry, the fixed costs could run to “well over $100 million.” USDA provided only a “lower bound” estimate of $110 million in private-sector costs for the first year of implementation (representing the costs of reducing retained water in the range of 1 percent to 1.5 percent). While that estimate was under the $113 million UMRA threshold (adjusted for inflation) in 2001, the agency did not quantify the median or upper bound cost estimates that its reference to a “lower bound” estimate implies. Because the lower bound estimate was so close to the UMRA threshold, it is reasonable to assume that a median or upper bound estimate would probably have equaled or exceeded the threshold, and the rule would have been a private sector mandate under UMRA. No other UMRA exclusion appeared to apply to this rule. However, to address the requirements of Executive Order 12866, the agency provided an analysis of the costs and benefits of the rule, as well as an analysis of the regulatory alternatives considered. As noted earlier, OIRA guidance points out that the same analysis may permit agencies to comply with both the executive order’s and UMRA’s requirements. For the remaining 36 of the 65 rules, either the agencies provided no information on the potential costs and economic impacts on nonfederal parties or the costs imposed on them were under the UMRA threshold. For example, a Federal Emergency Management Agency interim final rule on a grant program to assist firefighters included some cost-sharing and other requirements on the part of grantees participating in this voluntary program. In return for cost-sharing of $50 million to $55 million per year, grantees could obtain, in aggregate, federal assistance of approximately $345 million.
Similarly, USDA’s interim rule on the noninsured crop disaster assistance program imposed new reporting requirements and service fees on producers estimated to cost at least $15 million. But producers were expected to receive about $162 million in benefits. Even when the requirements of UMRA did not apply, agencies generally provided some quantitative information on the potential costs and benefits of the rule to meet the requirements of Executive Order 12866. Rules published by independent regulatory agencies were the major exception because they are not covered by the executive order. In general, though, the type of information that UMRA was intended to produce was developed and published by the agencies even if they did not identify their rules as federal mandates under UMRA. UMRA was intended to restrain the imposition of unfunded federal mandates on state, local, and tribal governments and the private sector, primarily by providing more information and focusing more attention on potential federal mandates in legislation and regulations. There is some evidence that the information provided under UMRA and the spotlight that information places on potential mandates may have helped to discourage or limit federal mandates. CBO’s annual reports indicate that, at least with regard to the legislative process, UMRA sometimes does have such an indirect preventive effect. However, there are multiple ways that both statutes and final rules containing what affected parties perceive as “unfunded mandates” can be enacted or published without being identified as federal mandates with costs or expenditures at or above the thresholds established in UMRA. Our review demonstrated that many statutes and final rules with potentially significant financial effects on nonfederal parties were enacted or published without being identified as federal mandates at or above UMRA’s thresholds. 
Further, if judged solely by their financial consequences for nonfederal parties, there was little difference between some of these statutes and rules and the ones that had been identified as federal mandates with costs or expenditures exceeding UMRA’s thresholds. Although the examples cited in our review were limited to a 2-year period, our findings on the limited effect and applicability of UMRA are similar to the data reported in previous GAO, CBO, and OMB reports on the implementation of UMRA. The findings raise the question of whether UMRA’s procedures, definitions, and exclusions adequately capture and subject to scrutiny federal statutory and regulatory actions that might impose significant financial burdens on affected nonfederal parties. This report provides descriptive information and analysis regarding UMRA’s implementation, focusing specifically on the coverage and identification of federal mandates under UMRA. We are making no specific recommendations for executive action nor identifying any specific matters for consideration by Congress at this time. As requested, we will be continuing our work on other aspects of UMRA. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. We will then send copies of this report to the Director of OMB and will provide copies to others on request. It will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or daltonp@gao.gov. Key contributors to this report were Curtis Copeland, Naved Qureshi, Michael Rose, and Tim Bober. 
You asked us to describe and provide examples of how federal statutes and rules with potentially significant financial implications for state, local, and tribal governments or the private sector may be enacted or issued without being identified as federal mandates under titles I and II of UMRA, which respectively address the legislative and regulatory processes. Our specific reporting objectives were to: 1. Describe the applicable procedures, definitions, and exclusions for identifying federal mandates in statutes and rules under UMRA. 2. Identify statutes and final rules that contained federal mandates under UMRA. 3. Provide examples of statutes and final rules that were not identified as federal mandates, but that affected parties might perceive as “unfunded” mandates, and the reasons these statutes and rules were not federal mandates under UMRA. As agreed with your staff, we focused on statutes enacted and final rules published during 2001 and 2002 to address the second and third objectives. To address the first objective, regarding the procedures, definitions, and exclusions applicable to the identification of federal mandates under titles I and II of UMRA, we reviewed the act and other related guidance documents and reports on the implementation of UMRA. These other related documents included the various annual reports on UMRA prepared by CBO and OMB, materials used in a congressional parliamentary process training seminar on unfunded mandates and points of order, and OMB’s March 1995 guidance to federal agencies on the implementation of title II. We also interviewed persons knowledgeable about the implementation of UMRA in congressional offices, CBO, and OMB. To address the second and third objectives regarding statutes that were and were not identified as federal mandates under title I of UMRA, we consulted with the CBO officials responsible for preparing UMRA statements on individual bills.
The CBO officials identified the 5 statutes enacted during 2001 and 2002 that contained federal mandates at or above UMRA’s cost thresholds. At our request, they also identified 43 examples of statutes enacted during that 2-year period that they believed, based on professional judgment, had potential intergovernmental or private sector impacts but had not been identified as containing mandates at or above UMRA’s thresholds. (We did not ask CBO to compile a comprehensive list of all statutes passed by the 107th Congress that may have had intergovernmental or private sector impacts.) To assure that this set of examples was relevant for our purposes and to confirm CBO’s characterization of the potential impacts of these statutes and the reasons why provisions were or were not identified as federal mandates, we reviewed available source material on each of these statutes. In particular, we examined the detailed descriptions and information on each statute that were contained in CBO mandate statements, cost estimates, annual reports, and testimony, as well as other relevant information on each statute from the Legislative Information System of Congress. To address the second and third objectives regarding final rules that were and were not identified as federal mandates under title II of UMRA, we conducted a content analysis of all 122 major and/or economically significant final rules that agencies published in 2001 or 2002 to identify the rules that could have significant financial effects on nonfederal parties and determine why they were or were not considered federal mandates. We chose not to review other rules because, by definition, they were less likely to have significant effects on nonfederal parties, although arguably some could have had a significant effect. To arrive at our final set of 122 rules, we relied primarily on the list of 119 major rules published during the 2-year period, as identified in GAO’s compilation of reports on federal agency major rules.
Our Office of General Counsel takes several steps to assure the completeness of the list of major rules; however, to generally corroborate that this list of major rules included those that could have significant effects on nonfederal parties, we also compared GAO’s major rules list to the rules identified as “economically significant” by the Regulatory Information Service Center (RISC). As a result of this exercise, we supplemented our initial list with 3 additional rules. We then reviewed the Federal Register notices that agencies published for all 122 of these rules to confirm that they were major and/or economically significant and to identify whether, and to what extent, they imposed requirements on nonfederal parties. On the basis of our comparisons and reviews, we concluded that these data were sufficiently reliable for our purposes. Because we were asked to identify rules that affected parties might perceive as intergovernmental or private sector mandates, even if not technically identified as such under UMRA, our initial screening used a broader definition of a potential mandate than delineated in UMRA. For this screening, we used the information in the published rules to make a team consensus judgment on whether a nonfederal party (state, local, and tribal governments or the private sector) might consider provisions of the rule to impose requirements or mandates that had at least some costs or negative financial effects. In particular, we focused on identifying rules that imposed new requirements and costs (direct or indirect) on affected parties. For each rule identified as including a potential “mandate,” team members then independently reviewed the text of each rule to code the reasons agencies may have cited that their rules were not federal mandates under UMRA, as well as other reasons available under UMRA that might have applied to these rules. 
The team members generally concurred in their initial coding, and, based on team discussions, we were able to resolve any differences and determine a team consensus judgment on the appropriate coding for each rule. To provide corroboration that the examples of statutes CBO identified and final rules we identified to address objective three were perceived by affected parties as having “unfunded mandate” implications, we shared our draft lists of examples with national organizations representing other levels of government. These organizations included the National Association of Counties, National Conference of State Legislatures, National Governors Association, the National League of Cities, and the U.S. Conference of Mayors. Their representatives generally concurred that the statutes and rules we focused on were perceived by their members to have “mandate” implications and that we had not left out any major examples from our time period that they believed were important. One limitation of our review was that, in agreement with your staff, we focused on statutes enacted and final rules published during 2001 and 2002. Those statutes and rules may not reveal all of the ways in which provisions with significant cost effects might not be identified as federal mandates. Neither CBO nor we reviewed the many bills that were not enacted and rules that were proposed, but not finalized, during 2001 and 2002. However, our findings and the specific examples we identified were sufficient to illustrate how statutes and rules with potentially significant effects on nonfederal parties might not be identified as federal mandates under UMRA. In addition, our findings for this review were consistent with those in prior GAO, CBO, and OMB reports on the implementation of UMRA. In general, we also recognize that perceived “unfunded mandates” could also result from other nonstatutory, nonregulatory federal actions, such as Homeland Security threat level adjustments. 
However, UMRA does not cover such nonstatutory or nonregulatory actions, so they were out of the scope of this review. We conducted our review from August 2003 through February 2004 in Washington, D.C., in accordance with generally accepted government auditing standards. On April 22, 2004, we provided a draft of this report to the Director of the Office of Management and Budget (OMB) for his review and comment. On April 29, 2004, an OMB representative notified us that OMB had no comments on our report. We also provided the draft to CBO officials for their technical review. We incorporated their comments and suggestions as appropriate. CBO provided us the following examples of laws enacted in 2001 and 2002 that it believed had impacts on nonfederal parties, but were not identified as federal mandates at or above applicable cost thresholds (see table 5). A number of groups representing nonfederal parties generally agreed that these examples were statutes perceived to have “unfunded mandate” implications. The following table presents information on each of the nine final rules published by federal regulatory agencies during 2001 and 2002 that the agencies identified as federal mandates under UMRA (see table 6). For each rule, we provide (1) GAO’s identification number for the rule, (2) the title of the rule and its date of publication in the Federal Register, (3) the agency that published the rule, (4) summary information about the potential costs or other negative financial effects of the rule on affected nonfederal parties, and (5) the agency’s statement, as it appeared in the Federal Register notice, regarding the applicability of UMRA. The following table provides information on 65 major or economically significant final rules published during 2001 and 2002 that did not trigger UMRA but that would result in at least some costs or negative financial effects on state, local, and tribal governments or the private sector (see table 7).
The table displays the various reasons that agencies cited or could have cited to explain why the rules did not trigger UMRA. Code “A” identifies reasons the agencies cited, and code “O” identifies other reasons that could have applied. Note that only 11 of the 14 possible reasons under UMRA were applicable to any of these rules. The following table presents information on 29 final rules published by federal regulatory agencies during 2001 and 2002 that did not trigger UMRA but that had potentially significant costs or financial effects on state, local, and tribal governments or the private sector (see table 8). For each rule, we provide (1) GAO’s unique identification number for the rule, (2) the title of the rule and its date of publication in the Federal Register, (3) the agency that published the rule, (4) summary information about the potential costs or negative financial effects of the rule on affected nonfederal parties, and (5) the agency’s statement in the Federal Register notice, if any, regarding the applicability of UMRA.
The Unfunded Mandates Reform Act of 1995 (UMRA) was enacted to address concerns about federal statutes and rules that require state, local, and tribal governments or the private sector to expend resources to achieve legislative goals. UMRA generates information about the nature and size of potential federal mandates to assist Congress and agency decision makers in their consideration of proposed legislation and rules. However, concerns about actual or perceived federal mandates continue. To provide information and analysis regarding UMRA's implementation, GAO was asked to (1) describe the applicable procedures, definitions, and exclusions under UMRA for identifying federal mandates in statutes and rules, (2) identify statutes and final rules that contained federal mandates under UMRA, and (3) provide examples of statutes and final rules that were not identified as federal mandates, but that affected parties might perceive as "unfunded mandates," and the reasons these statutes and rules were not federal mandates under UMRA. GAO focused on statutes enacted and final rules issued in 2001 and 2002 to address the second and third objectives. UMRA generally requires congressional committees and the Congressional Budget Office (CBO) to identify and estimate the costs of federal mandates contained in proposed legislation and federal agencies to do so for federal mandates contained in their rules. Identification of mandates is a complex process with multiple definitions, exclusions, and cost thresholds. Also, some legislation and rules may be enacted or issued via procedures that do not trigger UMRA reviews. In 2001 and 2002, 5 of 377 statutes enacted and 9 of 122 major or economically significant final rules issued were identified as containing federal mandates at or above UMRA's thresholds. Of the other federal actions in those 2 years, at least 43 statutes and 65 rules contained new requirements on nonfederal parties that might be perceived as "unfunded mandates." 
For 24 of those statutes and 26 of those rules, CBO or federal agencies had determined that the estimated direct costs or expenditures would not meet or exceed applicable thresholds. For the remaining examples of statutes, most often UMRA did not require a CBO review prior to their enactment. The remaining rules most often did not trigger UMRA because they were issued by independent regulatory agencies. Despite the determinations made under UMRA, some statutes and rules not triggering UMRA’s thresholds appeared to have potential financial impacts on affected nonfederal parties similar to those of the actions that were identified as containing mandates at or above the act’s thresholds.
Each weekday, 11.3 million passengers in 35 metropolitan areas and 22 states use some form of rail transit (commuter, heavy, or light rail). Commuter rail systems typically operate on railroad tracks and provide regional service (e.g., between a central city and adjacent suburbs). Commuter rail systems are traditionally associated with older industrial cities, such as Boston, New York, Philadelphia, and Chicago. Heavy rail systems—subway systems like New York City’s transit system and Washington, D.C.’s Metro—typically operate on fixed rail lines within a metropolitan area and have the capacity for a heavy volume of traffic. Amtrak operates the nation’s primary intercity passenger rail service over a 22,000-mile network, primarily over leased freight railroad tracks. Amtrak serves more than 500 stations (240 of which are staffed) in 46 states and the District of Columbia, and it carried more than 25 million passengers in 2004. Figure 1 identifies the geographic location of rail transit systems and Amtrak within the United States. According to passenger rail officials and passenger rail experts, certain characteristics of domestic and foreign passenger rail systems make them inherently vulnerable to terrorist attacks and therefore difficult to secure. By design, passenger rail systems are open (i.e., have multiple access points, hubs serving multiple carriers, and, in some cases, no barriers) so that they can move large numbers of people quickly. In contrast, the U.S. commercial aviation system is housed in closed and controlled locations with few entry points. The openness of passenger rail systems can leave them vulnerable because operator personnel cannot completely monitor or control who enters or leaves the systems. 
In addition, other characteristics of some passenger rail systems—high ridership, expensive infrastructure, economic importance, and location (e.g., large metropolitan areas or tourist destinations)—also make them attractive targets for terrorists because of the potential for mass casualties and economic damage and disruption. Moreover, some of these same characteristics make passenger rail systems difficult to secure. For example, the number of riders who pass through a subway system—especially during peak hours—may make the sustained use of some security measures, such as metal detectors, difficult because they could result in long lines that could disrupt scheduled service. In addition, multiple access points along extended routes could make the cost of securing each location prohibitive. Balancing the potential economic impacts of security enhancements with the benefits of such measures is a difficult challenge. Securing the nation’s passenger rail systems is a shared responsibility requiring coordinated action on the part of federal, state, and local governments; the private sector; and rail passengers who ride these systems. Since the September 11 attacks, the role of federal government agencies in securing the nation’s transportation systems, including passenger rail, has continued to evolve. Prior to September 11, DOT—namely FTA and FRA—was the primary federal entity involved in passenger rail security matters. In response to the attacks of September 11, Congress passed the Aviation and Transportation Security Act (ATSA), which created TSA within DOT and defined its primary responsibility as ensuring security in all modes of transportation. The act also gave TSA regulatory authority for security over all transportation modes. ATSA does not specify TSA’s roles and responsibilities in securing the maritime and land transportation modes at the level of detail it does for aviation security.
Instead, the act broadly identifies that TSA is responsible for ensuring the security of all modes of transportation. With the passage of the Homeland Security Act of 2002, TSA was transferred, along with over 20 other agencies, to the Department of Homeland Security. With the creation of DHS in 2002, one of its components, ODP, became primarily responsible for overseeing security funding for passenger rail systems. ODP is the principal component of DHS responsible for preparing the United States for acts of terrorism and has primary responsibility within the executive branch for assisting and supporting DHS, in coordination with other directorates and entities outside of the department, in conducting risk analysis and risk management activities of state and local governments. In carrying out its mission, ODP provides training, funds for the purchase of equipment, support for the planning and execution of exercises, technical assistance, and other support to assist states, local jurisdictions, and the private sector to prevent, prepare for, and respond to acts of terrorism. Through the Urban Area Security Initiative (UASI) grant program, ODP has provided grants to urban areas to help enhance their overall security and preparedness level to prevent, respond to, and recover from acts of terrorism. The DHS Appropriations Act of 2005 appropriated $150 million for rail transit, intercity passenger rail, freight rail, and transit agency security grants. With this funding, ODP created and is administering two grant programs focused specifically on transportation security, the Transit Security Grant Program and the Intercity Passenger Rail Security Grant Program. These programs provide financial assistance to address security preparedness and enhancements for transit (to include commuter, heavy, and light rail systems; intracity bus; and ferry) and intercity rail systems. 
While TSA is the lead federal agency for ensuring the security of all transportation modes, FTA conducts nonregulatory safety and security activities, including safety and security-related training, research, technical assistance, and demonstration projects. In addition, FTA promotes safety and security through its grant-making authority. FRA has regulatory authority for rail safety over commuter rail operators and Amtrak, and employs over 400 rail inspectors that periodically monitor the implementation of safety and security plans at these systems. State and local governments, passenger rail operators, and private industry are also important stakeholders in the nation’s rail security efforts. State and local governments may own or operate a significant portion of the passenger rail system. Even when state and local governments are not owners and operators, they are directly affected by passenger rail systems that run within and through their jurisdictions. Consequently, the responsibility for responding to emergencies involving the passenger rail infrastructure often falls to state and local governments. Passenger rail operators, which can be public or private entities, are responsible for administering and managing passenger rail activities and services. Passenger rail operators can directly operate the service provided or contract for all or part of the total service. Although all levels of government are involved in passenger rail security, the primary responsibility for securing passenger rail systems rests with the passenger rail operators. 
In recent years, we, along with Congress (most recently through the Intelligence Reform and Terrorism Prevention Act of 2004), the executive branch (e.g., in presidential directives), and the 9/11 Commission have required or advocated that federal agencies with homeland security responsibilities utilize a risk management approach to help ensure that finite national resources are dedicated to assets or activities considered to have the highest security priority. We have concluded that without a risk management approach, there is limited assurance that programs designed to combat terrorism are properly prioritized and focused. Thus, risk management, as applied in the homeland security context, can help to more effectively and efficiently prepare defenses against acts of terrorism and other threats. A risk management approach entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, performing risk assessments, evaluating alternative actions to reduce identified risks by preventing or mitigating their impact, selecting which actions to undertake, and implementing and monitoring those actions. Figure 2 depicts a risk management cycle that is our synthesis of government requirements and prevailing best practices that we have previously reported. Setting strategic goals, objectives, and constraints is a key first step in implementing a risk management approach and helps to ensure that management decisions are focused on achieving a strategic purpose. These decisions should take place in the context of an agency’s strategic plan that includes goals and objectives that are clear, concise, and measurable. Risk assessment, a critical element of a risk management approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks.
Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application often involves assessing three key elements—threat, criticality, and vulnerability: A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities. A criticality or consequence assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy, as a basis for identifying which structures or processes are relatively more important to protect from attack. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. Information from these three assessments contributes to an overall risk assessment that characterizes risks on a scale such as high, medium, or low and provides input for evaluating alternatives and management prioritization of security initiatives. The risk assessment element in the overall risk management cycle may be the largest change from standard management steps and is central to informing the remaining steps of the cycle. The next step in a risk management approach—alternatives evaluation— considers what actions may be needed to address identified risks, the associated costs of taking these actions, and any resulting benefits. This information is then to be provided to agency management to assist in the selection of alternative actions best suited to the unique needs of the organization. An additional step in the risk management approach is the implementation and monitoring of actions taken to address the risks, including evaluating the extent to which risk was mitigated by these actions. 
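The way the three assessment elements feed an overall risk characterization can be sketched in code. The following is a purely illustrative example: the 1-5 rating scale, the multiplicative scoring rule, and the high/medium/low thresholds are invented for illustration and are not GAO's, DHS's, or TSA's actual methodology.

```python
# Illustrative sketch: combine threat, vulnerability, and consequence
# (criticality) ratings into an overall high/medium/low risk level.
# The 1-5 scale, scoring rule, and thresholds are hypothetical.

def risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    """Multiply the three 1-5 ratings into a composite score (1-125)."""
    for value in (threat, vulnerability, consequence):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
    return threat * vulnerability * consequence

def characterize(score: int) -> str:
    """Bucket a composite score into a qualitative risk level."""
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

# Example: a high-threat, highly vulnerable, high-consequence asset
score = risk_score(threat=5, vulnerability=4, consequence=5)
print(score, characterize(score))  # 100 high
```

In such a scheme, a low rating on any one element pulls the composite score down sharply, which is one reason all three assessments must be completed before assets can be meaningfully compared.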
Once the agency has implemented the actions to address risks, it should develop criteria for and continually monitor the performance of these actions to ensure that they are effective and also reflect evolving risk. A number of federal departments and agencies have risk management and critical infrastructure protection responsibilities stemming from various requirements. The Homeland Security Act of 2002, which created DHS, directed the department’s Information Analysis and Infrastructure Protection (IAIP) Directorate to utilize a risk management approach in coordinating the nation’s critical infrastructure protection efforts. This includes using risk assessments to set priorities for protective and support measures by the department, other federal agencies, state and local government agencies and authorities, the private sector, and other entities. Homeland Security Presidential Directive 7 (HSPD-7) defines critical infrastructure protection responsibilities for DHS, sector-specific agencies (those federal agencies given responsibility for transportation, energy, telecommunications, and so forth), and other departments and agencies. The President instructs federal departments and agencies to identify, prioritize, and coordinate the protection of critical infrastructure to prevent, deter, and mitigate the effects of terrorist attacks. The Secretary of DHS is assigned several responsibilities by HSPD-7, including establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. To ensure the coverage of critical sectors, HSPD-7 designated sector-specific agencies for 17 critical infrastructure sectors. 
These agencies are responsible for infrastructure protection activities in their assigned sectors, including coordinating and collaborating with relevant federal agencies, state and local governments, and the private sector to carry out their responsibilities and facilitating the sharing of information about vulnerabilities, incidents, potential protective measures, and best practices. Pursuant to HSPD-7 and the National Infrastructure Protection Plan (NIPP), DHS was designated as the sector-specific agency for the transportation sector, a responsibility the department has delegated to TSA. As the sector-specific agency for transportation, TSA is required to develop a transportation sector-specific plan (TSSP) for identifying, prioritizing, and protecting critical transportation infrastructure and key resources that will provide key input to the broader National Infrastructure Protection Plan to be prepared by IAIP. DHS issued an interim NIPP in February 2005 that was intended to serve as a road map for how DHS and stakeholders—including other federal agencies, the private sector, and state and local governments—should use risk management principles for determining how to prioritize activities related to protecting critical infrastructure and key resources within and among each of the 17 sectors in an integrated, coordinated fashion. DHS expects the next iteration of the NIPP to be issued in November 2005, with the sector-specific plans, including the TSSP, being incorporated into this plan in February 2006. HSPD-7 also requires DHS to coordinate with DOT on all transportation security matters. DHS component agencies have taken various steps to assess the risk posed by terrorism to U.S. passenger rail systems. ODP has developed and implemented a risk assessment methodology intended to help passenger rail operators and others enhance their capacity to respond to terrorist incidents and identify and prioritize security countermeasures. 
As of July 2005, ODP had completed 7 risk assessments with rail operators and 12 others were under way. Further, TSA completed a threat assessment for mass transit and rail and has begun to identify critical rail assets, but it has not yet completed an overall risk assessment for the passenger rail industry. DHS is developing guidance to help these and other sector-specific agencies work with stakeholders to identify and analyze risk. In 2002, ODP began conducting risk assessments of passenger rail operators through its Mass Transit Technical Assistance program. These assessments are intended to help passenger rail operators and port authorities enhance their capacity and preparedness to respond to terrorist incidents involving weapons of mass destruction, and identify and prioritize security countermeasures and emergency response capabilities. ODP’s approach to risk assessment is generally consistent with the risk assessment component of our risk management approach. The agency has worked with passenger rail operators and others to complete several risk assessments. As of July 2005, ODP had completed 7 risk assessments in collaboration with passenger rail operators. Twelve additional risk assessments are under way, and an additional 11 passenger rail operators have requested assistance through this program. The results of the threat, criticality, vulnerability, and impact assessments are then combined into an overall risk assessment used to evaluate the relative risk among various assets, weapons, and modes of attack. This is intended to give operators an indication of which asset types and threat scenarios carry the highest risk and, accordingly, are likely candidates for early risk mitigation action. According to rail operators who have used ODP’s risk assessment methodology and commented about it to DHS or us, the method has been successful in helping to devise risk reduction strategies to guide security-related investments.
For example, between September 2002 and March 2003, ODP’s technical assistance team worked with the Port Authority of New York and New Jersey (PANYNJ) to conduct a risk assessment of all of its assets—its Port Authority Trans-Hudson (PATH) passenger rail system, as well as airports, ports, interstate highway crossings, and commercial properties. According to PANYNJ officials, the authority was able to develop and implement a risk reduction strategy that enabled it to identify and set priorities for improvements in security and emergency response capability that are being used to guide security investments. According to authority officials, the risk assessment that was conducted was instrumental in obtaining management approval for a 5-year, $500 million security capital investment program, as it provided a risk-based justification for these investments. The six other passenger rail operators that have completed ODP’s risk assessment process also stated that they valued the process. Specifically, operators said that the assessments enabled them to prioritize investments based on risk and are already allowing or are expected to allow them to effectively target and allocate resources toward security measures that will have the greatest impact on reducing risk across their system. On the basis of its own experience with conducting risk assessments in the field, and in keeping with its mission to develop and implement a national program to enhance the capacity of state and local agencies to respond to incidents of terrorism, ODP has offered to help other DHS components and federal agencies to develop risk assessment tools, according to ODP officials. For example, ODP is partnering with FRA, TSA, the American Association of Railroads (AAR), and others to develop a risk assessment tool for freight rail corridors. 
In a separate federal outreach effort, ODP worked with TSA to establish a Federal Risk Assessment Working Group to promote interagency collaboration and information sharing. In addition, in keeping with its mission to deliver technical assistance and training, ODP has partnered with the American Public Transportation Association (APTA) to inform passenger rail operators about its risk assessment technical assistance program. Since June 2004, ODP has attended five APTA conferences or workshops where it has set up information booths, made the tool kit available, and conducted seminars to educate passenger rail operators about the risk assessment process and its benefits. ODP has leveraged its grant-making authority to promote risk-based funding decisions for passenger rail. For example, passenger rail operators must have completed a risk assessment to be eligible for financial assistance through the fiscal year 2005 Transit Security Grant program administered by ODP. To receive these funds, passenger rail operators are also required to have a security and emergency preparedness plan that identifies how the operator intends to respond to security gaps identified by risk assessments. This plan, along with a regional transit security strategy prepared by regional transit stakeholders, will serve as the basis for determining how the grant funds are to be allocated. Risk assessments are also a key driver of federal funds distributed through ODP’s fiscal year 2005 Intercity Passenger Rail Grant Program. This $7.1 million program provides financial assistance to Amtrak for the protection of critical infrastructure and emergency preparedness activities along Amtrak’s Northeast Corridor and its hub in Chicago. Amtrak is required to conduct a risk assessment of these areas in collaboration with ODP, in order to receive the grant funds. 
A recent review of Amtrak’s security posture and programs conducted by the RAND Corporation and funded by FRA in 2004 found that no comprehensive terrorism risk assessment of Amtrak has been conducted that would provide an empirical baseline for investment prioritization and decision making for Amtrak’s security policies and investment plans. As another condition for receiving the grant funds, Amtrak is required to develop a security and emergency preparedness plan that, along with the risk assessment, is to serve as the basis for proposed allocations of grant funding. According to an Amtrak security official, it welcomes the risk assessment effort and plans to use the results of the assessment to guide its security plans and investments. According to ODP officials, as of July 2005, the Amtrak risk assessment was nearly 50 percent complete. In October 2004, TSA completed an overall threat assessment for both mass transit and passenger and freight rail modes. TSA began conducting a second risk assessment element—criticality assessments of passenger rail stations—in the spring of 2004, but the effort had not been completed at the time of our review. According to TSA, a criticality assessment tool was developed that considers multiple factors, such as the potential for loss of life or effects on public health; the economic impact of the loss of function of the asset and the cost of reconstitution; and the local, regional, or national symbolic importance of the asset. These factors were to be used to arrive at a criticality score that, in turn, would enable the agency to rank assets and facilities based on relative importance, according to TSA officials. To date, TSA has assigned criticality scores to nearly 700 passenger rail stations. In May 2005, TSA began conducting assessments for other passenger rail assets such as bridges and tunnels. 
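The multi-factor scoring and ranking that TSA's criticality tool is described as performing can be sketched as a weighted sum over factor ratings. The factor names, weights, rating scale, and station data below are hypothetical illustrations, not TSA's actual tool or data.

```python
# Hypothetical sketch of a multi-factor criticality score: each asset is
# rated 1-10 on several factors, a weighted sum yields its score, and
# assets are ranked by score. Weights and ratings are invented.

WEIGHTS = {
    "loss_of_life": 0.5,        # potential casualties / public health impact
    "economic_impact": 0.3,     # loss of function and reconstitution cost
    "symbolic_importance": 0.2, # local, regional, or national symbolism
}

def criticality_score(ratings: dict) -> float:
    """Weighted sum of 1-10 factor ratings."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

# Invented sample stations for illustration only
stations = {
    "Station A": {"loss_of_life": 9, "economic_impact": 8, "symbolic_importance": 7},
    "Station B": {"loss_of_life": 4, "economic_impact": 6, "symbolic_importance": 2},
    "Station C": {"loss_of_life": 7, "economic_impact": 5, "symbolic_importance": 9},
}

# Rank assets by relative importance, most critical first
ranked = sorted(stations, key=lambda s: criticality_score(stations[s]), reverse=True)
print(ranked)
```

A ranking like this is only as defensible as the weights behind it, which is why the report notes that a senior group of experts is expected to review and adjust the resulting scores.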
TSA officials told us that as of July 2005, they had completed 73 criticality assessments for bridge and tunnel assets and expect to conduct approximately 370 additional assessments in these categories. Once TSA has completed its criticality assessment, a senior group of transportation security experts will review these scores and subsequently rank and prioritize them. As of July 2005, TSA had not established a time frame for completing criticality assessments for passenger rail assets or for ranking assets, and had not identified whether it planned to do so. In 2003, TSA officials stated that they planned to work with transportation stakeholders to rank assets and facilities in terms of their criticality. HSPD-7 requires sector-specific agencies such as TSA to collaborate with all relevant stakeholders, including federal departments and agencies, state and local governments, and others. In addition, DHS’s interim NIPP states that sector-specific agencies, such as TSA, are expected to work with stakeholders—such as rail operators—to determine the most effective means of obtaining and analyzing information on assets. While TSA’s methodology for conducting criticality assessments calls for “facilitated sessions” involving TSA modal specialists, DOT modal specialists, and trade association representatives, these sessions with stakeholders have not been held. According to TSA officials, their final methodology for conducting criticality assessments did not include DOT modal specialists and trade associations. With respect to rail operators, TSA officials explained that their risk assessment process does not require operators’ involvement. TSA analysts said they have access to a great deal of information (such as open source records, satellite imagery, and insurance industry data) that can facilitate the assessment process. 
However, when asked to comment on TSA’s ability to identify critical assets in passenger rail systems, APTA officials and 10 rail operators we interviewed told us it would be difficult for TSA to complete this task without their direct input and rail system expertise. TSA plans to rely on asset criticality rankings to prioritize which assets it will focus on in conducting vulnerability assessments. That is, once an asset, such as a passenger rail station, is deemed to be most critical, then TSA would focus on determining the station’s vulnerability to attacks. TSA plans to conduct on-site vulnerability assessments for those assets deemed most critical. For assets that are deemed to be less critical, TSA has developed a software tool that it has made available to passenger rail and other transportation operators for them to use on a voluntary basis to assess the vulnerability of their assets. As of July 2005, the tool had not yet been used. According to APTA officials, passenger rail operators may be reluctant to provide vulnerability information to TSA without knowing how the agency intends to use such information. According to TSA, it is difficult, if not impossible, to project any timelines regarding completion of vulnerability assessments in the transportation sector because rail operators are not required to submit them. In this regard, while the rail operators are not required to submit this information, as the sector-specific agency for transportation, TSA is required by HSPD-7 to complete vulnerability assessments for the transportation sector. Figure 3 illustrates the overall progress TSA had made in conducting risk assessments for passenger rail assets as of July 2005. We recognize that TSA’s risk assessment effort is still evolving and TSA has had other pressing priorities, such as meeting the legislative requirements related to aviation security.
However, until all three assessments of rail systems—threat, criticality, and vulnerability—have been completed in sequence, and until TSA determines how to use the results of these assessments to analyze and characterize risk (e.g., whether high, medium, or low), it may not be possible to prioritize passenger rail assets and guide investment decisions about protecting them. Finalizing a methodology for assessing risk to passenger rail and other transportation assets and conducting the assessments are key steps needed to produce the plans required by HSPD-7 and the Intelligence Reform and Terrorism Prevention Act of 2004. DHS and TSA have missed both deadlines for producing these plans. Specifically, DHS and TSA have not yet produced the TSSP required by HSPD-7 to be issued in December of 2004, though a draft was prepared in November 2004. DHS and TSA also missed the April 1, 2005, deadline for completing the national strategy for transportation security required by the Intelligence Reform and Terrorism Prevention Act of 2004. In an April 2005 letter to Congress addressing the missed deadline, the DHS Deputy Secretary identified the need to more aggressively coordinate the development of the strategy with other relevant planning work such as the TSSP, to include further collaboration with DOT modal administrations and DHS components. The Deputy Secretary further stated that DHS expected to finish the strategy within 2 to 3 months. However, as of July 31, 2005, the strategy had not been completed. In April 2005, senior DHS and TSA officials told us that in addition to DOT, industry groups such as APTA and AAR would also be more involved in developing the TSSP and other strategic plans. However, as of July 2005, TSA had not yet engaged these stakeholders in the development of these plans. 
As TSA, other sector-specific agencies, and ODP move forward with risk assessment activities, DHS is concurrently developing guidance intended to help these agencies work with their stakeholders to assess risk. HSPD-7 requires DHS to establish uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. To meet this requirement, DHS has, among other things, been working for nearly 2 years on a risk assessment framework through IAIP. This framework is intended to help the private sector and state and local governments to develop a consistent approach to analyzing risk and vulnerability across infrastructure types and across entire economic sectors, develop consistent terminology, and foster consistent results. The framework is also intended to enable a federal-level assessment of risk in general, and comparisons among risks, for purposes of resource allocation and response planning. DHS has informed TSA that this framework will provide overarching guidance to sector-specific agencies on how various risk assessment methodologies may be used to analyze, normalize, and prioritize risk within and among sectors. The interim NIPP states that the ability to rationalize, or normalize, results of different risk assessments is an important goal for determining risk-related priorities and guiding investments. One core element of the DHS framework—defining concepts, terminology, and metrics for assessing risk—had not yet been completed. The completion date for this element—initially due in September 2004—has been extended twice, with the latest due date in June 2005. However, as of July 31, 2005, this element had not been completed.
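One way the "normalization" goal described in the interim NIPP might be realized is by rescaling scores produced under different methodologies onto a common range so that results can be compared across sectors. The following min-max rescaling sketch is hypothetical; the scales and figures are invented and do not represent any DHS framework element.

```python
# Hypothetical sketch: two agencies score risk on different scales
# (e.g., 1-125 vs. 0-10). Linear min-max rescaling maps both onto
# [0, 1] so results can be compared. All numbers are invented.

def normalize(score: float, lo: float, hi: float) -> float:
    """Linearly map a score from the interval [lo, hi] onto [0, 1]."""
    if hi <= lo:
        raise ValueError("invalid scale bounds")
    return (score - lo) / (hi - lo)

# An asset scored 100 on a 1-125 scale vs. one scored 7 on a 0-10 scale
a = normalize(100, 1, 125)   # roughly 0.80
b = normalize(7, 0, 10)      # 0.70
print(round(a, 2), round(b, 2))
```

Even this simple rescaling assumes the underlying scales are linear and comparable in meaning, which illustrates why a shared framework of concepts, terminology, and metrics must be defined before results from different methodologies can be meaningfully compared.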
Because neither this element nor the framework as a whole has been finalized or provided to TSA or other sector-specific agencies, it is not clear what impact, if any, DHS’s framework may have on the ongoing risk assessments conducted by TSA, ODP, and others or on the methodologies they use, and whether or how DHS will be able to use these results to compare risks and prioritize homeland security investments among sectors. Until DHS finalizes this framework, and until TSA completes its risk assessment methodology, it may not be possible to determine whether the different methodologies used by TSA and ODP for conducting threat, criticality, and vulnerability assessments generate disparate qualitative and quantitative results or how they can best be compared and analyzed. In addition, TSA and others will have difficulty determining whether TSA is unnecessarily duplicating risk management activities already under way at other agencies and whether other agencies’ risk assessment methodologies, and the data generated by these methodologies, can be leveraged to complete the assessments required for the transportation sector. In the future, the implementation of DHS’s departmentwide proposed reorganization could affect decisions relating to critical infrastructure protection as new directorates are established, such as the directorates of policy and preparedness, and other preparedness assets are consolidated from across the department. FTA and FRA were the primary federal agencies involved in passenger rail security matters prior to the creation of TSA. Before and after September 11, these two agencies launched a number of initiatives designed to strengthen passenger rail security. TSA also took steps to strengthen rail security, including issuing emergency security directives to rail operators and testing emerging rail security technologies for screening passengers and baggage.
Rail industry stakeholders and federal agency officials raised questions about how effectively DHS had collaborated with them on rail security issues. DHS and DOT have signed a memorandum of understanding intended to identify ways that collaboration with federal and industry stakeholders might be improved. Prior to the creation of TSA in November 2001, DOT agencies (i.e., modal administrations)—notably FTA and FRA—were primarily responsible for the security of passenger rail systems. These agencies undertook a number of initiatives to enhance the security of passenger rail systems after September 11. FTA, using an $18.7 million appropriation by the Department of Defense and Emergency Supplemental Appropriations Act of 2002, launched a multipart transit security initiative, much of which is still in place. The initiative included security readiness assessments, technical assistance, grants for emergency response drills, and training. For example, in 2003, FTA instituted the Transit Watch campaign—a nationwide safety and security awareness program designed to encourage the active participation of transit passengers and employees in maintaining a safe transit environment. The program provides information and instructions to transit passengers and employees so that they know what to do and whom to contact in the event of an emergency in a transit setting. FTA plans to continue this initiative, in partnership with TSA and ODP, and offer additional security awareness materials that address unattended bags and emergency evacuation procedures for transit agencies. In addition, FTA has issued guidance, such as its Top 20 Security Program Action Items for Transit Agencies, which recommends measures for passenger rail operators to implement into their security programs to improve both security and emergency preparedness. 
FTA has also used research and development funds to develop guidance for security design strategies to reduce the vulnerability of transit systems to acts of terrorism. In November 2004, FTA provided rail operators with security considerations for transportation infrastructure. This guidance provided recommendations intended to help operators deter and minimize attacks against their facilities, riders, and employees by incorporating security features into the design of rail infrastructure. FRA has also taken a number of actions to enhance passenger rail security since September 11. For example, it has assisted commuter railroads in developing security plans, reviewed Amtrak’s security plans, and helped fund FTA security readiness assessments for commuter railroads. More recently, in the wake of the Madrid terrorist bombings, nearly 200 FRA inspectors, in cooperation with DHS, conducted multi-day team inspections of each of the 18 commuter railroads and Amtrak to determine what additional security measures had been put into place to prevent a similar occurrence in the United States. FRA also conducted research and development projects related to passenger rail security. These projects included rail infrastructure security and trespasser monitoring systems and passenger screening and manifest projects, including explosives detection. Although DOT modal administrations now play a supporting role in transportation security matters since the creation of TSA, they remain important partners in the federal government’s efforts to improve rail security, given their role in funding and regulating the safety of passenger rail systems. Moreover, as TSA moves ahead with its passenger rail security initiatives, FTA and FRA are continuing their passenger rail security efforts. In response to the March 2004 commuter rail attacks in Madrid and federal intelligence on potential threats against U.S. 
passenger rail systems, TSA issued security directives to the passenger rail industry in May 2004. TSA issued these security directives to establish a consistent baseline standard of protective measures for all passenger rail operators, including Amtrak. The directives were not related to, and were issued independent of, TSA’s efforts to conduct risk assessments to prioritize rail security needs. TSA considered the measures required by the directives to be mandatory security standards, to be implemented within 72 hours of issuance by all passenger rail operators nationwide. In an effort to provide some flexibility to the industry, the directives allowed rail operators to propose alternative measures to TSA in order to meet the required measures. Table 1 contains examples of security measures required by these directives. Although TSA issued these directives, it is unclear how TSA developed the required measures contained in the directives, how TSA plans to monitor and ensure compliance with the measures, how rail operators are to implement the measures, and which entities are responsible for their implementation. According to the former DHS Undersecretary for Border and Transportation Security, the directives were developed based upon consultation with the industry and a review of best practices in passenger rail and mass transit systems across the country and were intended to provide a federal baseline standard for security. TSA officials told us that the directives were based upon FTA and APTA best practices for rail security. Specifically, TSA stated that it consulted a list of the top 20 actions FTA identified that rail operators can take to strengthen security, FTA-recommended protective measures and activities for transit agencies that may be followed based on current threat levels, and an APTA member survey.
While some of the directives correlate with information contained in the FTA guidance, such as advocating that rail personnel watch for abandoned parcels, vehicles, and the like, the source for many of the directives is unclear. For example, the source material TSA consulted does not support the requirement that train cabs or compartment doors be kept locked. Furthermore, according to FTA and APTA officials, the sources do not necessarily reflect industry best practices: neither FTA’s list of recommended protective measures nor the practices identified in the APTA survey is necessarily viewed as representing industry best practices. For example, the APTA member survey that TSA used reports rail security practices that are in use by operators but that are not best practices endorsed by the group or other industry stakeholders. TSA officials have stated that they understood the importance of partnering with the rail industry on security matters, and that they would draw on the expertise and knowledge of the transportation industry and other DHS agencies, as well as all stakeholders, in developing security standards for all modes of transportation, including rail. TSA officials held an initial meeting with APTA, AAR, and Amtrak officials to discuss the draft directives prior to their issuance and told them that they would continue to be consulted prior to their final issuance. However, these stakeholders were not given an opportunity to comment on a final draft of the directives before their release because, according to TSA, DHS determined that it was important to release the directives as soon as possible to address a current threat to passenger rail. In addition, TSA stated that because the directives needed to be issued quickly, there was no public comment as part of the rule-making process.
Shortly after the directives were issued, TSA’s Deputy Assistant Administrator for Maritime and Land Security told rail operators at an APTA conference we attended in June 2004 that if TSA determined there was a need for the directives to become permanent, they would undergo a notice-and-comment period as part of the regulatory process. As of July 2005, TSA had not yet determined whether it intends to pursue the rule-making process with a notice-and-comment period. APTA and AAR officials stated that because they were not consulted throughout the development of the directives, the directives did not, in their view, reflect a complete understanding of the passenger rail environment or necessarily incorporate industry best practices. For example, APTA, AAR, and some rail operators raised concerns about the feasibility of installing bomb-resistant trash cans in rail stations because such cans could direct the force of a bomb blast upward, possibly causing structural damage in underground or enclosed stations. DHS’s Office for State and Local Government Coordination and Preparedness recently conducted tests to determine the safety and effectiveness of 13 models of commercially available bomb-resistant trash receptacles. At the time of our review, the results of these tests were not yet available. Amtrak and FRA officials raised concerns about some of the directives as well and told us they questioned whether the requirements reflected industry best practices. For example, before the directives were issued, Amtrak expressed concerns to TSA about the feasibility of the requirement to check the identification of all adult passengers boarding its trains because it did not have enough staff to perform these checks.
However, the final directive included this requirement, and after the directives were released, Amtrak told TSA it could not comply with this requirement “without incurring substantial additional costs and significant detrimental impacts to its operations and revenues.” Amtrak officials told us that since passenger names would not be compared against any criminal or terrorist watch list or database, the benefits of requiring such identification checks were open to debate. To resolve its concern, and as allowed by the directive, Amtrak proposed, and TSA accepted, random identification checks of passengers as an alternative measure. FRA officials further stated that current FRA safety regulations requiring that engineer compartment doors be kept unlocked to facilitate emergency escapes conflict with the security directive requirement that doors equipped with locking mechanisms be kept locked. This requirement was not included in the draft directives provided to stakeholders. TSA did call one commuter rail operator prior to issuing the directives to discuss this proposed measure, and the operator raised a concern about the safety of the locked door requirement. TSA nevertheless included this requirement in the directives. With respect to how the directives were to be enforced, rail operators were required to allow TSA and DHS to perform inspections, evaluations, or tests based on execution of the directives at any time or location. Upon learning of any instance of noncompliance with TSA security measures, rail operators were to immediately initiate corrective action. Monitoring and ensuring compliance with the directives has posed challenges for TSA. In the year after the directives were issued, TSA did not have dedicated field staff to conduct on-site inspections.
When the rail security directives were issued, the former DHS Undersecretary for Border and Transportation Security stated that TSA planned to form security partnership teams with DOT, including FRA rail inspectors, to help ensure that industry stakeholders complied with the directives. These teams were to be established in order to tap into existing capabilities and avoid duplication of effort across agencies. As of July 2005, these teams had not yet been utilized to perform inspections. TSA has, however, hired rail compliance inspectors to, among other things, monitor and enforce compliance with the security directives. As of July 2005, TSA had filled 57 of the up to 100 inspector positions authorized by Congress. However, TSA has not yet established processes or criteria for determining and enforcing compliance, including determining how rail inspectors or DOT partnership teams will be used in this regard. Establishing criteria for monitoring compliance with the directives may be challenging because the language describing the required measures allows for flexibility and does not define parameters. In an effort to acknowledge the variable conditions that existed in passenger rail environments, TSA designed the directives to allow flexibility in implementation through the use of such phrases as “to the extent resources allow,” “to the extent practicable,” and “if available.” The directives also include nonspecific instructions that may be difficult to measure or monitor, telling operators to, for example, perform inspections of key facilities at “regular periodic intervals” or to conduct “frequent inspections” of passenger rail cars. When the directives were issued, TSA stated that it would provide rail operators with performance-based guidance and examples of announcements and signs that could be used to meet the requirements of the directives, including guidance on the appropriate frequency and method for inspecting rail cars and facilities.
However, as of July 2005, this information had not been provided. Industry stakeholders we interviewed raised questions about how they were to comply with the measures contained in the directives and which entities were responsible for implementing the measures. According to an AAR official, in June 2004, AAR officials and rail operators held a conference call with TSA to obtain clarification on these issues. According to AAR officials, in response to an inquiry about what would constitute compliance for some of the measures, the then-TSA Assistant Administrator for Maritime and Land Security told participants that the directives were not intended to be overly prescriptive but were guidelines, and that operators would have the flexibility to implement the directives as they saw fit. The officials also asked for clarification on who was legally responsible for ensuring compliance for measures where assets, such as rail stations, were owned by freight railroads or private real estate companies. According to AAR officials, TSA told them it was the responsibility of the rail operators and asset owners to work together to determine these responsibilities. However, according to AAR and rail operators, given that TSA has hired rail inspectors and indicated its intention to enforce compliance with the directives, it is critical that TSA clarify what compliance entails for measures required by the directives and which entities are responsible for compliance with measures when rail assets are owned by one party but operated by another—such as when private companies that own terminals or stations provide services for commuter rail operations. The challenges TSA has faced in developing security directives as standards that reflect industry best practices—and that can be measured and enforced—stem from the original emergency nature of the directives, which were issued with limited input and review. 
TSA told rail industry stakeholders when the directives were issued 15 months ago that the agency would consider using the federal rule-making process as a means of making the standards permanent. Doing so would require TSA to hold a notice-and-comment period, resulting in a public record that reflects stakeholders’ input on the applicability and feasibility of implementing the directives, along with TSA’s rationale for accepting or rejecting this input. While there is no guarantee that this process would produce more effective security directives, it would be more transparent and could help TSA in developing standards that are most appropriate for the industry and can be measured, monitored, and enforced. In addition to issuing security directives, TSA also sought to enhance passenger rail security by conducting research on technologies related to screening passengers and checked baggage in the passenger rail environment. Beginning in May 2004, TSA conducted a Transit and Rail Inspection Pilot (TRIP) study, in partnership with DOT, Amtrak, the Connecticut Department of Transportation, the Maryland Transit Administration, and the Washington Metropolitan Area Transit Authority (WMATA). TRIP was a $1.5 million, three-phase effort to test the feasibility of using existing and emerging technologies to screen passengers, carry-on items, checked baggage, cargo, and parcels for explosives. Figure 4 summarizes TRIP’s three-phased approach. According to TSA, all three phases of the TRIP program were completed by July 2004. However, TSA has not yet issued a planned report analyzing whether the technologies could be used effectively to screen rail passengers and their baggage. According to TSA officials, a report on results and lessons learned from TRIP is under review by DHS. 
TSA officials told us that based upon preliminary analyses, the screening technologies and processes tested would be very difficult to implement on more heavily used passenger rail systems because these systems carry high volumes of passengers and have multiple points of entry. However, TSA officials stated that the screening processes used in TRIP may be useful on certain long-distance intercity train routes, which make fewer stops. Further, officials stated that screening could be used either randomly or for all passengers during certain high-risk events or in areas where a particular terrorist threat is known to exist. For example, screening technology similar to that used in TRIP was used by TSA to screen certain passengers and belongings in Boston and New York during the Democratic and Republican national conventions, respectively, in 2004. APTA officials and the 28 passenger rail operators we interviewed—none of whom were directly involved in the pilot—agreed with TSA’s preliminary assessment. They told us they believed that the TRIP screening procedures could not work in most passenger rail systems, given the number of passengers using these systems and the open nature (e.g., multiple entry points) of the systems. For example, as one operator noted, over 1,600 people pass through dozens of access points in New York’s Penn Station per minute during a typical rush hour, making screening of all passengers very challenging, if not impossible. Passenger rail operators were also concerned that screening delays could result in passengers opting to use other modes of transportation. APTA officials and some rail operators we interviewed said that had they been consulted by TSA, they would have recommended alternative technologies to explore and indicated that they hoped to be consulted on security technology pilot programs in the future.
FRA officials further stated that TSA could have benefited from earlier and more frequent collaboration with them during the TRIP pilot and could have tapped their expertise to analyze TRIP results and develop the final report. TSA research and development officials told us that the agency has begun to consider and test security technologies other than those used in TRIP, which may be more applicable to the passenger rail environment. For example, TSA and DHS’s Science and Technology Directorate are currently evaluating infrared cameras and electronic metal detectors, among other things. In response to a previous recommendation we made in a June 2003 report on transportation security, DHS and DOT signed a memorandum of understanding (MOU) to develop procedures by which the two departments could improve their cooperation and coordination for promoting the safe, secure, and efficient movement of people and goods throughout the transportation system. The MOU defines broad areas of responsibility for each department. For example, it states that DHS, in consultation with DOT and affected stakeholders, will identify, prioritize, and coordinate the protection of critical infrastructure. The MOU between DHS and DOT represents an overall framework for cooperation that is to be supplemented by additional signed agreements, or annexes, between the departments. These annexes are to delineate the specific security-related roles, responsibilities, resources, and commitments for mass transit, rail, research and development, and other matters. The annex for mass transit security was signed in September 2005. According to DHS and DOT officials, this annex is intended to ensure that the programs and protocols for incorporating stakeholder feedback and making enhancements to security measures are coordinated.
For example, the annex requires that DHS and DOT consult on such matters as regulations and security directives and identifies points of contact for coordinating this consultation. In addition to their work on the MOU and related annexes, DHS and TSA have taken other steps in an attempt to improve collaboration with DOT and industry stakeholders. In April 2005, DHS officials stated that better collaboration with DOT and industry stakeholders was needed to develop strategic security plans associated with various homeland security presidential directives and statutory mandates, such as the Intelligence Reform and Terrorism Prevention Act of 2004, which required DHS to develop a national strategy for transportation security in conjunction with DOT. Responding to the need for better collaboration, DHS established a senior-level steering committee in conjunction with DOT to coordinate development of this national strategy. In addition, senior DHS and TSA officials stated that industry groups will also be involved in developing the national strategy for transportation security and other strategic plans. Moreover, according to TSA’s assistant administrator for intermodal programs, TSA intends to work with APTA and other industry stakeholders in developing security standards for the passenger rail industry. U.S. passenger rail operators have taken numerous actions to secure their rail systems since the September 11, 2001, terrorist attacks in the United States and the March 11, 2004, attacks in Madrid. These actions included both improvements to system operations and capital enhancements to a system’s facilities, such as track, buildings, and train cars. All of the U.S. passenger rail operators we contacted have implemented some types of security measures—such as increased numbers and visibility of security personnel and customer awareness programs—that were generally consistent with those we observed in select countries in Europe and Asia.
We also identified three rail security practices—covert testing, random screening of passengers and their baggage, and centralized research and testing—utilized by foreign operators or their governments that are not currently utilized by domestic rail operators or the U.S. government. All 32 of the U.S. rail operators we interviewed or visited reported taking specific actions to improve the security and safety of their rail systems by, among other things, investing in new security equipment, utilizing more law enforcement personnel, and establishing public awareness campaigns. Passenger rail operators we spoke with cited the 1995 sarin gas attacks on the Tokyo subway system and the September 11 terrorist attacks as catalysts for their security actions. After the attacks, many passenger rail operators used FTA’s security readiness assessments of heavy and passenger rail systems, as well as their own understanding of their systems’ vulnerabilities, to prioritize their security efforts and determine what actions to take to enhance security. Similarly, as previously mentioned, the rail systems that underwent ODP risk assessments are currently using or plan to use these assessments to guide their security actions. In addition, 20 of the 32 U.S. operators we contacted or visited had conducted some type of security assessment internally or through a contractor, separate from the federally funded assessments. For example, some assessments evaluated vulnerabilities of physical assets, such as tunnels and bridges, throughout the passenger rail system. Passenger rail operators stated that their security-related spending was also based, in part, on budgetary considerations, as well as on practices used by other rail operators that were identified through direct contact or during industry association meetings.
Passenger rail operators frequently made capital investments to improve security, and these investments often are not covered by federal funding unless they are part of newly constructed facilities. According to APTA, 54 percent of transit agencies are facing increasing deficits, and no operator covers its expenses with fare revenue; thus, balancing operational and capital improvements with security-related investments has been an ongoing challenge for these operators. Several foreign rail operators we interviewed also stated that funding for security enhancements was limited in light of other funding priorities within the rail system, such as personnel costs and infrastructure and equipment maintenance. Foreign rail operators we visited also told us that risk assessments played an important role in guiding security-related spending for rail. For example, one foreign rail operator with a daily ridership of 2.3 million passengers used a risk management methodology to assess risks, threats, and vulnerabilities in order to guide security spending. The methodology is part of the rail operator’s corporate focus on overall safety and security and is intended to help protect the operator’s various rail systems against, among other things, terrorist attacks, as well as other forms of corporate loss, such as service disruption and loss of business viability. Both U.S. and foreign passenger rail operators we contacted have implemented similar improvements to enhance the security of their systems. A summary of these efforts follows. Customer awareness: Customer awareness programs we observed used signage and announcements to encourage riders to alert train staff if they observed suspicious packages, persons, or behavior. Of the 32 domestic rail operators we interviewed, 30 had implemented a customer awareness program or made enhancements to an existing program. Foreign rail operators we visited also attempt to enhance customer awareness.
For example, 11 of the 13 operators we interviewed had implemented a customer awareness program. Like the programs of U.S. operators, these programs used signage, announcements, and brochures to inform passengers and employees about the need to remain vigilant and report any suspicious activities. Only one of the European passenger rail operators that we interviewed has not implemented a customer security awareness program, citing the fear or panic that it might cause among the public. Increased number and visibility of security personnel: Of the 32 U.S. rail operators we interviewed, 23 had increased the number of security personnel providing security throughout their systems since September 11 or had taken steps to increase the visibility of their security personnel. Many operators stated that increasing the visibility of security personnel was as important as increasing their number. For example, several U.S. and foreign rail operators we spoke with had instituted policies such as requiring their security staff to wear brightly colored vests and to patrol trains or stations more frequently, making them more visible to customers and to potential terrorists or criminals. These policies make it easier for customers to contact security personnel in the event of an emergency, or if they have spotted a suspicious item or person. At foreign sites we visited, 10 of the 13 operators had increased the number of their security officers throughout their systems in recent years because of the perceived increase in risk of a terrorist attack. Increased use of canine teams: Of the 32 U.S. passenger rail operators we contacted, 21 had begun to use canine units, which include both dogs and human handlers, to patrol their facilities or trains or had increased their existing utilization of such teams. Often, these units are used to detect the presence of explosives, and may be called in when a suspicious package is detected.
Some operators that did not maintain their own canine units stated that it was prohibitively expensive to do so and that they could call in local police canine units if necessary. In foreign countries we visited, passenger rail operators’ use of canines varied. In some Asian countries, canines were not culturally accepted by the public and thus were not used for rail security purposes. As in the United States, and in contrast to Asia, most European passenger rail operators used canines for explosive detection or as deterrents. Employee training: All of the domestic and foreign rail operators we interviewed had provided some type of security training to their staff, either through in-house personnel or an external provider. In many cases, this training consisted of ways to identify suspicious items and persons and how to respond to events once they occur. For example, the London Underground and the British Transport Police developed the “HOT” method for their employees to identify suspicious items in the rail system. In the HOT method, employees are trained to look for packages or items that are Hidden, Obviously suspicious, and not Typical of the environment. Items that do not meet these criteria would likely receive a lower security response than an item meeting all of the criteria. However, if items meet all of these criteria, employees are to notify station managers, who would call in the authorities and potentially shut down the station or take other action. According to London Underground officials, the HOT method has significantly reduced the number of system disruptions caused when a suspicious item was identified. Several passenger rail operators in the United States and abroad have trained their employees in the HOT method. Several domestic operators had also trained their employees in how to respond to terrorist attacks and provided them with wallet-size cards highlighting actions they should take in response to various forms of attack.
It is important to note that training such as the HOT method is not designed to prevent acts of terrorism like the July 2005 London attacks, where suicide bombers killed themselves rather than leaving bombs behind. Passenger and baggage screening practices: Some domestic and foreign rail operators have trained employees to recognize suspicious behavior as a means of screening passengers. Eight U.S. passenger rail operators we contacted were utilizing some form of behavioral screening. For example, the Massachusetts Bay Transportation Authority (MBTA), which operates Boston’s T system, has utilized a behavioral screening system to identify passengers exhibiting suspicious behavior. The Massachusetts State Police train all MBTA personnel to be on the lookout for behavior that may indicate someone has criminal intent, and to approach and search such persons and their baggage when appropriate. Massachusetts State Police officers have been training rail operators on this behavior profiling system, and WMATA and New Jersey Transit were among the first additional operators to implement the system. According to MBTA personnel, several other operators have expressed interest in this system. Abroad, we found that 4 of 13 operators we interviewed had implemented forms of behavioral screening similar to MBTA’s system. All of the domestic and foreign rail operators we contacted have ruled out an airport-style screening system for daily use in heavy traffic, where each passenger and the passenger’s baggage are screened by a magnetometer or X-ray machine, based on cost, staffing, and customer convenience factors, among others. 
For example, although the Spanish National Railway screens passenger baggage using an X-ray machine on certain long-distance trains that it believes could be at risk, all of the operators we contacted stated that the cost, staffing requirements, delay of service, and inconvenience to passengers would make such a system unworkable in highly trafficked, inherently open systems like U.S. and foreign passenger rail operations. In addition, one Asian rail official stated that his organization was developing a contingency plan for implementing an airport-style screening system, but that such a system would be used only in the event of intelligence information indicating suicide bomb attacks were imminent, or if several attacks had already occurred during a short period of time. According to this official, the plan was in the initial stages of development, and the organization did not know how quickly such a system could be implemented. Upgrading technology: Many rail operators we interviewed had embarked on programs designed to upgrade their existing security technology. For example, we found that 29 of the 32 U.S. operators had implemented a form of closed-circuit television (CCTV) surveillance to monitor their stations, yards, or trains. While these cameras cannot be monitored closely at all times because of the large number of staff that operators said continuous monitoring would require, many rail operators felt the cameras acted as a deterrent, assisted security personnel in determining how to respond to incidents that had already occurred, and could be monitored if an operator received information that an incident might occur at a certain time or place in its system. One rail operator, New Jersey Transit, had installed “smart” cameras, which were programmed to alert security personnel when suspicious activity occurred, such as if a passenger left a bag in a certain location or if a boat were to dock under a bridge.
According to New Jersey Transit officials, this technology was relatively inexpensive and not difficult to implement. Several other operators stated they were interested in exploring this technology. Abroad, all 13 of the foreign rail operators we visited had CCTV systems in place. As in the United States, foreign rail operators use these cameras primarily as a crime deterrent and to respond to incidents after they occur, because they do not have enough staff to continuously monitor all of these cameras. In addition, 18 of the 32 U.S. rail operators we interviewed had installed new emergency phones or enhanced the visibility of the intercom systems they already had. Passengers can use these systems to contact train operators or security personnel to report suspicious activity, crimes in progress, or other problems. Furthermore, while most rail operators we spoke with had not installed chemical or biological agent detection equipment because of the costs involved, a few operators had this equipment or were exploring purchasing it. For example, WMATA, in Washington, D.C., has installed these sensors in some of its stations through a program jointly sponsored by DOT and the Department of Energy, which provided this equipment to WMATA because of the high perceived likelihood of an attack in Washington, D.C. In addition, at least three other domestic rail operators we spoke with are exploring the possibility of partnering with federal agencies to install such equipment in their facilities on an experimental basis. As in the United States, a few foreign operators had implemented chemical or biological detection devices in their rail stations, but their use was not widespread. Two of the 13 foreign operators we interviewed had implemented these sensors, and both were doing so on an experimental basis.
In addition, police officers from the British Transport Police—responsible for policing the rail system in the United Kingdom—were equipped with pagers to detect chemical, biological, or radiological elements in the air, allowing them to respond quickly in case of a terrorist attack using one of these methods. The British Transport Police also has three vehicles carrying devices to determine if unattended baggage contains explosives—these vehicles patrol the system 24 hours per day. Access control: Tightening access procedures at key facilities or rights-of-way is another way many rail operators have attempted to enhance security. A majority of domestic and selected foreign passenger rail operators had invested in enhanced systems to control unauthorized access at employee facilities and stations. Specifically, 23 of the 32 U.S. operators had installed a form of access control at key facilities and stations. This often involved installing a system where employees had to swipe an access card to gain access to control rooms, repair facilities, and other key locations. All 13 foreign operators had implemented some form of access control to their critical facilities or rights-of-way. These measures varied from simple alarms on doors at electrical substations on one subway system we visited to infrared sensors monitoring every inch of right-of-way along the track on three of the high-speed interurban rail systems. Rail system design and configuration: In an effort to reduce vulnerabilities to terrorist attack and increase overall security, passenger rail operators in the United States and abroad have incorporated, or are now beginning to incorporate, security features into the design of new and existing rail infrastructure, primarily rail stations.
For example, of the 32 domestic rail operators we contacted, 22 had removed their conventional trash bins entirely or replaced them with transparent or bomb-resistant trash bins, as TSA instructed in its May 2004 security directives. Foreign rail operators had taken similar steps to remove traditional trash bins from their systems. Of the 13 operators we visited, 8 had either removed their trash bins entirely or replaced them with blast-resistant cans or transparent receptacles. Many foreign rail operators are also incorporating aspects of security into the design of their rail infrastructure. Of the 13 operators we visited, 11 have attempted to design new facilities with security in mind and have attempted to retrofit older facilities to incorporate security-related modifications. For example, one foreign operator we visited is retrofitting its train cars with windows that passengers could open in the event of a chemical attack. In addition, the London Underground, one of the oldest rail systems in the world, incorporates security into the design of all its new stations as well as when existing stations are modified. We observed several security features in the design of Underground stations, such as vending machines with no openings in which a bomb could be hidden and with sloped tops to reduce the likelihood that a bomb could be placed on top of the machine. In addition, stations are designed to provide staff with clear lines of sight to all areas of the station, such as underneath benches or ticket machines, and station designers try to eliminate or restrict access to any recessed areas where a bomb could be hidden. In one London station, we observed the use of netting throughout the station to help prevent objects, such as bombs, from being placed in a recessed area, such as beneath a stairwell or escalator. 
In this station and other stations we visited, Underground officials have installed “help posts” at which customers can call for help if an incident occurs. When these posts are activated, CCTV cameras display a video image of the help post and surrounding area to staff at a central command center. This allows the staff to directly observe the situation and respond appropriately. See figure 5 for a photograph of a help post. Underground officials stated that the incorporation of security features in station design is an effective measure in deterring some terrorists from attacking the system. For example, officials told us that CCTV video recorded Irish Republican Army terrorists attempting to place an explosive device inside a station—and when they could not find a suitable location to hide the device, they placed it outside in a trash can instead, thereby mitigating the impact of the explosion. In the United States, several passenger rail operators stated that they were taking security into account when designing new facilities or remodeling older ones. Twenty-two of 32 rail operators we interviewed told us that they were incorporating security into the design of new or existing rail infrastructure. For example, New York City Transit and PATH officials told us they are incorporating security into the design of their new stations, including the redesigned Fulton Street station and the World Trade Center Hub that were damaged or destroyed during the September 11 attacks. In addition, in June 2005, FTA issued guidelines for use by the transit industry encouraging the incorporation of particular security features into the design of transit infrastructure. These guidelines include, for example, increasing visibility for onboard staff, reducing the areas where someone could hide an explosive device on a transit vehicle, and enhancing emergency exits in transit stations. 
Figure 6 shows a diagram of several security measures that we observed in passenger rail stations both in the United States and abroad. It should be noted that this represents an amalgam of stations we visited, not any particular station. In securing its extensive system, Amtrak faces its own set of security-related challenges, some of which are different from those facing a commuter rail or transit operator. First, Amtrak operates over thousands of miles, often far from large population centers. This makes its route system much more difficult to patrol and monitor than one contained in a particular metropolitan region, and it causes delays in responding to incidents when they occur in remote areas. Also, outside the Northeast Corridor, Amtrak operates almost exclusively on tracks owned by freight rail companies. Amtrak also utilizes stations owned by freight rail companies, transit and commuter rail authorities, private corporations, and municipal governments. This means that Amtrak often cannot unilaterally make security improvements to others’ rights-of-way or station facilities and that it is reliant on the staff of other organizations to patrol their facilities and respond to incidents that may occur. Furthermore, with over 500 stations, only half of which are staffed, screening even a small portion of the passengers and baggage boarding Amtrak trains is difficult. Last, Amtrak’s financial condition has never been strong—Amtrak has been on the edge of bankruptcy several times. Amid the ongoing challenges of securing its coast-to-coast railway, Amtrak has taken some actions to enhance security throughout its intercity passenger rail system. For example, Amtrak has initiated a passenger awareness campaign, similar to those described elsewhere in this report. Also, Amtrak has begun enforcing existing restrictions on carry-on luggage that limit passengers to two carry-on bags, not exceeding 50 pounds. 
All bags also must have identification tags on them. Furthermore, Amtrak has begun requiring passengers to show positive identification after boarding trains when asked by staff to ensure that tickets have not been transferred or stolen, although Amtrak officials acknowledge their onboard staffs only sporadically enforce this requirement because of the numerous tasks these staff members must perform before a train departs. However, in November 2004, Amtrak implemented the Tactical Intensive Patrols (TIPS) program, under which its security staff flood selected platforms to ensure Amtrak baggage and identification requirements are met by passengers boarding trains. In addition, Amtrak increased the number of canine units patrolling its system, most of which are located in the Northeast Corridor, looking for explosives or narcotics and assigned some of its police to ride trains in the Northeast Corridor. Also, Amtrak has instituted a policy of randomly inspecting checked luggage on its trains. Finally, Amtrak is making improvements to the emergency exits in certain tunnels to make evacuating trains in the tunnels easier in the event of a crash or terrorist attack. To ensure that security measures are applied consistently throughout Amtrak’s system, Amtrak has established a series of Security Coordinating Committees, which include representatives of all Amtrak departments. These committees are to review and establish security policies, in coordination with Amtrak’s police department, and have worked to develop countermeasures to specific threats. According to Amtrak, in the aftermath of the July 2005 London bombings, these committees met with Amtrak police and security staff to ensure additional security measures were implemented. Also in the wake of the London attacks, Amtrak began working with the police forces of several large east coast cities, allowing them to patrol Amtrak stations to provide extra security. 
In addition, all Amtrak employees now receive a “Daily Security Awareness Tip” and are receiving computer-based security training. Amtrak police officers are also now receiving specialized counterterrorism training. While Amtrak has taken the actions outlined above, it is difficult to determine if these actions appropriately or sufficiently addressed pressing security needs. As discussed earlier, Amtrak has not performed a comprehensive terrorism risk assessment that would provide an empirical baseline for investment prioritization and decision making for Amtrak’s security policies and investment plans. However, as part of the 2005 Intercity Passenger Rail Grant Program, Amtrak is required to produce a security and emergency preparedness plan, which is to include a risk assessment that Amtrak currently expects to finish by December 31, 2005. Upon completing this plan, Amtrak management should have a more informed basis regarding which security enhancements should receive the highest priority for implementation. While many of the security practices we observed in foreign rail systems are similar to those U.S. passenger rail operators are implementing, we encountered three practices in other countries that were not currently in use among the domestic passenger rail operators we contacted as of June 2005, nor were they performed by the U.S. government. These practices are discussed below. Covert testing: Two of the 13 foreign rail systems we visited utilize covert testing to keep employees alert about their security responsibilities. Covert testing involves security staff staging unannounced events to test the response of railroad staff to incidents such as suspicious packages or setting off alarms. In one European system, this covert testing involves security staff placing suspicious items throughout their system to see how long it takes operating staff to respond to the item. 
Similarly, one Asian rail operator’s security staff will break security seals on fire extinguishers and open alarmed emergency doors randomly to see how long it takes staff to respond. Officials of these operators stated that these tests are carried out on a daily basis and are beneficial because their staff know they could be tested at any moment, and they, therefore, are more likely to be vigilant with respect to security. Random screening: Of the 13 foreign operators we interviewed, 2 have some form of random screening of passengers and their baggage in place. In the systems where this is in place, security personnel can approach passengers either in stations or on the trains and ask them to submit their persons or their baggage to a search. Passengers declining to cooperate must leave the system. For example, in Singapore, rail agency officials rotate the stations where they conduct random searches so that the searches are carried out at a different station each day. Prior to the July 2005 London bombings, no passenger rail operators in the United States were practicing a form of random passenger or baggage screening on a continuing daily basis. However, during the Democratic National Convention in 2004, MBTA instituted a system of random screening of passengers, where every 11th passenger at certain stations and times of the day was asked to provide his or her bags to be screened. Those who refused were not allowed to ride the system. MBTA officials recognized that it is impossible to implement such a system comprehensively throughout the rail network without massive amounts of additional staff, and that even doing random screening on a regular basis would be a drain on resources. However, officials stated that such a system is workable during special events and times of heightened security but would have to be designed very carefully to ensure that passengers’ civil liberties were not violated. 
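MBTA's every-11th-passenger rule described above is a form of systematic sampling: counting entrants at a station and selecting at a fixed interval. The following sketch is purely illustrative; the function name and the `interval` parameter are ours, not MBTA's actual procedure.

```python
def select_for_screening(passengers, interval=11):
    """Systematic sampling as in MBTA's convention-period screening:
    every Nth passenger entering a station is asked to present bags.
    Illustrative only; the names and the interval parameter are ours.
    """
    return [p for i, p in enumerate(passengers, start=1) if i % interval == 0]

riders = [f"passenger-{i}" for i in range(1, 25)]
print(select_for_screening(riders))  # ['passenger-11', 'passenger-22']
```

One design consequence of a fixed interval, as opposed to truly random selection, is that screening is predictable once the interval is known, which is one reason MBTA rotated the stations and times at which screening occurred.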
After the July 2005 London bombings, four passenger rail operators—PATH, New York Metropolitan Transportation Authority, New Jersey Transit, and Utah Transit Authority in Salt Lake City—implemented limited forms of random bag screening in their systems. In addition, APTA, FTA, and the National Academy of Sciences’ Transportation Research Board are currently conducting a study on the benefits and challenges that passenger rail operators would face in implementing a randomized passenger screening system. The study is examining such issues as the legal basis for conducting passenger screening or searches, the precedent for such measures in the transportation environment, the human resources required, and the financial implications and cost considerations involved. National government maintains clearinghouse on technologies and best practices: According to passenger rail operators in five countries we visited, their national governments have centralized the process for performing research and developing passenger rail security technologies and maintaining a clearinghouse on these technologies and security best practices. According to these officials, this allows rail operators to have one central source for information on the merits of a particular passenger rail security technology, such as chemical sensors, CCTVs, and intrusion detection devices. Some U.S. rail operators we interviewed expressed interest in a more active centralized federal research and development authority in the United States to evaluate and certify passenger rail security technologies and make that information available to rail operators. Although TSA is the primary federal agency responsible for conducting transportation security research and development, and has conducted the TRIP as previously mentioned, most of the agency’s research and development efforts to date have focused on aviation security technologies. 
As a result, domestic rail operators told us that they rely on consultations with industry trade associations, such as APTA, to learn about best practices for passenger rail security technologies and related investments. Several rail operators stated that they were often unsure of where to turn when seeking information on security-related products, such as CCTV cameras or intrusion detection systems. Currently, many operators said they informally ask other rail operators about their experiences with a certain technology, perform their own research via the Internet or trade publications, or perform their own testing. No federal agency has compiled or disseminated best practices to rail operators to aid in this process. We have previously reported that stakeholders have stated that the federal government should play a greater role in testing transportation security technology and making this information available to industry stakeholders. TSA and DOT agree that making the results of research testing available to industry stakeholders could be a valuable use of federal resources by reducing the need for multiple rail operators to perform the same research and development efforts, but they have not taken action to address this. Implementing these three practices—covert testing, random screening, and a government-sponsored clearinghouse for technologies and best practices—in the United States could pose political, legal, fiscal, and cultural challenges because of the differences between the United States and these foreign nations. For instance, many foreign nations have dealt with terrorist attacks on their public transportation systems for decades, compared with the United States, where rail transportation has not been specifically targeted during terrorist attacks. According to foreign rail operators, these experiences have resulted in greater acceptance of certain security practices, such as random searches, which the U.S. 
public may view as a violation of their civil liberties or which may discourage them from using public transportation. The impact of security measures on passengers is an important consideration for domestic rail transit operators, since most passengers could choose another means of transportation, such as a personal automobile. As such, security measures that limit accessibility, cause delays, increase fares, or otherwise cause inconvenience could push people away from transit and into their cars. In contrast, the citizens of the European and Asian countries we visited are more dependent on public transportation than most U.S. residents and therefore, according to the rail operators we spoke with, may be more willing to accept more intrusive security measures, simply because they have no other choice for getting from place to place. Nevertheless, in order to identify innovative security measures that could help further mitigate terrorism-related risk to rail assets—especially as part of a broader risk management approach discussed earlier—it is important to at least assess the feasibility, costs, and benefits of implementing in the United States the three rail security practices we identified in foreign countries. Officials from DHS, DOT, passenger rail industry associations, and rail systems we interviewed told us that operators would benefit from such an evaluation. Furthermore, the passenger rail association officials told us that such an evaluation should include practices used by foreign rail operators that integrate security into infrastructure design. Differences in the business models and financial status of some foreign rail operators could also affect the feasibility of adopting certain security practices in the United States. Several foreign countries we visited have privatized their passenger rail operations. 
Although most of the foreign rail operators we visited—even the privatized systems—rely on their governments for some type of financial assistance, two foreign rail operators generated significant revenue and profits in other business endeavors, which they said allowed them to invest heavily in security measures for their rail systems. In particular, the Paris Metro system is operated by the RATP Corporation (Régie Autonome des Transports Parisiens), which also contracts with other cities in France and throughout the world to provide consulting and project management services. According to its officials, RATP’s ability to make a profit through its consulting services allows the agency to supplement government funding in order to support expensive security measures for the Paris mass transit system. For example, RATP recently installed a computer-assisted security control system that uses CCTV, radio, and global positioning technology that it says has significantly reduced the amount of time it takes for security or emergency personnel to respond to an incident or emergency, such as a terrorist attack. Because of RATP’s available funding for security, the corporation also purchased an identical system for the Metropolitan Paris Police, so the RATP system and the police system would be compatible. In contrast, domestic rail operators do not generate a profit and therefore are dependent on financial assistance from the federal, state, and local levels of government to maintain and enhance services, including funding security improvements. Another important difference between domestic and foreign rail operators is the structure of their police forces. In particular, England, France, Belgium, and Spain all have national police forces patrolling rail systems in these countries. 
The use of a national police force reflects the fact that these foreign countries often have one nationalized rail system, rather than the more than 30 rail transit systems owned and operated by numerous state and local governments, as is the case in the United States. For example, in France, the French National Railway operates all intercity passenger rail services in the country and utilizes the French railway police to provide security. According to foreign rail operators, the use of one national rail police force allows for consistent policing and security measures throughout the country. In the United States, in contrast, there is no national police force for the rail transit systems. Rather, some transit agencies maintain individual police forces, while others rely on their city or county police forces for security. In conclusion, Mr. Chairman, we are encouraged by the steps DHS components have taken to use elements of a risk management approach to guide critical infrastructure protection decisions for the passenger rail industry. However, enhanced federal leadership is needed to help ensure that actions and investments designed to enhance security are properly focused and prioritized, so that finite resources may be allocated appropriately to help protect all modes of transportation and secure other national critical infrastructure sectors. Leadership on this issue should reflect the shared responsibilities required to coordinate actions on the part of federal, state, and local governments; the private sector; and rail passengers who ride these systems. Specifically, both DHS and TSA could take additional steps to help ensure that the risk management efforts under way clearly and effectively identify priority areas for security-related investments in rail and other sectors. We recognize that TSA has had many aviation security-related responsibilities and has implemented many security initiatives to meet legislative requirements. 
Nevertheless, TSA has not yet completed its methodology for determining how the results of threat, criticality, and vulnerability assessments will be used to identify and prioritize risks to passenger rail and other transportation sectors. In order to complete and apply its methodology as part of the forthcoming transportation sector-specific plan, TSA needs to more consistently involve industry stakeholders in the overall risk assessment process and collaborate with them on collecting and analyzing information on critical infrastructure and key resources in the passenger rail industry. Without consistent and substantive stakeholder input, TSA may not be able to fully capture critical information on rail assets—information that is needed to properly assess risk. In addition, as part of the process to complete its risk assessment methodology, TSA needs to consider whether other proven approaches, such as ODP’s risk assessment methodology, could be leveraged for rail and other transportation modes, such as aviation. Until the overall risk to the entire transportation sector is identified, TSA will not be able to fully benefit from the outcome of risk management analysis—including determining where and how to target the nation’s limited resources to achieve the greatest security gains. Once risk assessments for the passenger rail industry have been completed, it will be critical to be able to compare assessment results across all transportation modes as well as other critical sectors and make informed, risk-based investment trade-offs. The framework that DHS is developing to help ensure that risks to all sectors can be analyzed and compared in a consistent way needs to be completed and shared with TSA and other sector-specific agencies. 
The delay in completing the element of the framework that defines concepts, terminology, and metrics for assessing risk limits DHS’s ability to compare risk across sectors, as sector-specific agencies are concurrently conducting risk assessment activities without this guidance. Until this framework is complete, it will not be possible for information from different sectors to be reconciled to allow for a meaningful comparison of risk—a goal outlined in DHS’s interim NIPP. Apart from its efforts to formally identify risks, TSA has taken steps to enhance the security of the overall passenger rail system. The issuance of security directives in the wake of the Madrid bombings was a well-intentioned effort to take swift action in response to a current threat. However, because these directives were issued under emergency circumstances, with limited input and review by rail industry and federal stakeholders—and no public comment period—they may not provide the industry with baseline security standards based on industry best practices. Nor is it clear how compliance with these directives is to be measured and enforced. Consequently, neither the federal government nor rail operators can be sure they are requiring and implementing security practices proven to help prevent or mitigate disasters. Collaborating with rail industry stakeholders to develop security standards is an important starting point for strengthening the security of passenger rail systems. While foreign passenger rail operators face similar challenges to securing their systems and have generally implemented similar security practices as U.S. rail operators, there are some practices utilized abroad that U.S. rail operators and the federal government have not studied in terms of feasibility, costs, and benefits. 
For example, an information clearinghouse for new passenger rail technologies that are available and have been tested might allow rail operators to efficiently implement technologies that had already received approval. In addition, while FTA plans to require rail operators to consider its security infrastructure design guidelines when renovating or constructing rail systems or facilities, opportunities may still exist to further research and evaluate ways of integrating security into design, as some foreign rail operators have done. Another rail security practice—covert testing of rail security procedures—is being used in two foreign rail systems we visited and is considered by them to be an effective means of keeping rail employees alert to their surroundings and potential security threats. And finally, random searches of passengers and baggage are being used by two foreign rail operators, and this practice has recently been adopted by four domestic rail operators in the wake of the London attacks. Introducing these security practices into the United States may involve cultural, financial, and political challenges, owing to differences between the United States and foreign nations. Nonetheless, as part of the overall risk management approach, there may be compelling reasons for exploring the feasibility, costs, and benefits of implementing any of these practices in the United States. Doing so could enable the United States to leverage the experiences and knowledge of foreign passenger rail operators and help identify additional innovative measures to secure rail systems against terrorist attack in this country. 
In our recently issued report on passenger rail security, we recommended, among other things, that to help ensure that the federal government has the information it needs to prioritize passenger rail assets based on risk, and in order to evaluate, select, and implement commensurate measures to help the nation’s passenger rail operators protect their systems against acts of terrorism, TSA should establish a plan with timelines for completing its methodology for conducting risk assessments and develop security standards that reflect industry best practices and can be measured and enforced, by using the federal rule-making process. In addition, we recommended that the Secretary of DHS, in collaboration with DOT and the passenger rail industry, determine the feasibility, in a risk management context, of implementing certain security practices used by foreign rail operators. DHS, DOT, and Amtrak generally agreed with the report’s recommendations. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have at this time. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512- 3404 or JayEtta Z. Hecker at (202) 512-2834. Individuals making key contributions to this testimony include Seto Bagdoyan, Amy Bernstein, Leo Barbour, Christopher Currie, Nikki Clowers, David Hooper, Kirk Kiester, and Ray Sendejas. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The July 2005 bombing attacks on London's subway system dramatically highlighted the vulnerability of passenger rail systems worldwide to terrorist attacks, and the need for an increased focus on security for these systems. This testimony provides information on how the Department of Homeland Security (DHS), including the Transportation Security Administration (TSA) and the Office for Domestic Preparedness (ODP), has assessed risks posed by terrorism to the U.S. passenger rail system using risk management principles; actions federal agencies have taken to enhance the security of U.S. rail systems; and rail security practices implemented by domestic and selected foreign passenger rail operators and differences among these practices. Within DHS, ODP has completed numerous risk assessments of passenger rail systems around the country, and TSA has begun to conduct risk assessments as well as establish a methodology for determining how to analyze and characterize risks that have been identified. Until TSA completes these efforts, however, the agency will not be able to prioritize passenger rail assets and help guide security investment decisions. At the department level, DHS has begun developing, but has not yet completed, a framework to help agencies and the private sector develop a consistent approach for analyzing and comparing risks to transportation and other sectors. Until this framework is finalized and shared with stakeholders, it may not be possible to compare risks across different sectors, prioritize them, and allocate resources accordingly. 
In addition to the ongoing initiatives to enhance passenger rail security conducted by the Department of Transportation's (DOT) Federal Transit Administration and Federal Railroad Administration, such as providing security training to passenger rail operators, TSA issued emergency security directives in 2004 to domestic rail operators after terrorist attacks on the rail system in Madrid and piloted a test of explosive detection technology for use in passenger rail systems. However, federal and rail industry officials raised questions about the feasibility of implementing and complying with the security directives, citing limited opportunities to collaborate with TSA to ensure that industry best practices were incorporated. Domestic and foreign passenger rail operators we contacted have taken a range of actions to help secure their systems. Most, for example, had implemented customer awareness programs to encourage passengers to report suspicious activities, increased the number and visibility of their security personnel, upgraded security technology, and improved rail system design to enhance security. We also observed security practices among certain foreign passenger rail systems or their governments not currently used by the domestic rail operators we contacted, or by the U.S. government, which could be considered for use in the United States. For example, some foreign rail operators randomly screen passengers or utilize covert testing to help keep employees alert to security threats, and some foreign governments maintain centralized clearinghouses on rail security technologies. While introducing any of these security practices into the U.S. rail system may pose political, legal, fiscal, and cultural challenges, they may nevertheless warrant further examination.
The Postal Service’s letter mail automation program was designed to increase productivity, reduce postal costs, and provide postal customers with more consistent delivery service. The program relies on optical character readers and barcode sorters to automate the mechanized and manual sorting of letter mail, and curb the Service’s costs by reducing the number of workhours clerks and letter carriers would need to sort letters. In 1980, the Service’s Board of Governors approved the initial procurement of this equipment, which became operational in 1982 and began the $4.4 billion automation program. These early optical character readers (1) read the last line of the address; (2) verified the city, state, and 5-digit ZIP code against a computer address directory; (3) printed a corresponding barcode on the envelope; and (4) did an initial sort. The companion barcode sorters read the applied 5-digit barcode, enabling the equipment to automatically sort letters to the post offices that were to make delivery. In 1983, the Service introduced the 9-digit, or ZIP+4 code, which enabled the equipment to automatically sort letters not only to the post offices but also to sort down to the carrier routes, post office boxes, buildings, or large business firms. While the 5-digit ZIP code with automation reduced mail processing costs, the 9-digit code further reduced these costs and lowered the number of missorted letters, which improved the consistency of delivery service. During 1987 and 1988, the Service took three key actions regarding letter mail automation. First, the Service began deploying a newer generation of optical character readers that could read and interpret multiple lines of address information and did not need the 9-digit code to print the barcode on the envelope. Second, the Service implemented its first rate incentive to encourage business customers to apply barcodes and improve both the address accuracy and print quality of their letter mail. 
Third, the Service developed its initial Corporate Automation Plan, which spelled out the letter mail automation goals and strategies for achieving them. The primary goal was to barcode virtually all letter mail by the end of 1995, which was to result in substantial savings. To achieve this goal, the Service’s strategy was that mailers, encouraged by rate incentives, would barcode about 40 percent of the letter mail. The Service would barcode the remainder using its optical character readers and remote barcoding systems, which it began deploying in 1992. Remote barcoding systems provide the Service a means of barcoding letter mail containing addresses that its optical character readers cannot read and barcode because the addresses are handwritten, poorly printed, or have other readability problems. These systems entail making electronic images of these addresses. The images are initially processed by a remote computer-reading device, which attempts to read the addresses and barcode the corresponding letters. Those images that cannot be read are electronically transmitted to off-site locations, where operators read and key in enough address information from the images to allow the equipment to barcode the letters. After the Service had developed the capability to automatically sort letters down to the carrier route level, it began studying the feasibility of automating carriers’ manual sequencing of letters into delivery order. In the office, carriers received their letter mail in random order each morning that mail was delivered, manually sequenced this mail by inserting each letter into the appropriate pigeonhole of the letter case, removed the mail from the case, and bundled it for delivery. The Service reported that continued mail volume growth had increased the average carrier’s in-office time from about 2 to 3 hours in 1978 to about 4 hours in 1988.
As a result, the time that carriers spent on the street decreased, and the average number of delivery points per route decreased from 520 to 470. Because of these factors, more carriers were needed to deliver the daily mail volume. The Service developed delivery sequencing using an 11-digit barcode that must be applied to letter mail before delivery sequencing will work. The 11-digit barcode combines the 9-digit ZIP code with the last two digits of the street address number, which enables barcode sorters to automatically sequence letters into the order in which carriers deliver them. Both mailers and the Service can apply the 11-digit barcode. The Service estimated that delivery sequencing would reduce the average time carriers spend in the office preparing mail for delivery by about 80 minutes per day, based on standard letter sorting rates and mail volume. This reduction was expected to allow a commensurate increase in the time carriers spend on the street and in the number of delivery points per route. Two types of barcode sorters are used to delivery sequence letters. The Delivery Bar Code Sorter is the larger of the two machines and is deployed primarily in mail processing plants. Two clerks operate the larger machine, which is designed to delivery sequence multiple routes at the same time and process 25,000 letters per hour. The Carrier Sequence Bar Code Sorter is the smaller machine and is deployed in delivery units that meet certain minimum floor space and letter mail volume requirements. The smaller machine requires one clerk to operate and is designed to delivery sequence one route at a time and process over 19,000 letters per hour. After letters are delivery sequenced, city carriers are to take them to the street without manually preparing them for delivery within the office. However, not all letters can be delivery sequenced. As a result, carriers receive and must manually sequence the portion of their letters that were not delivery sequenced.
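The construction of the 11-digit code described above can be sketched in a few lines. This is only an illustration; the function name and sample values are hypothetical, and the physical barcode symbology is omitted:

```python
def delivery_point_code(zip_plus_4: str, street_number: str) -> str:
    """Combine a 9-digit ZIP+4 with the last two digits of the street
    address number to form the 11-digit delivery point code.
    Illustrative only; real delivery point encoding has more rules."""
    nine_digits = zip_plus_4.replace("-", "")
    if len(nine_digits) != 9 or not nine_digits.isdigit():
        raise ValueError("expected a 9-digit ZIP+4")
    # Take the last two digits of the street number, zero-padded.
    return nine_digits + street_number[-2:].zfill(2)

print(delivery_point_code("12345-6789", "441"))  # -> 12345678941
```

Sorting letters by this code places them in the order in which a carrier walks the route, which is what lets barcode sorters delivery sequence the mail automatically.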
The Service expects that there will always be letters that cannot be delivery sequenced because these letters (1) have characteristics, such as size and shape, that are incompatible with automation equipment; (2) have addresses or barcodes that are incorrect; or (3) originate in, or are destined for, areas with insufficient mail volume to justify investment in automated processing equipment. In addition, carriers receive and must manually sequence flats (large envelopes, magazines, and catalogs), which accounted for about 30 percent of total mail volume in fiscal year 1997. In fiscal year 1997, the Service reported processing about 191 billion pieces of mail, including about 131 billion letters. The Service also reported that its automation equipment sorted about 76.5 billion, or 58 percent, of these letters. In addition, delivery points have grown at the rate of about 1 percent per year; and in fiscal year 1997, the Service delivered mail to 128 million addresses. To determine the Service’s DPS goals and status of its implementation, we analyzed our prior reports and Postal Inspection Service audit reports on DPS. We reviewed the Service’s Decision Analysis Reports, which supported acquisition of DPS related automation equipment and projected automation savings, the Service’s DPS guidance and training materials, and six 1992 Joint Memoranda of Understanding on DPS published by the Service and NALC. We reviewed the Service’s 1990, 1992, and 1996 Corporate Automation Plans, which describe the activities, benchmarks, goals, and associated time frames necessary to complete the Service’s automation program and achieve projected savings. In addition, we interviewed Postal Service headquarters delivery and operations support officials, who are responsible for the overall implementation and management of DPS. We reviewed the Service’s DPS tracking data on DPS implementation, such as number of delivery units and routes that receive DPS letters. 
We reviewed the Service’s national data on delivery workhours, volume, city and rural carrier routes, and productivity from fiscal year 1993 through 1997. With these data, we compared DPS performance with the Service’s benchmarks and analyzed performance indicators to report trends in workhours, number of deliveries, letter mail volume, and number of carrier routes. However, we did not verify the accuracy of these data. To identify any remaining issues that may affect the Service’s ability to achieve its DPS goals, we reviewed and analyzed the Service’s 1996 Plan, which highlights ongoing and planned actions necessary to meet the 1998 DPS goals. We interviewed Postal Service headquarters officials with lead responsibility for completing ongoing and planned DPS-related tasks. We also did some preliminary work, which indicated that the Service was experiencing labor-management relations problems over DPS implementation. On the basis of that work and our knowledge of persistent labor-management relations problems in the Service from our past work, we interviewed national representatives of the Service’s four major labor unions and three management associations to identify whether they were aware of any labor-management relations issues that might affect the Service’s achieving its DPS goals. To observe any issues that the Service and its unions and management associations identified, we selected a judgmental sample of 3 districts and 6 delivery units in 3 of the 11 Postal Areas, which included Capital Metro Operations. Among other considerations, we selected (1) two districts that had fully implemented DPS on all city routes and one district, chosen to obtain additional geographic dispersion, that was located in close proximity to our staff in Denver, which did the field work and (2) two delivery units—within each selected district—with both high office efficiency and declining street efficiency.
We conducted site visits to these locations to observe delivery operations and interviewed responsible area, district, and delivery unit officials. We also judgmentally selected 142 city and rural carriers at the delivery units we visited to obtain their experience and views about DPS implementation. These carriers were selected on the basis of their availability at the time of our visit to the units where they were located. These selected sites, managers, and carriers are not statistically representative; therefore, we cannot generalize from our sample to the universe of all carriers. We do, however, use the results of these interviews to present illustrative examples of DPS-related issues from the points of view of the carriers and managers. We requested comments on a draft of this report from the Postmaster General and the presidents of the seven labor unions and management associations including the American Postal Workers Union (APWU); NALC; National Postal Mail Handlers Union; National Rural Letter Carriers’ Association (Rural Carriers); National Association of Postal Supervisors (NAPS); National Association of Postmasters of the United States; and National League of Postmasters of the United States. The Service and NALC provided written comments, which are reprinted in appendixes IV and V, respectively. APWU and NAPS provided oral comments. The comments of these four organizations are discussed in appropriate sections throughout the report and at the end of the report. The remaining organizations did not provide comments. We conducted our review from June 1997 through February 1998 in accordance with generally accepted government auditing standards (see app. I for additional detail). By November 1997, the Service was making progress toward meeting its fiscal year 1998 goal for completion of letter automation through DPS implementation. 
Since March 1993, DPS has been implemented at increasing numbers of delivery units as equipment was deployed and DPS volume grew. The letter automation program suffered initial slippages, which caused DPS implementation to fall behind schedule. The 1996 Plan revised the benchmarks set in the 1992 Plan, extending the automation program completion date from fiscal-year-end 1995 to 1998. While the Service has not achieved all its DPS implementation benchmarks, it has deployed all authorized DPS equipment and exceeded the goal for the number of delivery zones to receive DPS letters. It is making progress in meeting its benchmarks for numbers of barcoded letters and DPS routes. While the Service did not have complete data to measure total DPS volume or percentage on the routes, it estimated that carriers were receiving an average of about half their letters sorted in delivery sequence, compared with the 70 to 85 percent that the Service expects to achieve by the end of fiscal year 1998, when DPS is scheduled to be fully implemented. DPS implementation has been an ongoing process since it began in March 1993. The 1992 Plan called for implementation of DPS in delivery zones that have an equivalent mail volume of 10 or more city routes and on rural routes with city-style addresses. The 1992 Plan did not call for implementation in small offices and in many rural areas; the 1996 Plan’s goal equated to implementing DPS on 154,000 routes, or about 63 percent of the number of city and rural carrier routes existing at fiscal-year-end 1997. As the volume of barcoded letters increased, the Service purchased and deployed the automation equipment needed to delivery sequence the letters and gradually increased the number of delivery units and carrier routes that receive a portion of their letters delivery sequenced.
DPS implementation is achieved through a team effort among local delivery, processing, address management, and logistics operations to extend DPS to increasing numbers of delivery units, such as post offices, stations, or branches where letter carriers prepare mail for delivery and then deliver it to addresses along regularly scheduled routes. The key DPS implementation steps are as follows:

- Select delivery units for DPS that generally have 10 or more city routes or rural routes with city-style addressing.

- Deploy Delivery Bar Code Sorters and Carrier Sequence Bar Code Sorters at mail processing plants and delivery units, respectively, to provide delivery sequenced letters.

- Before DPS is implemented in each delivery unit, analyze route alignments and plan for future DPS realignments by taking the appropriate actions authorized in the 1992 joint agreements’ training guide.

- Determine each unit’s target DPS percentage of total letters that, when achieved, triggers DPS route adjustments. Targets are set using either the unilateral or the X-route process authorized in the Service-NALC 1992 joint agreements. Targets are to be set at 70 to 85 percent under the X-route process and at management discretion under the unilateral process. According to Service guidance, interim adjustments can and should be made when DPS volume reaches 40 percent.

- Manually sort DPS letters and correct any automated sort errors until 98 percent sort accuracy is achieved for 3 consecutive days, after which DPS letters are taken to the street without manually sorting them.

- Add delivery points and increase street time on routes to capture in-office workhours that are saved by carriers not manually sorting DPS letters prior to delivery.

Figure 1 presents highlights of events in the implementation of the letter automation program and DPS, which we will discuss throughout this report. In its 1992 Plan, the Service scheduled DPS implementation for completion by fiscal-year-end 1995.
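As a rough illustration of the sort-accuracy trigger described above, under which carriers verify DPS letters until 98 percent accuracy is sustained for 3 consecutive days, the check might be sketched as follows (hypothetical code, not Service software):

```python
def ready_to_skip_manual_sort(daily_accuracy, threshold=0.98, days_needed=3):
    """Return True once sort accuracy meets the threshold for the
    required number of consecutive days; illustrative only."""
    streak = 0
    for accuracy in daily_accuracy:
        # Reset the streak whenever a day falls below the threshold.
        streak = streak + 1 if accuracy >= threshold else 0
        if streak >= days_needed:
            return True
    return False

print(ready_to_skip_manual_sort([0.97, 0.99, 0.98, 0.985]))  # True
```

Note that a single below-threshold day resets the count, so the accuracy must be consecutive, not merely frequent.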
The 1992 Plan included DPS goals and benchmarks for (1) deploying all needed barcode sorters, (2) barcoding virtually all letter mail, and (3) implementing DPS for specific delivery zones and carrier routes. However, the Service was unable to achieve these goals by 1995 as planned, due to several delays in completing the automation program. In August 1992, the Service’s Board of Governors postponed approval of the next phase of automated equipment procurement affecting DPS, pending a thorough review and evaluation of the supporting decision analysis report by the newly appointed Postmaster General. Then, in April 1994, the Postmaster General announced that the barcoding goal would have to slip from 1995 to the end of fiscal year 1997. The initial program slippages were primarily due to a shortfall in volume of barcoded letters caused by a delay in deploying remote barcoding and lower-than-anticipated barcoding performance by Service Optical Character Readers. In its 1996 Plan, the Service extended the DPS completion date to the end of fiscal year 1998 and revised associated goals and benchmarks. In fiscal year 1995, the second full year of DPS implementation, we reported DPS had fallen behind schedule and that the Service would have to overcome difficult obstacles to complete the automation program by the target date, fiscal-year-end 1998. The Postal Inspection Service also found that the Service experienced initial difficulties implementing DPS and capturing projected savings due to, among other things, low DPS volume and carriers’ distrust of sorting accuracy. The Inspection Service also reported that DPS implementation was hindered by, among other things, field units’ noncompliance with the Service’s national DPS guidelines as well as inefficient flow of letters through automated processing operations or letters totally bypassing automation. In addition, carriers did not always gain the efficiencies the Service needed to capture workhour savings. 
For example, many carriers wanted, and were allowed, to manually sort DPS letters before delivery, in part because of the low percentage of DPS letters compared with non-DPS letters and a lack of confidence in sort accuracy. Shortly after introducing DPS, the Service also lowered its estimate of the amount of office time each carrier would save by not manually sorting DPS mail. Initially, office time was to decrease from the existing 4 hours per day to 2 hours per day, and street time was to increase from 4 hours per day to 6 hours per day. Office workhours were to decrease as the amount of DPS mail provided to the carriers increased. Theoretically, when the DPS volume received by each delivery unit met preestablished targets, the DPS routes were to be adjusted to add deliveries and street time. However, as the Service gained experience with DPS implementation, it became clear that target DPS volumes had been set too high and could not be achieved. As a result, the Service lowered its expectation of in-office savings to 80 minutes per day, based on lower targets and standard sorting rates and volumes. The Service prepared the 1996 Plan to revise automation goals and benchmarks following initial delays in capturing letter automation savings. The 1996 Plan extended the DPS implementation completion date to fiscal-year-end 1998. Achieving the revised implementation benchmarks required that automation equipment be purchased, deployed, and used effectively to achieve the planned barcoded and DPS letter volumes. By November 1997, the Service had deployed all authorized DPS equipment—4,784 Delivery Bar Code Sorters and 3,726 Carrier Sequence Bar Code Sorters—at a total cost of about $1.3 billion. Table 1 shows the letter automation goals that were to be achieved for fiscal years 1995 through 1998, when the program is scheduled to be fully implemented. The 1996 Plan did not include specific goals for DPS volume or the percentage of DPS letters on carrier routes.
However, the Service’s analyses of projected carrier workhour savings and its 1992 joint agreements with NALC assumed that, as DPS was implemented in delivery units, at least 70 to 85 percent of letters arriving in these units for carrier routes would be sorted to DPS. After fiscal year 1998, the Service plans to continue its efforts to further increase barcoded and DPS volumes in order to sequence as many letters as possible. By November 1997, the Service was making progress but had not met all the automation and DPS implementation benchmarks designated in its 1996 Plan for fiscal year 1997. Reported barcoded volume and the number of routes on DPS were slightly below the 1997 benchmarks, despite the Service having exceeded the goal for the number of zones where DPS was scheduled to be implemented. Further, the Service did not obtain data from its field offices sufficient to accurately measure total DPS volume or the percentage of DPS letters going to city and rural routes where DPS had been implemented. In lieu of complete DPS volume data, Service officials estimated that over half of the letters given to carriers were sorted in delivery sequence. Percentage of total letters barcoded. Nationally, the percentage of total letters barcoded increased from about 52 percent in 1993 to about 81 percent in 1997, an increase of about 29 percentage points. After achieving its fiscal year 1995 and 1996 benchmarks, the Service reported barcoding 106.8 billion letters, or 81 percent of total letters, by the end of fiscal year 1997, compared with the 85-percent benchmark for that year, as shown in table 2. This 4-percentage-point shortfall represents about 4.6 billion letters. However, Service officials said they believe they will reach the 1998 barcoding goal of 88 percent of letters as Classification Reform encourages more customer barcoding and the Service continues its efforts to increase its own barcoding using Optical Character Readers and remote barcoding. Percentage of letters sorted to DPS.
Data on actual DPS volume were not aggregated nationally, but Service officials estimated that, on average, carriers received over half their letters sorted to DPS. DPS savings projections are based on achieving at least 70 to 85 percent DPS volume on carrier routes as DPS is implemented in each delivery unit. The Service’s requirements for data to be aggregated nationally resulted in reporting of only a portion of DPS volume. Nationally, data were aggregated only for city routes and letters sorted on Delivery Bar Code Sorters in the processing plants, and they excluded DPS letters sorted on Carrier Sequence Bar Code Sorters deployed in delivery units and all DPS letters on rural routes. The Service also did not aggregate data on total letter volume sent to DPS routes. In the absence of complete data to calculate the actual DPS percentage of total letters received by delivery units and routes, Service officials arrived at an estimate using average daily mail volumes on city routes. As of October 1997, these routes received an estimated daily average of 1,700 to 2,000 total letters and 1,000 DPS letters. Thus, the officials estimated that DPS routes received, on average, about 50 to 59 percent DPS letters, compared with the Service’s expectation of 70 to 85 percent when implementation is completed. DPS percentages varied among the delivery units and carriers we visited, but the average generally appeared to be close to the Service’s estimated average. At the time of our visits, the six delivery units reported DPS percentages ranging from 35 to 80 percent, with an average of 57 percent, but they did not have data on DPS percentages for their individual routes. Of the 139 carriers we interviewed at these units who provided estimates, 81 percent estimated that their DPS volume was 50 percent or more of total letters. Individual estimates among the 139 carriers ranged from a low of 5 percent to a high of 87 percent. DPS Zones and Carrier Routes.
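The officials’ 50 to 59 percent estimate above follows directly from the reported volumes; a minimal check of the arithmetic, using figures taken from the report:

```python
dps_letters = 1_000                   # estimated DPS letters per city route per day
total_letters_range = (1_700, 2_000)  # estimated total letters per city route per day

# The DPS share is lowest on the heaviest mail day and highest on the lightest.
low = dps_letters / total_letters_range[1] * 100
high = dps_letters / total_letters_range[0] * 100
print(f"about {low:.0f} to {high:.0f} percent DPS")  # about 50 to 59 percent DPS
```

Either end of the range falls well short of the 70 to 85 percent the Service expected at full implementation.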
The Service has surpassed its goal for the number of zones that will receive DPS letters. Better-than-anticipated performance of the smaller barcode sorters in delivery units has allowed the Service to deploy equipment to more small zones than originally planned; for example, those with mail volume equivalent to less than 10 routes. The 1996 Plan did not establish yearly goals for the number of zones to receive DPS letters but called for 6,300 zones to receive DPS letters by the end of fiscal year 1998. By the end of fiscal year 1997, 7,632 zones were reported as receiving DPS letters. The Service reported achieving about 96 percent of its fiscal year 1997 benchmark for the number of carrier routes receiving delivery sequenced letters. As shown in table 3, by the end of fiscal year 1997, the Service had reported implementing DPS on 142,557 city and rural routes. Service officials said that as implementation progresses and more addresses are delivery sequenced, they believe they will achieve their goal of 154,000 DPS routes in a fully operational DPS environment. The Service is capturing projected city carrier workhour savings through its budget process. The Service’s decision analyses projected total carrier savings of 27.2 million workhours through fiscal year 1997 and total savings of 56.7 million workhours by the end of fiscal year 2001. Since 1994, the Service has annually reduced city carrier workhour budgets to capture the projected savings. Despite budgeted reductions of 26.5 million workhours through fiscal year 1997, reported actual carrier workhours decreased by a total of only 22.5 million through that year. However, during fiscal years 1996 and 1997, the Service reported that field offices achieved actual carrier workhour reductions that exceeded their budgeted workhour reductions by 5.8 million. The Service believes these workhour reductions and the reductions in the number of city routes can be attributed to DPS.
However, Service officials acknowledged that some workhour reductions might have been achieved through managers’ efforts to increase efficiency that were not related to DPS. The Service’s projections of carrier in-office workhour savings to be achieved by DPS were established in its decision analysis reports, which were used to justify automation equipment investments. Service officials identified six reports that were used to justify investments totaling over $1.7 billion in barcode sorting equipment (Delivery Bar Code Sorters and Carrier Sequence Bar Code Sorters) needed to implement DPS. These analyses contained assumptions about factors such as equipment deployment and performance, growth in mail volume and delivery points, the pace of DPS implementation, and DPS letter volume. The in-office workhour savings were to reduce overtime on routes, extend street time, and ultimately restrain the rate of growth in routes and carrier positions. The Board approved these investments between fiscal years 1992 and 1996, and equipment deployment proceeded in stages during that period. Together, these analyses projected yearly benchmarks for carrier workhour savings. About 56.7 million carrier workhours are projected to be saved through fiscal year 2001. By the end of fiscal year 1997, the total cost of this investment was $1.3 billion. The Service has budgeted almost all the city carrier workhours that the decision analyses projected would be saved through the end of fiscal year 1997. As shown in figure 2, by the end of fiscal year 1997, budgeted workhour reductions totaled 26.5 million, or over 97 percent of the projected 27.2 million workhour reductions. In fiscal year 1998, DPS is projected to save an additional 14.9 million workhours, of which the Service has budgeted 12.6 million. To capture savings in city carrier workhours, Headquarters staff are to adjust the 11 postal areas’ annual budgets by reducing carrier office workhours to reflect the projected savings for the coming year.
The areas then are to incorporate the budgeted reductions into their districts’ budgets. By reducing the city carrier workhour budgets in this manner, Headquarters staff said they believe the projected DPS savings will be captured (see figure 2). Actual total reductions in aggregate city and rural carrier workhours fell short of the amount budgeted between fiscal years 1994 and 1997. As shown in figure 2, by the end of fiscal year 1997, actual carrier workhours had been reduced by 22.5 million, or 85 percent of the budgeted reduction of 26.5 million workhours. Service officials said they believe the workhour reductions achieved are due to DPS because there is no other program that could account for them. However, the officials said that some managers might have achieved some workhour reductions through individual initiatives that were unrelated to DPS. For example, one delivery unit we visited was not achieving all the workhour savings expected from DPS because DPS volume was only 60 percent of total letters, but the manager said the unit was able to increase its savings by implementing suggestions made by carriers for changes, not related to DPS, that would make their jobs easier. Although Service managers praised DPS’ ability to save carrier workhours, they said that individual delivery units may not achieve expected savings due to certain conditions—such as volume mix and growth, staffing levels, labor-management relations, and management quality. Even though aggregate actual workhour reductions lagged behind those projected and budgeted for fiscal years 1994 through 1997, actual workhour reductions in each of the last 2 years of this period exceeded those budgeted. In fiscal years 1996 and 1997, as shown in figure 3, actual workhour reductions exceeded the budgeted amounts by 1 and 4.8 million, respectively. These reductions helped offset an initial workhour increase in fiscal year 1994. 
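The relationship between budgeted and actual reductions described above reduces to simple arithmetic; the workhour figures come from the report, and the calculation is only illustrative:

```python
budgeted_reduction = 26.5  # million workhours budgeted, fiscal years 1994-1997
actual_reduction = 22.5    # million workhours actually reduced, same period

shortfall = budgeted_reduction - actual_reduction
share_achieved = actual_reduction / budgeted_reduction
print(f"shortfall: {shortfall:.1f} million workhours")  # shortfall: 4.0 million workhours
print(f"share achieved: {share_achieved:.0%}")          # share achieved: 85%
```

The 4 million workhour gap is largely explained by the fiscal year 1994 increase discussed next, after which actual reductions began outpacing budgeted ones.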
Service officials said this unanticipated increase in workhours was due to a national level arbitrator finding in favor of NALC in a case regarding the Service’s establishment of city carrier routes that required more than 8 hours to complete, which the arbitrator determined violated the parties’ labor agreement. This decision caused the Service to hire about 18,000 career city carriers between fiscal years 1993 and 1994. As a result, as shown in figure 3, not only were there no workhour reductions in fiscal year 1994, but workhours increased by 5.6 million. Furthermore, Service officials said that growth in volume and delivery points during the period exceeded their expectations, which also affected the field units’ ability to achieve projected savings. Even when allowing for this growth, the officials said that they believe the Service had avoided more costs than was evidenced by their workhour reductions alone. For example, the eight-tenths of 1 percent annual growth in number of delivery points on city routes, without DPS, would require adding 1,300 city routes per year. These officials said that DPS had allowed them to avoid much of the cost of this growth and also reduce the number of city routes that were needed. Factoring in the additional workload resulting from this growth, if DPS had not been implemented, the Service calculated that it would have used 30.4 million more city carrier workhours between fiscal years 1993 and 1997 than it actually used. As a result of this cost avoidance, Service officials reported that city carrier routes increased by 267 in fiscal year 1995 and decreased in fiscal years 1996 and 1997 by 858 and 2,561, respectively, which resulted in an overall decrease of 1.8 percent since fiscal year 1994. These officials estimated that the number of city routes will continue to decline through fiscal year 2000. In contrast, the number of rural routes increased by 5,938, or 11.5 percent, during the same period. 
The officials said that one reason for this growth is that delivery points on rural routes have grown by an average of 3.84 percent annually since fiscal year 1994. In addition, the cost per delivery is lower for rural routes than for city routes, so when new routes become necessary due to growth, delivery managers tend to establish rural routes where feasible and cost effective. In commenting on a draft of this report, the President of NAPS told us that he believed the Service establishes rural routes over city routes, not because rural routes were less costly, but because rural carriers do not present as many labor relations problems as do city carriers. He also believed that rural routes are not really less costly to the Service than city routes because the rural carrier compensation system is too liberal. Under this system, rural carriers are salaried employees who are paid for a full 8-hour workday or 40-hour workweek with some overtime built into their salaries. However, this system allows rural carriers to go home early and receive a full day’s pay if they complete their work in less than 8 hours. Rural routes reportedly contributed an estimated 4.5 million workhours in direct DPS savings valued at about $100 million. In contrast to city routes, on which carriers are paid by the hour for 8-hour routes plus any authorized overtime, rural carriers bid on their routes and are paid salaries that represent the value of the routes established through annual evaluations of mail volume and time required to manually sort mail and make deliveries. As a result, DPS savings from rural routes are to be captured annually by reducing the value of the routes and carriers’ pay commensurate with the volume of DPS letters that the carriers do not have to sort. Because the savings were already extracted from rural carriers’ salaries, there was no need to manage and track rural carriers’ hourly savings. 
As a result, the Service allows rural carriers to manually sort DPS letters if they wish. However, city carriers must capture savings each day in hourly increments by not manually sorting DPS letters. As a result, the Service does not allow city carriers to sort DPS letters. The Service has been addressing remaining issues it believes have affected its efforts to achieve its fiscal year 1998 DPS goals and benchmarks and maximize carrier workhour savings resulting from DPS. These issues include both operational and labor-management relations issues. The Service has made considerable progress in its efforts to address operational issues, although it has been less successful with those concerning labor-management relations. Table 4 presents an overview of the operational issues and the Service’s efforts to address them. The operational issues shown in table 4 were identified by the Service as impeding its efforts to achieve DPS goals and benchmarks and maximize DPS savings. The Service has efforts under way to increase barcoded and DPS letter volumes by encouraging business customers to apply barcodes and improve address quality through rate incentives. The Service has also begun efforts to improve the management of mail flow through its automated barcoding operations by providing more training to employees as well as enhancing the capabilities of its optical character readers. Further, to improve management of mail flow through its automated sorting operations, the Service is attempting to determine causes of problems and then resolve them. Also, DPS implementation teams have been designated to, among other things, serve as links between mail processing and delivery unit operations regarding DPS issues. Finally, the Service has developed, and is in the process of implementing, a method to sort in delivery sequence letters that are addressed to units in multioccupancy buildings, which account for about 19 percent of total deliveries. 
Regarding city carriers' declining street efficiency, the Service is focusing efforts on improving delivery management to reverse this trend and enhancing its ability to adjust routes to capture DPS savings. To increase workhour savings, the Service has provided additional funds so that route inspections can be conducted and carriers' routes can then be adjusted to capture DPS savings. The Service has also authorized the use of contractors to perform route inspections. In addition, it is working to improve supervision of city carriers' street operations and testing both alternative delivery methods and new city carrier performance standards. For a more detailed discussion of the operational issues and the Service's actions to address them, see appendix II. Although the Service has made progress toward resolving its operational issues, it has been less successful in resolving those involving labor-management relations. Labor-management relations issues have been affecting the Service's efforts to reach its fiscal year 1998 goals and benchmarks, as well as its ability to maximize DPS savings. These issues include disagreements with NALC over DPS implementation and the need to gain the support of city carriers who are dissatisfied with DPS work methods. Table 5 presents an overview of the labor-management relations issues we identified and the Service's efforts to address them. The Service has had problematic relations with three of the four major labor unions that represent postal employees, including NALC, which represents city carriers, over a variety of issues for a long period. DPS implementation has been one of the contentious issues between the Service and NALC and its city carriers.
The DPS conflicts revolved around three areas: (1) the work methods that city carriers should use to implement DPS; (2) the manner in which the Service implemented DPS, which NALC viewed as inconsistent with the agreements it had reached with the Service; and (3) DPS' effect on city carrier street efficiency. Many city letter carriers said that they believe DPS work methods adversely affected their efficiency and, in some cases, service to their customers. City carriers were particularly concerned about not being able to manually sort DPS letters to combine them with the non-DPS bundle or to identify DPS sort errors and other undeliverable letters before going to the street. Many of the city carriers' disagreements with DPS resulted in grievances filed at the national and local levels. Although most grievances were resolved through settlement, three went to national level arbitration. In 1996, a national level arbitrator ruled on one of the cases, finding that the Service had not violated the rules relating to transitional employees from prior agreements. During 1997, another national level arbitrator ruled on the two remaining cases and determined that the Service had violated either provisions of existing labor agreements or the 1992 joint agreements. The arbitrator instructed the Service and NALC to jointly determine alternative methods to resolve the problems. In one case, the parties agreed to conduct a joint study of the issues involved and complete it by April 1998. In the second case, the parties have met to discuss the issue, but as of March 1998, they had not yet agreed on how to resolve it. In addition, to improve their overall working relationship, on October 20, 1997, the Service and NALC signed an agreement to test a revised dispute resolution process aimed at narrowing areas of dispute and resolving their disagreements effectively and constructively.
Regardless of how one views the Service's and NALC's positions, the disagreements between them have had adverse consequences. These consequences include delays in capturing early DPS savings from route adjustments, dissatisfaction among many city carriers, and additional contention between the Service and NALC. In part due to the arbitrator's decisions, the Service and NALC have begun to work jointly on some of the areas of disagreement. Unlike the situation with NALC and city carriers, the Service has not had a contentious relationship with its rural letter carriers or their union, the Rural Carriers. This is largely due to the agreement the Service reached with the Rural Carriers regarding a new manual sorting standard for delivery sequenced letters. For a more detailed discussion of labor-management relations issues and the Service's actions to address them, see appendix III. We provided a draft of our report to eight organizations for their review and comment. The eight organizations were the Postal Service; the four labor unions, including APWU, NALC, the National Postal Mail Handlers Union, and the Rural Carriers; and the three management associations, including NAPS, the National Association of Postmasters of the United States, and the National League of Postmasters of the United States. We received written comments from the Service and NALC. We obtained oral comments from NAPS and APWU. The remaining organizations said they did not wish to comment on the draft report. Service officials also provided written and oral technical comments to clarify and update some information in the draft report. Overall, the Service and NALC expressed diverse views regarding the effects of DPS and its related labor-management relations issues. The Service said that our report gave an accurate summary of the letter mail automation programs.
The Service reiterated the extent of DPS implementation on carrier routes and workhour savings, which it noted was more successful than anticipated. The Service also acknowledged that it and NALC have had numerous disagreements regarding DPS implementation but noted that the disputes over DPS have either been resolved or are in the process of being resolved and that the parties are engaged in a number of cooperative ventures that they expect will have a beneficial effect on labor-management relationships. We have reprinted the Service's comments in appendix IV. In its comments, NALC reaffirmed its support for DPS and noted that automation would enhance the Service's long-term viability and employment of the letter carrier craft. NALC criticized the methodology we used to gather information, including our reliance on (1) data provided by the Service without verifying its accuracy, (2) interviews with and observations of a relatively small number of letter carriers, and (3) Service managers' opinions about the success of DPS. While we recognize the limitations associated with our scope and methodology and took special care to stay within them, we do not agree with NALC's critical characterization of the report. The report clearly lays out our objectives, scope, and methodology, including the limitations, so as to fully inform the reader of the basis and context surrounding the information in the report. Due to limited resources and the technical difficulties inherent in verifying the Service's data, which are aggregated from its vast field network, we disclosed in the report that we used the Service's data on carrier workhours without verifying them. We also clearly disclosed that we interviewed a relatively small number of city carriers in three postal districts to obtain their opinions about DPS issues.
We discussed in the report several types of data that the Service did not have or that were not sufficient to produce accurate measures, such as DPS sort accuracy and the percentage of DPS letters on carrier routes. To supplement the available data and to discern the Service's position on DPS implementation history and labor relations issues, we obtained the views and opinions of Service delivery managers. Further, to provide balance, we obtained views and opinions about these same issues from national leaders of NALC and included both parties' opinions in the report. NALC also commented on several specific issues discussed in the report. We considered these comments and made changes to the report where appropriate. We also have included a reprint of NALC's comments and our additional comments on specific issues, where appropriate, as appendix V. The oral comments we received from APWU and NAPS primarily sought clarification of points based on their positions and knowledge of historical events regarding letter mail automation and carrier delivery operations. The Assistant Director of APWU's Clerk Division told us that postal clerks, whom APWU represents, have made various contributions to assist the Service's letter mail automation efforts, which the report should mention. He pointed out that APWU clerks have always cooperated with the Service to implement automation and entered into agreements with the Service that have facilitated the Service's capture of savings. We agree that the postal clerks have made contributions in reducing workhours in the Service's processing plants as automation was implemented. The President of NAPS gave us his views about the Service's difficulties in managing city carrier delivery operations and the need for city carriers to support DPS. He said that the Service's curtailment of route inspections between about 1975 and 1990 marked the beginning of the Service's difficulties in managing city delivery operations.
Without route inspections, normal mail volume growth and new addresses resulted in routes that were out of adjustment. These routes had workloads that could not be completed within 8 hours, which led to significant amounts of overtime each day to deliver mail. City carriers serving these routes were required to negotiate daily with their supervisors for overtime. Overall, he said that this condition triggered the conflict between city carriers and their supervisors that continues today. The President of NAPS said that some incentives are needed to encourage carriers to support DPS. For example, he suggested that if routes could be accurately evaluated each day, the daily overtime negotiations would be eliminated and carriers could be allowed to go home after completing their duties, even if they finished in less than 8 hours. The President said that he believed this would be possible when DPS is fully implemented, including the automated sorting of flats. That is, every morning, delivery unit supervisors could obtain exact mail counts from the automation equipment and use these data to evaluate workload requirements on each route. This would allow the supervisors to determine exactly how much time individual carriers would need to sort and deliver their mail on that day. The President said he believed these daily evaluations could replace the periodic city route inspections now conducted and would be superior to the annual evaluations now conducted on rural routes to determine rural carriers' salaries. However, he said that if incentives are unsuccessful and carriers do not cooperate, supervisors cannot be expected to watch all the carriers while they deliver mail to ensure they are working efficiently. For this reason, the President said that he would support the use of a global satellite system, which is now being tested, to monitor carriers while they deliver mail.
He also provided other comments about the information presented in the draft report, which have been incorporated into the report where appropriate. We are providing copies of this report to the Subcommittee's Ranking Minority Member; the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs; the Postal Service; APWU; NALC; the National Postal Mail Handlers Union; the Rural Carriers; NAPS; the National Association of Postmasters of the United States; the National League of Postmasters of the United States; and other interested parties. We will also make copies available to others on request. Major contributors to the report are listed in appendix VI. If you have any questions, please call me on (202) 512-8387. In a June 9, 1997, letter, the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, asked us to provide information on the status of the U.S. Postal Service's efforts to implement Delivery Point Sequencing (DPS). As agreed with the Chairman's office, our objectives were to (1) determine the U.S. Postal Service goals for DPS implementation, its projected letter carrier workhour savings, and the extent to which the Service has achieved these; and (2) identify any remaining issues that the Service and others believe must be addressed for the Service to achieve its 1998 DPS goals and the actions, if any, that the Service has taken to address these issues. To determine the Service's goals for DPS implementation, we reviewed the Service's 1990, 1992, and 1996 Corporate Automation Plans, which among other things described the DPS-related activities, annual benchmarks, goals, and associated timeframes for completing the letter mail automation program.
We reviewed the Decision Analysis Reports, which justified the DPS-related automated equipment investment, to identify the (1) assumed DPS letter volume that carrier routes were to receive and (2) projected carrier workhour savings that were to be achieved from DPS implementation. To determine the progress the Service has made toward achieving its goals, we obtained the Service's fiscal years 1993 through 1997 national data on actual carrier workhour savings; barcoded and DPS letter volumes; productivity; and the number of delivery zones, delivery units, and carrier routes that receive DPS letters. We compared these actual data with the appropriate DPS benchmarks, goals, assumptions, and projected workhour savings in the 1996 Corporate Automation Plan (Plan). We discussed with responsible Postal Service headquarters officials these benchmarks, goals, assumptions, and projected carrier workhour savings to assist us in determining the progress the Service has made toward achieving them. We also discussed with these officials how the Service (1) used its budget process to capture carrier workhour savings and (2) determined the cost avoidance associated with DPS implementation. We did not verify the operational and budget data that the Service provided. To identify any remaining issues that may affect the Service's ability to achieve its 1998 DPS goals, we considered the findings from our prior audit reports and those of the Postal Inspection Service on letter mail automation. We analyzed the ongoing and planned DPS implementation tasks described in the 1996 Plan, which the Service plans to complete to achieve its 1998 DPS goals. We discussed these findings and tasks with responsible Postal Service headquarters officials and asked them to identify the key remaining issues that the Service must address. We also asked the officials to identify any actions the Service has taken to address any remaining issues and the current status of these actions.
Our preliminary work indicated that the Service was experiencing labor-management relations problems with its city carriers over DPS implementation. On the basis of that work and our knowledge of persistent labor-management relations problems in the Service from our past work, we contacted the Service's four major labor unions and three management associations to identify whether these organizations believed that there were any labor-management relations issues that the Service must address to achieve its 1998 DPS goals. We interviewed national representatives of these organizations located in the Washington, D.C., metropolitan area to obtain their views on the impact DPS implementation has had on postal operations and the working conditions of the postal employees they represent. The four labor unions contacted were (1) the American Postal Workers Union (APWU), (2) the National Association of Letter Carriers (NALC), (3) the National Postal Mail Handlers Union (Mail Handlers), and (4) the National Rural Letter Carriers' Association (Rural Carriers). The three management associations contacted were (1) the National Association of Postal Supervisors (NAPS), (2) the National Association of Postmasters of the United States (NAPUS), and (3) the National League of Postmasters of the United States (the League). We also discussed the identified labor-management relations issues with responsible Postal Service headquarters officials. To gain an understanding of labor-management relations issues within the Service, we reviewed relevant documents, including our prior reports, Service and NALC 1992 joint agreements, national arbitration cases regarding city carrier grievances associated with DPS, and city carrier DPS training materials.
To observe any issues that the Service and its unions and management associations identified, we selected a judgmental sample of 3 districts and 6 delivery units located within 3 of the 11 Postal Areas, which included Capital Metro Operations (Capital Area). We selected the Northern Virginia District in the Capital Area and the Suncoast District in the Southeast Area because these two districts had fully implemented DPS on all city routes, which meant that carriers on these routes were receiving and taking delivery sequenced letters directly to the street. We also selected the Denver District in the Western Area because it gave us additional geographic dispersion and was located in close proximity to our staff in Denver. Within each district, we selected two units that reported both the highest office efficiency and declining street efficiency, compared to the same period last year, and were located within 2 hours' driving distance of the district office. We used these efficiency measures as selection criteria because, according to the Service, (1) office efficiency was expected to increase with DPS implementation and (2) street efficiency had declined on both DPS and non-DPS routes, and the Service reported that this decline had offset some DPS savings. We limited the number of districts and delivery units selected to three and six, respectively, because gathering DPS-related information from these offices was a time-consuming effort that involved examining records and interviewing managers, carrier supervisors, and carriers at several geographically dispersed locations. We interviewed responsible Service officials from the three district offices to obtain their views on DPS implementation within the district. We discussed the DPS implementation process, its effect on mail processing and delivery operations, carriers' concerns with DPS work methods, and ongoing efforts to identify and resolve DPS-related problems.
In addition, we interviewed responsible Service officials in the three area offices to obtain an area-wide perspective on DPS implementation, capturing DPS savings through the budget process and route adjustments, and carriers' concerns with DPS work methods. At each of the six delivery units we visited, we interviewed the managers, carrier supervisors, and carriers to obtain their views on DPS implementation and related concerns about DPS work methods. Because carriers generally spend most of their workday on the street delivering mail, the best time to interview them is in the morning while they are in the office. To maximize the number of carriers who could be interviewed by our available staff, we arrived at each delivery unit about the time the carriers reported for work and began interviewing them. We judgmentally selected the carriers we interviewed on the basis of their availability at the time of our visit. In order not to disrupt delivery operations, we interviewed carriers individually while they prepared their mail for delivery. Each interview required 5 to 10 minutes to complete. We continued interviewing the carriers until they departed the office to deliver the mail. In total, we interviewed 111 city and 31 rural carriers at the 6 delivery units. We then met with unit management to discuss the progress and problems associated with DPS implementation within the unit. We reviewed each unit's operational data, which included detailed information on the carrier workforce, mail volume, possible deliveries, and route adjustments. At two delivery units, one of our staff members accompanied a carrier on the route to observe DPS work methods. The selected units and carriers are not statistically representative; therefore, we cannot generalize from our sample to the universe of all carriers. We do, however, use the results of these interviews to present illustrative examples of DPS-related issues from the carriers' points of view.
While we most likely did not identify every possible DPS-related issue that could exist within the universe of delivery units, district officials told us that the units we visited were not atypical of others within their districts. In addition, we visited two mail processing plants in Denver, CO, and Tampa, FL, and a remote barcoding site in Tampa. We toured each facility and observed its DPS-related operations. We met with responsible Service officials at each facility and discussed various DPS-related issues, including DPS equipment deployment, operation, and enhancement; mail flows; barcoding; and problem identification and resolution. We did our work from June 1997 through February 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General; the presidents of the four labor unions (APWU, NALC, Mail Handlers, and Rural Carriers); and the three management associations (NAPS, NAPUS, and the League). We received written comments from the Service and NALC, which are reprinted in appendixes IV and V, and obtained oral comments from APWU and NAPS. The comments of these four organizations are discussed in appropriate sections throughout the report and at the end of the report. The remaining organizations did not provide comments. The Service was addressing operational issues that it believed impeded its efforts to achieve DPS goals and benchmarks and maximize DPS savings. These issues include (1) less-than-expected barcoded letter volume, (2) low DPS letter volume, and (3) declining street efficiency. These issues and the Postal Service's efforts to address them are discussed below. The Service is attempting to achieve a 7-percentage-point increase in barcoded letters to meet its 1998 DPS implementation goal of 88 percent.
Service officials said they believe they will achieve this goal through better rate incentives for customer barcoding and resolving mail flow and readability problems. In anticipation of implementing better rates for customer barcoding, the Service revised its barcoding strategy, raising the share of barcodes to be supplied by customers from about 40 percent to 50 percent by fiscal-year-end 1998. In July 1996, the Service implemented better rates for customer barcoded letters that meet new requirements for barcode quality and address accuracy, which is critical to achieving accurate 11-digit barcoding. The officials believe these better rates were a major factor in customer barcoding increasing over 9 percentage points during fiscal year 1997, compared with increases of about 3.5 percentage points during each of the 2 previous years. The Service is also trying to increase the number of letters it barcodes in its mail processing plants. Since fiscal year 1993, Service data show it has increased its barcoding by only 11 percentage points. Service managers said that mail processing plants were neither barcoding all the letters that could be barcoded nor upgrading all letters that have 5- or 9-digit barcodes to 11-digit barcodes. The managers said that many letters were not being barcoded or upgraded because mail processing personnel did not route these letters to the appropriate optical character readers or remote barcoding systems for processing. According to Service officials, local mail processing managers have taken various actions to correct these mail flow problems.
These actions include (1) enhancing mail processing employees' knowledge of the types and quality of letters that can be barcoded through classroom and on-the-job training, (2) obtaining feedback from delivery unit managers to identify batches of letters that bypassed automation and developing ways to prevent similar batches of letters from bypassing automation in the future, and (3) working with local business mailers to increase their volume of automation-compatible letters. Poor address quality hampers the Service's barcoding success, and as customers succeed in barcoding more letters, this problem will be exacerbated because the remaining letters fed to the Service's optical character readers will be of lower quality and more difficult to barcode successfully. The remaining letters may have addresses that the Service's optical character readers cannot read due to factors such as poor print quality, style of type, or color or composition of the paper used to make the envelope. To increase barcoding, the Service recently completed the deployment of remote barcoding systems and optical character reader enhancements. One component of this system is a remote computer reader that uses advanced computer technology to read images of problem addresses and determine the appropriate barcode to be applied to the letter. Some of the optical character reader enhancements include updating the address recognition modules to read additional characters and dot-matrix print and installing wide-area barcode readers to locate and read a barcode virtually anywhere on an envelope. As these systems and enhancements become fully operational, the Service expects that its barcoding capability will improve. The Service is also trying to have 70 to 85 percent of carriers' letters delivery sequenced, the range assumed in its decision analyses.
However, Service officials told us that the Service must overcome various mail flow problems that have impeded increasing the number of delivery sequenced letters. They said that these mail flow problems include not capturing all letters that could have been delivery sequenced, underused mail processing resources, automated equipment not yet deployed, and barcode readability and accuracy problems. In a 1996 report, the Inspection Service found that mail processing plants were not capturing all of the letters that could be delivery sequenced because mail processing employees were not following standard operating procedures and proper mail flows. For example, employees misdirected letters to operations that bypassed automated equipment, or they did not run initially rejected letters through automated equipment a second time, which may have resulted in these letters not being delivery sequenced. The Inspection Service attributed these problems to supervisors not properly monitoring employee work habits and inadequate employee training. Also, the Inspection Service identified a lack of coordination between mail processing plants and delivery units to resolve DPS-related mail processing problems. Service officials at headquarters and the field locations we visited identified other mail flow problems. The officials said that mail processing resources, such as automated equipment, were not always fully utilized. At the time of our review, some mail processing plants had not yet received scheduled deployments of remote barcode systems and optical character reader enhancements, which resulted in some plants not generating sufficient numbers of delivery sequenced letters. However, as of November 1997, all remote barcoding systems and optical character reader enhancements were deployed, which should help alleviate this problem.
While deploying automated systems, equipment, and enhancements, the Service has reduced its number of mechanized letter sorting machines from about 850 in 1994 to 100 in the first quarter of fiscal year 1998. A Service official said that removing these machines increases the letter volume available to be processed on automated equipment, which leads to increased DPS volume. Also, the Service reported that plants were experiencing barcode readability or accuracy problems caused by factors such as envelope design, print quality, incorrect barcodes, or mechanical problems, which caused barcode sorters to reject more letters than expected. At the locations we visited, Service officials have taken steps to determine the cause of these problems. For example, one district identified 100 DPS-candidate letters and then tracked the processing of these letters, which district officials believed should have arrived at delivery units in delivery sequence. Of the 100 letters, the officials found that only 72 arrived at the units in delivery sequence. The officials were attempting to identify the reason(s) why the 28 other letters did not arrive at the units in delivery sequence so they could take corrective action. At another district, local officials found that delivery units were not receiving about 5,000 to 10,000 letters early enough each morning for these letters to be delivery sequenced on the units’ carrier sequence barcode sorters. The letters that were not delivery sequenced had to be manually sorted by carriers, which the Service said increased their office time and adversely affected DPS savings. The Service has taken actions to address these mail flow problems. Some of the more significant actions include the establishment of DPS implementation teams to, among other things, serve as links between mail processing and delivery unit operations regarding DPS issues. 
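The district's letter-tracking test described above amounts to a simple capture-rate calculation. The sketch below is illustrative only; the 100-letter sample and 72 in-sequence arrivals come from the report, but the code itself is not part of the district's procedure:

```python
# Sketch of the district's DPS letter-tracking test: of 100 candidate
# letters tracked through processing, 72 arrived at delivery units in
# delivery sequence (figures from the report; code is illustrative).
letters_tracked = 100
arrived_in_sequence = 72

capture_rate = arrived_in_sequence / letters_tracked
missed = letters_tracked - arrived_in_sequence
print(f"DPS capture rate: {capture_rate:.0%} ({missed} letters to investigate)")
```

A capture rate of 72 percent means the remaining 28 letters are the ones district officials sought to trace for corrective action.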
District managers have scheduled regular meetings between mail processing and delivery unit managers to improve their communication and coordination, resolve problems, and increase DPS volumes. For example, in one district, teams of mail processing and delivery unit managers have discussed operating goals, identified initiatives to achieve these goals, set joint targets for DPS percentages, and raised their percentages over the previous year. Service headquarters officials have developed a training course to familiarize mail processing employees with the new mail flows that remote barcoding systems create. In addition, in 1994, about 23 million delivery points (i.e., apartments, offices, or suites) within multioccupancy buildings, which account for about 19 percent of the total deliveries, could not be delivery sequenced using the current barcoding rules. Because the 11-digit barcode enables delivery sequencing only to the street address of a multioccupancy building, letters with secondary address information, such as an apartment, office, or suite number, require additional carrier handling to be manually sorted to the appropriate units within the building. As a result, DPS has not been implemented on many routes that have high densities of multioccupancy buildings in urban areas such as New York City and Chicago or other areas that have similar-style addresses. For example, at a delivery unit we visited, DPS was not implemented on some of the unit's carrier routes due to the large number of apartment buildings and trailer parks that these routes served. When it initially implemented DPS, the Service deferred the delivery sequencing of letters addressed to units within multioccupancy buildings due to the complexity involved in interpreting secondary address information and because it believed that implementing DPS for these units would not be cost effective.
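The two multioccupancy figures above imply a rough nationwide total. This is a back-of-the-envelope derivation from the report's rounded numbers, not a figure the report states:

```python
# Illustrative back-of-the-envelope: if about 23 million multioccupancy
# delivery points represent about 19 percent of total deliveries, the
# implied nationwide total is roughly 23M / 0.19, or about 121 million.
multioccupancy_points = 23_000_000
share_of_total = 0.19

implied_total = multioccupancy_points / share_of_total
print(f"Implied total deliveries: about {implied_total / 1e6:.0f} million")
```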
The Service now plans to revise the barcoding rules that business mailers must follow to receive the automation discount. The Service estimates that this change will enable the delivery sequencing of up to 95 percent of the apartments, offices, and suites in multioccupancy buildings in 1998 to 1999. To make the change to enable multioccupancy delivery sequencing, both the Service and mailers will, among other things, need to modify the software that they use to barcode letters. Also, mailers are to be required to update their address files with complete and accurate secondary address information. Although the Service has initiated efforts to implement the revised rules, actual implementation will depend on whether mailers accept these rules and whether the potential technical problems associated with these revisions can be resolved. The Service justified its investment in DPS on the basis that the automated sorting of letters in delivery sequence would reduce the time carriers would have to spend in the office manually sorting letters and increase the proportion of their time on the street actually delivering the mail. This increase in street time was to expand the size of carrier routes and ultimately reduce the number of routes that would be needed. The Service recognized that DPS would likely increase the time carriers needed to perform some operations on the street, which were formerly done in the office. However, the Service believed this increase would be minimal and that DPS would not otherwise have a significant adverse effect on carrier street efficiency (the number of deliveries carriers make per hour) and DPS savings. The Service has achieved in-office carrier workhour savings with DPS implementation. However, part of these in-office savings was offset by a nationwide decline in city carrier street efficiency.
On DPS routes, the Service believes that the decline in street efficiency was (1) greater than it had anticipated from DPS work methods and (2) at least partially due to route adjustments that were less timely and accurate than expected. While the Service believes that other factors, not related to DPS, have primarily caused declining city carrier street efficiency, NALC officials believe that much of this decline is caused by DPS work methods. The Service has initiated efforts to improve the timeliness and accuracy of route adjustments and to address what it believes to be the causes of declining street efficiency. In some cases, additional street time was needed to handle DPS mail during delivery. Service headquarters officials said that DPS should have only a minor impact on carriers’ street time. According to field officials, however, carriers needed an additional 10 to 15 minutes to deliver DPS mail. The additional street time was needed for handling and preparing DPS letters on the street, tasks that carriers formerly had done in the office. For example, prior to DPS, carriers sorted letters addressed to units within certain multioccupancy buildings in the office; under DPS, they typically sort these letters while they are on the street. Some additional time was therefore both accounted for in the Service’s projected DPS savings and factored into the carrier route adjustments made to implement DPS. Despite the additional time that was factored in, the Service reported that while all 11 postal areas’ in-office efficiencies increased during fiscal year 1997, their street efficiencies all decreased. This decline occurred on both DPS and non-DPS routes. In June 1997, the Service estimated that nationally, for every hour gained in office efficiency due to DPS, about 20 minutes were lost in street efficiency. The Service is concerned about the effects this unexpected decline in street efficiency is having on DPS savings.
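The June 1997 estimate above implies that roughly a third of each in-office hour saved was being given back on the street. The arithmetic can be sketched as follows (the 300-hour district figure is hypothetical, used only for illustration):

```python
# Illustrative only: net DPS savings when street efficiency declines.
# The Service estimated that for every hour gained in office efficiency,
# about 20 minutes were lost in street efficiency.

def net_dps_savings(office_hours_saved, street_minutes_lost_per_hour=20):
    """Return net workhours saved after street-time erosion."""
    street_hours_lost = office_hours_saved * street_minutes_lost_per_hour / 60
    return office_hours_saved - street_hours_lost

# Hypothetical example: a district that saves 300 in-office hours in an
# accounting period keeps only two-thirds of them.
print(net_dps_savings(300))  # 200.0 net hours saved
```

At the estimated erosion rate, in-office gains still produce net savings, but about one-third smaller than the in-office figures alone would suggest.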
For example, the Service said that in-office savings are eroded to the extent that carrier street time is not efficiently used delivering mail. Further, the decrease in street efficiency reduces the opportunity to expand the size of DPS routes to offset the growth in deliveries, as originally intended. According to the Service, timely and accurate route adjustments have not always been made to city carrier routes to capture the in-office time DPS saves by increasing the number of deliveries. For example, one district manager told us that the lack of timely and accurate route adjustments has been one of the most significant problems affecting the district’s ability to capture DPS savings. The Service attributes this problem to the lack of resources or expertise to perform route inspections, data problems, or lack of management initiative at the local level. According to Service guidance, a route adjustment generally involves changing a carrier’s route workload through proportionate increases or decreases in office and street time to produce an efficient route that has a workday as close to 8 hours as possible. The guidance allows route adjustments to be made with and without route inspections. Route adjustments based on a route inspection normally involve a manager observing the carrier’s in-office and street work for 1 day or more, counting and recording the mail that the carrier handles, and recording the time the carrier uses to perform each function. Service officials said that route inspections are difficult to schedule and perform because they require skilled personnel, who usually must be diverted from their normal duties, and take about 30 hours per route. Route adjustments made without route inspections are referred to as minor route adjustments. Service guidance allows managers to make route adjustments as often as necessary to, among other things, provide assistance or add deliveries.
Managers make minor route adjustments using in-office and street-time data, numbers of possible deliveries, and the latest route inspection data. The lack of route adjustments prior to DPS implementation was considered a problem. Service officials said that from about 1975 through 1990, the Service performed few route inspections, due largely to the unavailability of staff resources. In commenting on our draft report, the President of NAPS did not agree that the lack of resources was the reason why few route inspections were done. Rather, he believed that the Service curtailed route inspections for two reasons. First, multiple position letter sorting machines were being heavily used at that time to process letters, and the clerks who operated these machines had to memorize carrier-route schemes, which contained a significant amount of address information. Route inspections led to route adjustments and scheme changes, which required clerks to relearn portions of the schemes, a task that was both complicated and expensive. Second, scheme changes were also expensive for business customers who presorted their mail to bypass the Service’s mail processing operations. In preparation for DPS implementation in 1993, the Service and NALC agreed that initial route adjustments would be based on current route inspection data, which were generally collected within the previous 18 months. According to the agreement, route inspections were to be performed on each route to provide data on city carriers’ in-office and street performance and mail volume to prepare for DPS implementation in a delivery zone. However, as DPS implementation proceeded, the Service said that field offices had continuous difficulty performing and funding the required route inspections. The Service acted to address this problem.
According to Service officials, in 1995, the Service made funds available to its field offices to perform about 50,000 route inspections, provided more training to managers on performing inspections, and allowed field offices to hire contractors to perform inspections. Although the required inspections were eventually performed and route adjustments were made to implement DPS, according to Service officials, some route adjustments were not accurate. Service officials also said that DPS route adjustments that were made did not always result in accurate assessments of workload requirements because the adjustments were based on potential in-office savings before carriers had experience with DPS. To illustrate this situation, the Service recently gathered preliminary data on many routes, indicating that DPS route adjustments had not added enough deliveries to routes to increase street time and compensate for reductions in office time. A Service official said that these routes with insufficient workloads contribute to the decline in street efficiency as carriers naturally use all the time they have available in delivering the mail. According to the Service, once DPS is implemented within a delivery unit, minor route adjustments are critical in capturing potential DPS savings because as DPS volume increases, route workload should be adjusted by removing office time and proportionally increasing street time by adding deliveries. Area, district, and local managers said that whenever possible, delivery unit managers should take the initiative to make minor route adjustments, which can be made without a route inspection. If this is not done, they said that the benefit of the office savings can evaporate as carriers expand their street time to fill their 8-hour workdays. The Service believes that several factors in addition to route adjustments have contributed to the decline in city carrier street efficiency (the number of deliveries carriers make per hour). 
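The adjustment logic described above, removing office time and proportionally adding street time by adding deliveries so the route stays near an 8-hour workday, can be sketched as a small illustrative calculation (all route figures below are hypothetical, not from the report):

```python
# Illustrative sketch of a minor route adjustment: when DPS removes
# in-office sorting time, deliveries are added so street time grows
# and the route stays as close to an 8-hour workday as possible.

def deliveries_to_add(office_hours_saved, deliveries_per_street_hour):
    """Extra deliveries needed to convert saved office time into street time."""
    return office_hours_saved * deliveries_per_street_hour

def adjusted_route(office_hours, street_hours, office_hours_saved,
                   deliveries_per_street_hour):
    """Return (new_office, new_street, added_deliveries) for an 8-hour route."""
    new_office = office_hours - office_hours_saved
    new_street = street_hours + office_hours_saved
    added = deliveries_to_add(office_hours_saved, deliveries_per_street_hour)
    return new_office, new_street, added

# Hypothetical route: 3 office hours and 5 street hours; DPS saves 1 office
# hour on a route averaging 40 deliveries per street hour.
print(adjusted_route(3.0, 5.0, 1.0, 40))  # (2.0, 6.0, 40.0)
```

If the added-deliveries step is skipped, the saved office hour simply stretches the existing street workload, which is the evaporation of savings that managers described.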
However, the Service does not believe that DPS work methods have caused a decline in city carrier street efficiency because the additional time needed to handle and prepare DPS letters on the street was to be factored in when routes were adjusted to implement DPS. According to Service officials, part of the decline in city carrier street efficiency is due to the work habits of many city carriers that have no direct connection with DPS. The officials believe that many carriers are not using the most efficient work methods and need closer supervision. The officials do believe, however, that DPS has had an indirect effect on the decline in street efficiency. In their view, some city carriers, who did not fully support DPS, slowed down their delivery or did not take advantage of opportunities to increase efficiency afforded them by DPS. For example, some city carriers did not use the sorting method that would make mail easier to carry on their individual routes. Further, the officials believe that factors unrelated to carrier work habits, such as increases in the volume of priority packages and longer driving distances to high growth areas, also are contributing to the decline in carrier street efficiency. However, NALC officials do not fully agree with the Service on the extent to which carrier work habits contribute to declining street efficiency. Further, NALC officials believe that much of the decline is attributable to DPS work methods. NALC officials and many city carriers believe that street efficiency is being adversely affected by DPS work methods, such as not being able to manually sort DPS mail in the office and the additional time needed to handle the extra bundle associated with DPS, which slows city carrier delivery. The fact that city carrier street efficiency is declining on both DPS and non-DPS city carrier routes would suggest that factors other than DPS are contributing to the decline. 
However, definitive data on the causes of the decline are not available to determine whether DPS work methods are adversely affecting city carrier street efficiency to a greater extent than the Service initially anticipated. Notwithstanding the NALC’s views, the Service has several efforts under way to deal with city carrier street efficiency. These efforts are intended to increase the street supervision and monitoring of city carriers to ensure that carriers deliver mail at an appropriate pace and do not waste time during delivery. Service officials said that to improve supervision, each accounting period, headquarters delivery managers prepared a list of each area’s post offices with the lowest street efficiency and requested that these offices be targeted for management attention. The officials also requested that area and district managers implement street management programs to, among other things, identify the most inefficient carriers at each delivery unit and develop corrective action plans. In 1995, the Service initiated the enhanced street performance program to improve delivery service through the use of data collection and communication technologies on the street. Among other benefits, these technologies are to assist in the overall management of street performance for consistency of delivery times and verification of carrier street times. One of the technologies being used is a satellite monitoring system installed in carriers’ delivery vehicles to enable supervisors to track carriers’ locations. In 1996, the Service began testing this program at 11 locations. The Service also began the Delivery Redesign initiative in 1995 to improve delivery efficiency and city carriers’ work environment. One aspect of the initiative is to provide greater incentives for city carriers to work efficiently by changing the way they are compensated.
However, the Service must first obtain NALC’s agreement to test compensation alternatives, and NALC has not yet agreed to a test because it considers compensation an issue that is better addressed in the collective bargaining process. Other aspects of the initiative include revising the city carrier delivery process and developing new performance standards for city carriers. According to Service officials, under article 34 of the National Agreement, the Service has the authority to test these changes. Accordingly, in 1997, the Service began testing two approaches: (1) city carrier delivery process changes, such as a team delivery concept that separates the manual sorting and delivering of mail among a group of city carriers, under which carriers would elect to either sort the mail or deliver it, according to their abilities and preference; and (2) new carrier performance standards that consist of standard time allowances for city carrier office and street activities, which would be used to structure routes and monitor city carrier performance. The Service is conducting these tests at 19 locations and expects them to be completed by the spring of 1999. Like rural carriers, city carriers said that they want the option to manually sort their DPS letters with non-DPS letters and flats while in the office. Of the 111 city carriers we interviewed, 57, or about 51 percent, said that they were satisfied with the concept of less sorting, which DPS provides. However, 86 city carriers, or about 77 percent, said that they believed not being allowed to sort DPS letters in the office decreased their street efficiency. NALC officials said that in some situations, especially where DPS volume is no higher than 50 percent, city carriers want to sort DPS letters in the office to improve street efficiency by eliminating the extra bundle and reducing the sorting and handling of undeliverable letters while on the street.
These officials said that on routes with large numbers of multioccupancy deliveries, carriers’ efficiency was also reduced by having to manually sequence the DPS letters for individual apartments or suites while on the street. These officials also said that substitute carriers, who are not as familiar with the customers and addresses as are the regular carriers on the route, have more of a tendency to incorrectly deliver DPS letters because they do not easily recognize undeliverable letters during delivery. For example, the substitute carrier might not recognize that the addressee on some DPS letters has moved. Service officials said that they believed efficiency would decrease overall if city carriers were allowed to sort DPS letters while in the office. The officials said that they believed many city carriers would not sort DPS letters efficiently because the existing standard for manually sorting random letters requires city carriers to sort only 18 letters per minute and 8 flats per minute. While the officials recognize that many carriers exceed these standards at their own discretion, they are not required to do so. The officials also said that they do not believe DPS should make delivery more difficult for carriers, and if carriers use the most efficient sorting method for their routes and follow standard delivery procedures, they should not have problems. Compared with city carriers, rural carriers are more satisfied with DPS because they are allowed to manually sort DPS letters in the office. Of the 31 rural carriers we interviewed, 29 said that they were satisfied with DPS primarily because sequenced letters are easier and faster to sort or because they like having less sorting to do. However, seven of the rural carriers said that they believed DPS had decreased their street efficiency. 
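The Service’s concern can be illustrated with the manual-sort standard cited above: at 18 letters per minute, casing even a modest volume of DPS letters consumes substantial office time. A minimal sketch (the 450-letter tray size is a hypothetical example, not a figure from the report):

```python
# Illustrative only: office time consumed if carriers manually cased DPS
# letters at the existing manual-sort standard of 18 letters per minute.

def casing_minutes(letters, letters_per_minute=18):
    """Minutes to manually case a given number of letters at the standard rate."""
    return letters / letters_per_minute

# A hypothetical 450-letter DPS tray would add about 25 minutes of office
# time per route per day if cased at the standard rate.
print(round(casing_minutes(450)))  # 25
```

This is why officials expected in-office sorting of DPS letters to reduce overall efficiency: carriers exceed the standard only at their own discretion, so projected office time would have to assume the standard rate.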
Also, the President of the Rural Carriers said that his members were concerned about the reduction in their salaries due to DPS but that the Service has tried to add deliveries to the affected routes to compensate for the office time eliminated. In addition to operational issues, the Service is also addressing those concerning labor-management relations, which also impede its efforts to achieve DPS goals and benchmarks and maximize savings. These issues include poor working relationships with NALC over DPS implementation and insufficient city carrier support for DPS work methods. DPS implementation involved three areas of contention: (1) the work methods that should be used by city carriers to implement DPS; (2) the manner in which the Service implemented DPS, viewed by NALC as inconsistent with the 1992 joint agreements; and (3) DPS’ effect on city carrier street efficiency. Many of the city carriers’ disagreements with DPS resulted in grievances, some of which led to national arbitration cases. These issues and the Postal Service’s efforts to address them are discussed as follows. In September 1992, the Service and NALC jointly reached several agreements to resolve past disputes and implement DPS on city carrier routes. However, some of the 1992 joint agreements became problematic with actual implementation of DPS, and the parties were unable to reach agreement on solutions. The Service subsequently issued instructions to the field, which NALC believed were inconsistent with the 1992 joint agreements. Differences in opinion over the instructions, as well as the meaning of the work methods and transitional employee agreements, generated many grievances at the national and local levels. Although most grievances were resolved through settlement, three went to national level arbitration. During 1996, a national arbitrator ruled on one of the cases and found in favor of the Service. 
In 1997, another national arbitrator ruled in favor of NALC on the two remaining cases. The arbitrator determined that the Service had violated either provisions of existing labor agreements or the 1992 joint agreements and instructed the Service and NALC to jointly determine alternative methods to resolve their differences. The parties are conducting a study to address the issues involved in one case and are working together to reach agreement on how to proceed to resolve the other case. In addition, to improve their overall working relationship, on October 20, 1997, the Service and NALC signed an agreement to test a revised dispute resolution process aimed at narrowing areas of dispute and effectively and constructively resolving disagreements. In 1992, the Service and NALC published six Memoranda of Understanding, or joint agreements, which were to resolve past disputes and set a joint course for the future. The six agreements are summarized as follows:

Case Configuration - Letter Size Mail. Defined letter-sized mail and authorized the use of four- or five-shelf letter cases and route inspections based on these cases. (A letter case is a piece of equipment that contains separations, or pigeonholes, into which carriers manually sort letters and other mail, such as magazines and papers.)

Hempstead Resolution. Remanded all pending grievances and selected route adjustments to the local parties for resolution and provided guidance for resolving the grievances. This resolution was based on a national level arbitrator’s finding that the Service, in anticipation of future DPS route adjustments, had improperly established city routes that required more than 8 hours to complete, expecting the adjustments to reduce these routes to 8 hours.

The Future - Unilateral Process. Provided procedures for management to plan, estimate the impact of, and implement DPS-related route adjustments.

The Future - X-Route Process. Provided procedures, as an alternative to the unilateral process, for local parties to jointly plan to adjust and realign identified routes when the delivery unit had achieved the final DPS target volume.

Delivery Point Sequencing Work Methods. Authorized two methods carriers are to use to sort non-DPS letters and “flats” (large envelopes, magazines, and catalogs) and bundle them for delivery.

Transitional Employees. Resolved past disagreements regarding the hiring and use of transitional employees within the carrier craft.

The agreements stated that a successful transition to DPS was the responsibility of local postal managers and union representatives, who were to collaboratively resolve problems. The Service and NALC jointly provided DPS training to field units to prepare carriers and local managers for implementation. In an October 5, 1995, instruction to area vice presidents, the Service reiterated the importance of field compliance with headquarters’ DPS policies and the joint agreements. Managers were cautioned not to enter into local labor agreements that violated the joint agreements or Service policies. However, NALC officials said, and the Postal Inspection Service reported, that as DPS implementation proceeded, some local agreements and management decisions violated national agreements and policies, causing large numbers of local grievances to be filed by carriers. A subsequent Service headquarters instruction to the field stated:

“As you are aware, we have been unable to reach agreement with the NALC on updating the Memorandums of Understanding concerning DPS implementation. Attached . . . are instructions which explain how to move forward on DPS . . . which are effective immediately.”

Headquarters officials, in a subsequent plan concerning DPS implementation, stated that failure to gain a new agreement with the NALC had left delivery units in various stages of development in their plans to capture savings.
For example, DPS volumes that remained below the targets set by many units had delayed implementation and allowed carriers to continue sorting DPS letters, delaying the capture of workhour savings. The Service said that its instructions to field units mirrored the 1992 joint agreements. One of the instructions advised managers to base calculations of DPS volume, for purposes of meeting the targets, on weekly averages. The Service believed that this aspect of the joint agreements was not negotiated and was left open for managerial discretion. However, NALC contended that the joint agreements had been reached based on an understanding that target volumes would have to be met for 12 consecutive delivery days. Therefore, NALC said that the Service violated the joint agreements by unilaterally advising its managers to use a method that the parties had not agreed upon. Another disagreement arose when NALC challenged the Service’s interpretation of a 1992 joint agreement involving the DPS work methods carriers were expected to use. Carriers traditionally have used one of several sorting methods to prepare mail for delivery, resulting in either all the mail sorted together and carried as one bundle, or letters sorted separately from larger pieces, called flats (e.g., magazines), and carried as two bundles. Factors such as the number and type of deliveries (e.g., apartments or commercial buildings) and the amount of walking versus driving between deliveries can influence which sorting method is chosen. The Service’s DPS instructions required carriers to pick up trays of DPS letters and load them into their vehicles for delivery along with their trays of manually sorted mail. During delivery, carriers select mail from the trays of letters and flats at each delivery point or select and carry letters and flats in their hands as separate bundles while walking portions of the route.
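The two disputed methods for deciding whether a unit has met a DPS volume target can be stated precisely. In this sketch (the daily percentages are hypothetical), NALC’s reading requires every delivery day to reach the target, while the Service’s instruction requires only the weekly average to reach it:

```python
# Illustrative comparison of the two disputed methods for deciding
# whether a delivery unit has met a DPS percentage target.

def meets_target_daily(daily_percentages, target):
    """NALC's reading: every delivery day must reach the target."""
    return all(p >= target for p in daily_percentages)

def meets_target_weekly(daily_percentages, target):
    """The Service's instruction: the weekly average must reach the target."""
    return sum(daily_percentages) / len(daily_percentages) >= target

week = [72, 75, 68, 74, 73, 76]  # hypothetical daily DPS percentages
print(meets_target_daily(week, 70))   # False: one day fell below 70
print(meets_target_weekly(week, 70))  # True: the average is 73
```

Under the daily method, a single low-volume day blocks the target, which is why the Service argued that normal daily fluctuations made daily targets unattainable.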
The DPS-sorting methods authorized in the joint agreements result in either two or three bundles of mail, in addition to certain types of unaddressed advertising mail delivered to every address. The two authorized DPS sorting methods are as follows: (1) sort non-DPS letters with the flats into the case, then pull down from the case and carry the combined flats/non-DPS letters as one bundle and DPS letters as a second bundle; or (2) sort into and pull down from the case non-DPS letters separately from the flats, and carry DPS letters as a third bundle. Under the joint agreements, selection of the most efficient method for each route was to be made jointly by local managers and NALC representatives. For example, the parties could agree that carrying two bundles was more efficient on park and loop routes, which require walking between deliveries, or that a third bundle was more efficient for motorized curbside delivery. The Service modeled carrier efficiency using different methods and at different DPS volumes and found that the two-bundle method generally was the most efficient to use with relatively high DPS volume. However, NALC and the Service disagreed about the relative efficiency of the methods and their impact on carriers. NALC officials told us that they know automation, including DPS, is inevitable and necessary to increase postal efficiency. However, the NALC officials disagreed with the Service’s proceeding to implement DPS using the revised instructions to the field, which NALC believed violated the joint agreements. NALC officials also said that both the overly optimistic expectations of high DPS volumes early in the program that did not materialize and managers’ efforts to implement DPS and capture savings resulted in lasting disappointment and frustration among some carriers.
However, they agreed with Service managers that some of the carriers’ concerns regarding DPS will diminish if their DPS volume approaches the higher percentages of total letters that the Service expects to achieve. Following is a brief summary of two DPS national arbitration cases related to the 1992 joint agreements and one national arbitration case concerning the Service’s subsequent instructions on calculating DPS volumes. In one case, an arbitrator ruled that, as NALC contended, unaddressed advertising mail, a type of flat mail, constitutes a fourth bundle for carriers who have elected to use the three-bundle sorting method on park and loop routes. The parties’ current labor agreement limits to three the number of bundles such carriers can be required to carry. The Service had maintained that if unaddressed flats were carried behind the flats bundle, they did not create a fourth bundle. The arbitrator required the parties to reach agreement on an alternative to the authorized three-bundle method when unaddressed flats are present on the affected routes. The parties agreed to conduct a joint study of the DPS work methods to determine which is the most efficient method and how best to handle unaddressed flats. The parties agreed to complete the joint study by April 30, 1998. In a second case, the arbitrator decided that the Service had not violated the agreements on the use of transitional employees. NALC believed that a ceiling existed on the number of hours per week these transitional employees could work and that the Service had ignored the ceiling. NALC also believed that these employees were hired into a particular delivery unit and had been improperly reassigned to work in another unit. The Service maintained that there was no ceiling on workhours once transitional employees had been properly hired and that there was no prohibition against reassigning them as needed.
In a third case, an arbitrator concluded that the Service had violated the 1992 joint agreements by not obtaining NALC’s concurrence on revising the method for calculating DPS volumes that the Service advised managers to use in its DPS implementation instructions. The Service began using average weekly—rather than daily—DPS volume because certain fluctuations in daily volume made it impossible to reach DPS percentage targets every day. However, the arbitrator also found that the daily volume method in the original agreement was counter to achieving DPS savings and instructed the parties to work together to determine an alternative method. In the interim, the Service was allowed to use its averaging method. As of March 1998, the parties had not yet reached agreement on an alternative method for calculating DPS volume. In addition to their concerns about the Service’s noncompliance with national NALC-Service labor agreements, many city letter carriers said that they believe DPS work methods—particularly not being able to manually sort DPS letters to (1) combine them with the non-DPS bundle and (2) identify DPS sort errors and undeliverable letters—adversely affected their efficiency and, in some cases, service to their customers. NALC officials agreed with city carriers that the additional bundle of letters created by DPS and the single bundle made up of different sized mail pieces can be awkward for carriers to handle during delivery. Service officials we contacted, however, believe the different work methods are necessary to capture DPS savings and should have only minimal impact on carriers’ ability to deliver mail. The officials do not believe the concerns raised by carriers represent a significant adverse effect on customer service. 
However, rural carriers we interviewed were more satisfied with DPS work methods than were city carriers because rural carriers were allowed to manually sort DPS letters and combine them with non-DPS mail before leaving the office to deliver the mail. In commenting on our draft report, the President of NAPS said that he believed city carriers’ concerns about DPS work methods are greatly exaggerated because carriers have always been required to check addresses on the mail between delivery stops to identify mail that is undeliverable. Therefore, checking DPS letters to find undeliverables should not be much different from the work methods used prior to DPS. The President said that while carrying more than two bundles of mail has some detrimental effect on carriers’ ability to deliver mail, some carriers are using DPS as an excuse to extend their street time, delay prompt return to the office, and thus avoid having to perform additional work until their 8-hour day ends. From a letter carrier’s standpoint, an important advantage of manually sorting mail is to identify mail that cannot be delivered. Carriers historically take pride in identifying and redirecting such mail for further processing before leaving the office to begin delivery, with the knowledge that they will deliver all the mail they take to the street each day. Since city carriers must take DPS letters to the street without sorting or inspecting them, they must identify and remove any undeliverable letters while making deliveries. Although managers at the delivery units we visited did not know the number of DPS sort errors carriers found each day, they said that errors occur because of incorrect addresses, mechanical problems, or human error. Most carriers we interviewed said that they were concerned about DPS sort errors and their effect on street efficiency and service to customers. 
However, Service officials said that while some problems do occur, they believe only a small percentage of DPS letters experience sort accuracy problems. Overall, officials believed sort accuracy was acceptable. While the Service does not routinely collect nationwide data on DPS sort errors, the units we visited were starting to collect data on sort errors on a daily basis. Service and NALC officials we contacted agreed that DPS technology is highly effective but that errors sometimes occur due to incorrect addresses, mechanical problems, or employee error. Carriers said that they usually receive some sort errors. Sort accuracy was the concern most often cited by carriers we interviewed; 55 carriers, or 39 percent of those we interviewed, said that finding sort errors during delivery was a problem. The number and type of sort errors can vary from day to day. Of the 142 carriers we interviewed, 125, 131, and 136 carriers, respectively, estimated they received an average of fewer than 10 letters a day missorted, missequenced, and missent. However, the remaining carriers estimated that they received 11 or more letters a day in at least 1 of the 3 sort-error categories. For example, 14, 9, and 5 carriers estimated that they received an average of 11 to 20 letters each day missorted, missequenced, and missent, respectively. Although it appears that sort errors represented a small proportion of carriers’ total DPS letters, DPS sort errors might cause carriers to (1) backtrack on their routes to deliver missequenced letters or (2) bring letters back to the office at the end of the day if they cannot be delivered. NALC officials and the carriers said that service to their customers is sometimes delayed by at least 1 day if these letters must be reprocessed for delivery. After the initial 98-percent accuracy threshold was met on DPS routes, there was no formal requirement to track subsequent accuracy, and the Service does not collect nationwide data on DPS-sort accuracy.
Rather, it relies on carriers to report sort errors each day so that delivery units can coordinate with mail processing operations to correct them. NALC officials said that errors sometimes occurred despite carriers’ and delivery units’ reporting them. Some of the units we visited were beginning to record the number and category of DPS-sort errors that carriers reported each day so that corrective action could be taken. One district had analyzed these data, collected over several weeks, and found a DPS-sort error rate of less than 1 percent. These errors were often letters missorted to the wrong route because of mechanical or maintenance problems or because necessary changes to computerized sort plans for routes had not been entered. Carriers often receive DPS letters that are not deliverable. Undeliverable mail includes forwards resulting from change-of-address requests, vacation holds, and mail sorted incorrectly to the wrong route. The Service receives about 40 million change-of-address requests each year and forwards customers’ mail to their new addresses for 12 months. This results in carriers receiving letters for customers who no longer live at addresses on their routes, and these letters must then be reprocessed for delivery to each customer’s new address. The units we visited did not track the number of undeliverable DPS letters carriers brought back to the office each day. The carriers we interviewed said that the number varied from day to day. Of 134 carriers who estimated their average number of undeliverable DPS letters, 60 carriers, or about 45 percent, estimated they had up to 25 undeliverable letters a day; 41 carriers, or about 30 percent, estimated having between 26 and 50 letters a day; and 33 carriers, or 25 percent, estimated having more than 50 such letters a day, which includes 9 carriers, or 7 percent, having more than 100 letters a day.
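As a consistency check, the percentage breakdown reported for the 134 carriers can be recomputed from the raw counts. A minimal sketch; the band labels are our shorthand, and the report rounds the middle share to "about 30 percent":

```python
# Recomputing the undeliverable-letter breakdown from the reported counts.
# Counts come from the report; the band labels are ours.
counts = {"up to 25": 60, "26-50": 41, "more than 50": 33}
total = sum(counts.values())  # 134 carriers who gave an estimate
shares = {band: round(n / total * 100) for band, n in counts.items()}
print(total, shares)  # shares come out near the report's 45, 30, and 25 percent
```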
We found that opinions among Service managers, NALC officials, and carriers differed about the impact that undeliverable letters in carriers’ DPS mail have on service. For example, NALC officials said that they and city carriers believed in some cases DPS was delaying delivery of forwarded letters by at least 1 day. NALC officials attributed the delay to the fact that carriers were returning to the office too late in the day for their forwarded letters to be transported to the Service’s Computerized Forwarding System (CFS). Service delivery managers did not believe DPS delayed service and pointed out that the First-Class on-time delivery scores—the Service’s indicator of quality of service to customers—are now higher than they have been in the past. While there was general agreement at headquarters and field units that forwarded letters should be transported to CFS the same day carriers received them, this was not the case at two of the six delivery units we visited because carriers returned to the office from their routes after that day’s final dispatch of forwarded letters to the CFS. NALC and carrier perception that a 1-day delay in forwarding letters constituted delayed service to customers was not shared by Service headquarters delivery and forwarding system managers. The Service said that its First-Class mail delivery standards for 1-, 2-, or 3-day delivery do not technically apply to forwarded letters because they must be reprocessed for delivery. Furthermore, the standard for processing forwards is that CFS staff is to reprocess each forwarded mail piece and send it to the appropriate delivery unit within 24 hours, beginning when they receive it from the original delivery unit. The Service has a system whereby carriers can request that data be entered into sort programs to have certain letters, including forwards, held out of DPS so they can be identified and rerouted before delivery. 
Of the six delivery units we visited, two allowed carriers to hold out forwards from DPS for 30 days; one allowed forwards to be held out for 2 weeks; one allowed only forwards for temporary moves to be held out; and two did not allow any forwards to be held out of DPS. For example, one delivery unit manager said that his unit did not hold out forwards because the data entry process to do so is difficult; updating sort plans is complicated; and at his unit, managers believe carriers can more efficiently identify forwards while on the street. Likewise, headquarters delivery managers said that they did not believe forwards should necessarily be held out of carriers’ DPS mail and that carriers should adjust to handling forwards during delivery. In contrast, NALC officials said that carriers do not like to handle forwards while on the street and then bring them back to the office for reprocessing. NALC officials said that if the Service could develop an automated system to identify and remove change-of-address mail so that it is not included with carriers’ DPS mail for delivery, most of the problems with DPS would be eliminated. However, these officials recognize that the Service, although attempting to do so, has not yet developed such a system. 1. We do not agree with NALC’s assessment that our report is a repackaging of postal management’s excuses for missing its DPS implementation schedule. As requested by the Subcommittee, the report describes the status of the Service’s efforts to implement DPS, including slippages and reasons for them. The report discusses the Service’s overly optimistic DPS expectations, the changes the Service made to its goals and benchmarks for completion of the program, the current shortfalls compared with the Service’s fiscal year 1998 goals, and the issues the Service will need to address to achieve the goals.
NALC also commented that regarding specific statements in the report attributed to Service managers, our review lacked critical scrutiny of the managers’ opinions, with which NALC does not agree. These statements concerned the Service’s assertions that (1) DPS should not cause a decrease in street efficiency, (2) the Service does not have complete data to measure the percentage of letters carrier routes receive in delivery sequence, (3) DPS is responsible for workhour reductions, and (4) DPS does not adversely affect carriers’ efficiency or customer service. We disagree with NALC’s assertion that we accepted Postal Service managers’ opinions without scrutiny. For each area about which NALC expressed concern, we attempted to obtain data addressing the relevant issues. However, sufficient data were not readily available. Therefore, in addition to obtaining and attributing the views of Postal Service headquarters managers, we obtained and attributed the views of managers and letter carriers at the field locations we visited as well as the views of the Service’s major unions and management associations, including NALC. Furthermore, we included a separate section in the report that discusses many of the specific concerns city carriers and NALC officials conveyed to us during interviews so that a balanced view of DPS would be presented. 2. NALC expressed the belief that in our reporting of selected city carrier national level arbitration cases, we casually accepted the independent arbitrators’ repeated findings that the Service violated its contract with NALC during DPS implementation. We understand that NALC and the Service have been at odds and that, in NALC’s view, the arbitrators’ findings support its position. However, our intent was to objectively present the events that occurred and their effects on DPS implementation, which we believe is reflected in the report.
In the report, we noted that the Service has lost two national arbitration cases involving DPS implementation. We explained that the arbitrators affirmed NALC’s position that (1) unaddressed advertising mail constitutes a fourth bundle for carriers, which violates the parties’ current labor agreement and (2) the Service’s DPS instructions to the field were inconsistent with certain aspects of the 1992 agreements. To further recognize NALC’s concern, we have added language to the report explaining that NALC filed grievances on the Service’s DPS instructions at the national level, and most issues were settled without arbitration. In addition, NALC questioned the draft report language, which it interpreted as indicating that an arbitrator’s decision caused a delay in the Service achieving DPS workhour reductions. NALC stated that the effective cause of the delay was the Service’s violation of its contract with NALC, not the remedy imposed in arbitration. We have revised the language to clarify this information. 3. NALC suggested that our methodology was flawed because we interviewed and observed the delivery operations of a relatively small number of city carriers. NALC emphasized that the small number of carriers included in our review was inadequate since carriers and their performance were central to measuring the progress of DPS implementation. We believe that our methodology for accomplishing our objectives was sound and point out that evaluating carrier performance was not an objective of our review. As discussed in the report, we judgmentally selected and interviewed as many city carriers as possible given our resource limitations and time constraints. Our intent was not to interview a statistically representative sample of city carriers. Rather, we interviewed these carriers to provide balance and illustrative examples of their views regarding DPS implementation.
We also accompanied two city carriers on their routes to observe their handling of DPS letters and other mail and to better understand their views regarding DPS work methods. We supplemented these interviews and observations with the opinions and illustrative information from NALC national level officials. Although our interviews and observations of city carriers are not statistically representative, their views largely mirrored those of NALC officials. 4. NALC stated that the draft report incorrectly asserts that DPS should not cause a decline in street efficiency. We have deleted this language from the report. However, to provide the views of the Service, the report notes that, according to the Service, DPS would cause only a minimal increase in the time carriers would need to perform some operations on the street, which were formerly done in the office, and that DPS should not otherwise have a significant adverse effect on street efficiency. The report also notes that Service field managers and supervisors as well as the carriers we interviewed as a part of our review told us that, in general, DPS did cause some decline in carrier street efficiency. Gerald P. Barnes, Assistant Director; Hazel J. Bailey, Evaluator (Communications Analyst); Arleen L. Alleman, Senior Evaluator; Rudolfo G. Payan, Senior Evaluator; Robert E. Kigerl, Staff Evaluator.
Pursuant to a congressional request, GAO provided information on the status of the Postal Service's (USPS) efforts to implement Delivery Point Sequencing (DPS), focusing on: (1) USPS goals for DPS implementation, its projected letter carrier workhour savings, and the extent to which the Service has achieved these; and (2) issues that may affect USPS's ability to achieve its 1998 DPS goals, including any actions that USPS has taken to address these issues. GAO noted that: (1) in its 1992 Corporate Automation Plan, USPS initially scheduled DPS implementation to be completed by fiscal-year-end 1995; (2) the 1992 Plan included DPS goals and benchmarks for: (a) DPS equipment deployment; (b) barcoded letter volume; and (c) delivery zone and carrier route implementation nationwide through fiscal year (FY) 1995; (3) in addition, USPS based its analyses that supported investments in DPS sorting equipment on achieving: (a) a certain DPS letter volume to carrier routes; and (b) specific carrier workhour savings; (4) however, implementation fell behind schedule, and USPS acknowledged that it had been overly optimistic in its DPS expectations; (5) in April 1994, the Postmaster General announced that the barcoding goal had slipped from 1995 to fiscal-year-end 1997; (6) in its 1996 Plan, USPS extended the DPS completion date to the end of FY 1998 and revised associated goals and benchmarks; (7) USPS has identified and was addressing several issues that have affected its efforts to achieve its DPS implementation goals, benchmarks, and carrier workhour savings; (8) to increase volumes of barcoded letters and letters sorted in delivery sequence, USPS has taken several actions; (9) while USPS has achieved some success in addressing issues affecting DPS implementation and achievement of DPS goals, it has been less successful in resolving its disagreements with the National Association of Letter Carriers (NALC), the labor union representing city carriers, regarding DPS implementation; 
(10) in 1992, USPS and NALC agreed to work together to implement DPS and signed six memoranda of understanding, which were to resolve past disputes and provided a plan for DPS implementation; (11) not long after the memoranda were signed, disagreements developed between USPS and NALC regarding certain aspects of the memoranda; (12) NALC filed national level grievances regarding DPS implementation instructions, and the parties settled most of their disagreements; (13) however, one disagreement went to national arbitration, and the arbitrator decided in NALC's favor and instructed the parties to work together to resolve their differences; (14) in addition, many city carriers GAO spoke with said that although they generally saw benefits in DPS, they were concerned about its effect on their daily work; and (15) in contrast, USPS officials said that while DPS has changed the way carriers deliver mail, the changes have not adversely affected customer service.
Remote barcoding is a part of the Service’s letter mail automation efforts that began in 1982. In the late 1980s, the Postal Service determined that it needed a system for barcoding the billions of letters containing addresses that cannot be read by the Service’s optical character readers. Remote barcoding entails making an electronic image of these letters. The images are electronically transmitted to remote barcoding sites where data entry operators enter enough address information into a computer to permit a barcode to be applied to the letter. The barcode allows automated equipment to sort letters at later stages in the processing and delivery chain. The Service made a decision in July 1991 to contract out remote barcoding based on a cost analysis that showed that contracting out would result in an expected savings of $4.3 billion over a 15-year period. The Service’s analysis was based on the pay levels and benefits that the Service expected to provide at that time, which exceed pay levels currently expected for in-house work. In November 1993, the Postal Service reversed its decision to contract out the remote barcoding function as a result of an arbitration award. The Service expected that agreeing to use postal employees for remote barcoding would improve its relations with APWU. In 1991, the Service had determined that contracting out was appropriate because (1) the remote barcoding workers would not touch the mail and security of the mail was not at risk, (2) much of the work would be part-time employment and result in lower overall costs, and (3) technological advances in optical character recognition would enable equipment to read this mail and eventually phase out the remote barcoding. As detailed in our earlier report on the Service’s automation program, the Postal Service’s plans for remote barcoding have since changed—it now anticipates increased use of the method with no phase-out date. 
On the basis of the expected total work load equivalent to 23 billion letter images per year and a processing rate of 750 images per console hour, we estimate that the Service will employ the equivalent of at least 17,000 operators for remote barcoding. This is a minimum based on console hours only and does not take into account such other time as supervision, management, and maintenance. In November 1990, the clerk and carrier unions filed national grievances challenging the Service’s plan to contract out remote barcoding services. Subsequent to its July 1991 decision, the Service awarded 2-year contracts (with an option to renew for a 2-year period) to 8 firms for remote barcoding services for 17 sites. In late 1992, additional remote barcoding deployment was put on hold pending the outcome of the grievances, which ultimately went to arbitration. On May 20, 1993, the arbitrator concluded that the Service failed to honor certain contractual rights of postal employees. The decision required the Service to first offer the jobs to those postal employees who were interested in and qualified for the jobs before contracting out for the remote barcoding service. The decision did not require that the jobs be offered to new postal hires, and Postal Service officials believed that an option such as specifying a few sites to be operated by postal employees and contracting out for the remaining ones would have complied with the arbitrator’s decision. On November 2, 1993, the Service agreed with APWU that remote barcoding jobs would be filled entirely by postal employees. In 1994, the Service resumed remote barcoding deployment, opening 14 remote barcoding sites where postal employees are to provide services for 22 mail processing plants. In September 1994, the Service converted two contractor sites serving two plants to in-house centers. 
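The minimum-operator figure above follows from simple division. A hedged sketch: the 23 billion images and 750 images per console hour come from the report, but the 1,800 annual console hours per full-time-equivalent operator is our assumption, since the report does not state the divisor it used.

```python
# Rough reproduction of the report's minimum-operator estimate.
ANNUAL_IMAGES = 23_000_000_000       # expected equivalent letter images per year
IMAGES_PER_CONSOLE_HOUR = 750        # processing rate used in the report
HOURS_PER_OPERATOR_YEAR = 1_800      # assumed console hours per operator per year

console_hours = ANNUAL_IMAGES / IMAGES_PER_CONSOLE_HOUR  # about 30.7 million hours
operators = console_hours / HOURS_PER_OPERATOR_YEAR
print(round(operators))  # on the order of 17,000 operators
```

As the report notes, this counts console hours only; supervision, management, and maintenance time would push the total higher.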
It plans to convert the remaining sites by the end of 1996 and to eventually operate up to 75 centers that would serve 268 plants and process the equivalent of about 23 billion letters annually. Based on cost data provided by the Postal Service, we compared costs incurred during a 36-week period from July 23, 1994, through March 31, 1995, for remote barcoding at the 15 contractor facilities (17 until 2 were converted to in-house operation on September 6, 1994) and the Service’s 14 in-house facilities (16 after September 6, 1994). We estimated that the total direct cost of processing 1,000 images averaged $28.18 at the in-house centers compared to $26.61 at the contractor locations, a difference of 6 percent. The cost difference was the greatest at the beginning of the period when the in-house sites were getting started and stabilized at about a 6-percent difference during the last 3 accounting periods (12 weeks). About 2.8 billion images were processed in the Service’s centers during the 36-week period. We estimated that processing these images in the in-house facilities cost the Postal Service about $4.4 million, or 6 percent more than processing them in contractor-operated sites. The 6-percent difference will increase in the future as required changes in the mix of employees staffing the postal remote barcoding centers occur. The Service uses both career and transitional employees, who earn different wages and benefits. Transitional employees receive $9.74 an hour, Social Security benefits, and earn up to one-half day annual leave every 2 weeks. The career employees start at $11.44 an hour and receive health benefits, life insurance, retirement/Social Security benefits, a thrift savings plan, sick leave, and earn up to 1 day of annual leave every 2 weeks. For the postal remote barcoding sites we reviewed, 89 percent of the workhours were generated by transitional employees. 
By agreement with APWU, no more than 70 percent of the workhours in these centers are to be generated by transitional employees. The Service is working toward this level, and transitional employee workhours are declining while career workhours are increasing as the Service converts and replaces its transitional employees. We estimate that had the required 70/30 ratio of transitional to career employee workhours been achieved for our comparison period, the in-house cost would have been $30.33 per 1,000 images instead of $28.18, for a cost difference of about 14 percent instead of 6 percent. The Service projects that remote barcoding will eventually barcode about 31 billion letters annually. With the remote computer reader expected to reduce the need for keying by about 25 percent, we estimated that remote barcoding centers will eventually process the equivalent of about 23 billion letters annually. If the 6 percent cost differential and the current ratio of 89 percent transitional and 11 percent career workhours were continued, we estimated the in-house cost for this volume would be about $36 million more per year, not adjusted for inflation. If the cost differential we found continues, using postal employees would cost the Service about $86 million more per year, or 14 percent, not adjusted for inflation, when the required ratio of 70 percent transitional and 30 percent career workhours has been achieved. Benefits for transitional employees that are more comparable to those for career employees were at issue in the recent contract negotiations between the Service and APWU. It is reasonable to expect that wage and other cost increases may occur in the future for both in-house and contractor-operated sites. However, if the Service and APWU agree that transitional employees will receive additional benefits, the character of the jobs held by these employees will change, and the transitional employees will become more like career postal employees.
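The percentage differentials and annual projections above can be reproduced with straightforward arithmetic from the per-1,000-image costs. A minimal sketch using the report's figures; the function names are ours:

```python
# Cost-differential arithmetic for remote barcoding, per the report's figures.
CONTRACT_COST = 26.61   # contractor cost per 1,000 images
IN_HOUSE_COST = 28.18   # in-house cost per 1,000 images (89/11 transitional/career mix)
IN_HOUSE_70_30 = 30.33  # estimated in-house cost at the required 70/30 mix

def pct_diff(in_house, contract):
    """Percentage by which in-house cost exceeds contractor cost."""
    return (in_house - contract) / contract * 100

def annual_extra(in_house, contract, images=23_000_000_000):
    """Extra annual cost, in dollars, at full production volume."""
    return (in_house - contract) / 1000 * images

print(round(pct_diff(IN_HOUSE_COST, CONTRACT_COST)))            # 6 percent
print(round(pct_diff(IN_HOUSE_70_30, CONTRACT_COST)))           # 14 percent
print(round(annual_extra(IN_HOUSE_COST, CONTRACT_COST) / 1e6))  # $36 million
print(round(annual_extra(IN_HOUSE_70_30, CONTRACT_COST) / 1e6)) # $86 million
```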
Therefore, we also estimated the in-house and contract cost for remote barcoding if the cost of transitional employee benefits were the same as the cost of career employee benefits. On this basis, our estimate is that the differential would be about $174 million, or 28 percent, not adjusted for inflation. Using images per console hour as a measure, we determined that operator speed was similar between the contract sites and the in-house centers during the 36-week period. Contract keyers processed an average of 756 images per console hour, and postal employees processed 729 per hour. Figure 1 shows that differences in keying speed were the greatest at the beginning of the period and were more comparable at the end of it. The number of images per console hour was the best available measure we had for comparing the output of postal and contract employees. However, certain factors that are important to measuring performance were not similarly applied by the Postal Service and contractors. For example, contractors can receive a bonus for exceeding 650 images per hour and incur financial penalties for falling short of 640 images per hour. The Service requires its employees to maintain the standard of 650 images per hour, but no bonuses or penalties are involved. Accuracy standards are similar but involve financial penalties only for contractors. The program that measures errors at contractor sites was not used at postal sites at the time of our review. The Service and the unions are in negotiations over what methods will be used to monitor the accuracy of postal employee operators. Additionally, productivity data of both postal and contractor sites can be skewed if mail processing plants served by the sites do not process enough mail to keep the operators busy and they continue to be paid. The plants can also make operational decisions affecting whether a full or partial barcode is required from the remote barcoding site. 
Although partial barcodes are quicker to enter, thus increasing productivity in a specific center, this partially barcoded mail will have to be sorted at a higher cost somewhere downstream. The Service did not have data to break out images processed in-house and by contractors by full and partial barcoding. In commenting on a draft of this report, APWU said that the period we used for our comparison is unfair to the postal-operated sites because they were just starting up, and productivity is typically lower during such periods. As shown in figure 1 above, postal images per hour were initially lower than the contractors’ images per hour. For this reason, we did not include data from any Service-operated center during its initial 12-week training period. Figure 1 also shows that the images processed per hour by postal employees exceeded those of contractor employees in accounting period 4. The cost difference from accounting period 4 until the end of the period was smaller than during the entire period. However, the difference did not consistently decrease throughout the period. As indicated in table 1, the difference was greater in the last accounting period than the average for both the period we used and the period recommended by the union, during which the images processed per hour had leveled off. We believe that our comparison of costs over the nine accounting periods is preferable because it minimizes the effects of one-time and short-term fluctuations in cost and performance. For example, we are aware that contractor costs in the data included nonrecurring, extraordinary payments by the Postal Service of $888,000 (or 0.87 percent of contractor costs) for workers’ compensation claims at two sites. The claims covered a period beginning before our 36-week comparison period, but the Postal Service recorded the full cost in the period paid.
Time did not permit us to analyze the cost data to identify and allocate all such extraordinary costs to the appropriate accounting periods. The Service’s use of transitional employees substantially reduced the difference expected earlier between contract and in-house costs. In its original decision in 1990 on obtaining remote barcoding services, the Postal Service estimated that over a 15-year period it could save about $4.3 billion by using contract employees. That estimate was based on using existing career level 6 pay scale employees with full pay and benefits. Under the November 1993 agreement with APWU, only 30 percent of the workhours are to be generated by career employees. This mix of transitional and career employees at the level 4 pay scale makes the Postal Service’s cost closer to the cost of contracting out. The return on investment for contracting out was estimated at 35.7 percent. The Service’s cost comparison showed that the 70-30 mix of transitional and career workhours lowered the return on investment to 20.6 percent. Postal officials said this was still considered an acceptable return. The Service estimated that using only level 4 pay scale career employees would reduce the rate of return to 8 percent. In commenting on a draft of this report, APWU pointed out that an important reason for having postal employees do this work is that the remote barcoding program, originally considered temporary, is now a permanent part of mail processing operations, and thus eliminates a reason for having contractors do it. This same rationale could be put forth by APWU and/or the Service to eliminate the reason for having temporary or transitional employees do the barcoding. If this occurred, the cost of in-house barcoding would increase significantly.
We estimate that if all of the in-house workhours had been generated by career employees at the pay and benefit level for the period under review, in-house keying costs would have exceeded contracting costs by 44 percent, or $267 million annually, based on a full production rate of 23 billion images per annum. Service and APWU officials we contacted believed that a principal advantage of bringing the remote barcoding in-house was anticipated improved working relationships. Contractor representatives we contacted believed there were a number of advantages to contracting out, including lower cost, higher productivity, and additional flexibility. The decision to bring the remote barcoding in-house was not primarily an economic one since the Postal Service recognized it would cost more than contracting out. Postal officials expected that using postal employees for remote barcoding would improve their relations with APWU. On November 2, 1993, when the Service decided to use postal employees for remote barcoding, the Service and APWU signed a memorandum on labor-management cooperation. This memorandum was in addition to an agreement signed by the Service’s Vice President for Labor Relations and the President of APWU the same day for the use of postal employees to do remote barcoding in full settlement of all Service-APWU issues relating to implementing remote barcoding. The cooperation memorandum included six principles (see app. I) of mutual commitment to improve Service-APWU relationships throughout the Postal Service. It specified that the parties “must establish a relationship built on mutual trust and a determination to explore and resolve issues jointly.” The Postal Service’s Vice President for Labor Relations and the President of APWU said that relations improved somewhat after the November 1993 agreements. 
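The career-only scenario cited at the start of this section, about 44 percent or $267 million more per year, can be cross-checked against the contractor cost per 1,000 images. A rough sketch; the variable names are ours:

```python
# Back-of-the-envelope check of the all-career staffing scenario.
CONTRACT_COST = 26.61           # contractor cost per 1,000 images, per the report
ANNUAL_IMAGES = 23_000_000_000  # full production volume assumed in the report
EXTRA_ANNUAL = 267_000_000      # reported extra annual cost with all-career staffing

extra_per_1000 = EXTRA_ANNUAL / ANNUAL_IMAGES * 1000  # about $11.61 per 1,000 images
pct = extra_per_1000 / CONTRACT_COST * 100
print(round(pct))  # about 44 percent, consistent with the report
```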
The Vice President said that the decision to use postal employees for remote barcoding was “a very close call,” but the agreements seemed to have the effect of improving discussions during the contract negotiations that had begun with the Service in 1994. He also said that APWU initially made offers in contract negotiations that looked good to the Postal Service. Subsequent to the negotiations, however, the Vice President told us that he no longer believed that the experiment in cooperation with APWU was going to improve relations. According to the Vice President, APWU seemed to have disavowed the financial foundation for the remote barcoding agreement by proposing to (1) increase transitional employees’ wages by more than 32 percent over the life of the new contract and (2) provide health benefits for transitional employees. The Postal Service believes these actions would destroy the significance of the 70/30 employee workhour mix. Further, the Vice President said that APWU continues to be responsible for more than 75 percent of pending grievances and related arbitrations, which had increased substantially from the previous year. The President of APWU said that having the remote barcoding work done by postal employees was allowing the Service and the union to build new relations from the “ground up.” He said that the cooperation memorandum mentioned above was incidental to the more fundamental agreement of the same date for postal management and the union to establish and maintain remote barcoding sites, working together through joint committees of Service and union officials. Poor relations between postal management and APWU and NALC, including a strike, were a factor prompting Congress to pass the Postal Reorganization Act of 1970. We reported in September 1994 that relations between postal management and labor unions continued to be acrimonious. 
When negotiating new wage rates and employee benefits, the Service and the clerks and carriers have been able to reach agreement six out of nine times. However, for three of the last four times, the disputes proceeded to binding arbitration. Our September 1994 report detailed numerous problems on the workroom floor that management and the labor unions needed to address. We recommended that, as a starting point, the Service and all the unions and management associations negotiate a long-term framework agreement to demonstrate a commitment to improving working relations. Our follow-up work showed that the Postal Service and APWU are still having difficulty reaching bilateral agreements. Following the 1993 cooperation agreement, the Postal Service and APWU began negotiations for a new contract to replace the 4-year contract that expired in November 1994. No final and complete agreement could be reached on all subjects in the negotiations, and the parties mutually agreed to engage in a period of mediation. The Postal Service and APWU did not reach agreement for a new contract, and the dispute has now been referred to an arbitrator as provided for in the 1970 act. Further, the Postal Service and APWU, as well as two of the three other major unions, have been unable to agree to meet on an overall framework agreement that we recommended to deal with longstanding labor-management problems on the workroom floor detailed in our September 1994 report. In response to our report, the Postmaster General invited the leadership of all unions and management associations to a national summit to begin formulating such an agreement. APWU, NALC, and the National Postal Mailhandlers Union did not accept the invitation, saying that the negotiation of new contracts needed to be completed first. Service officials, union officials, and contractor representatives we contacted cited other advantages and disadvantages of using postal employees rather than contractors for remote barcoding. 
The Vice President for Labor Relations said that the mix of transitional and career employees may create some management problems. He said that employees receiving different wage rates and benefits while working side by side doing the same work at remote barcoding sites may create employee morale problems. However, he also said that the career-transitional mix provided the Service with the advantage of offering transitional employees opportunities for career postal jobs. APWU officials said that remote barcoding is an integral part of mail processing and relies upon rapidly evolving technology, which they believed should not be separated into in-house and contractor operations because of a potential loss of management control and flexibility. They also said that the decision to use postal employees for remote barcoding was justified on the basis of cost studies by the Service showing a favorable return on investment. Contractor representatives cited a number of advantages to using contract employees. They said that, for a variety of reasons, contractor sites are less costly than postal sites. They believed that contract employees operate at higher productivity rates because contractors, unlike the Postal Service, can provide incentive pay that results in higher keying rates. They also said that contractors can exercise more flexibility in handling variations in mail volume levels because of procedures for adjusting staffing levels on 2-hour notice, as provided in the contracts. However, Service officials pointed out that under the 1993 agreement with APWU, transitional employees can be sent home without notice if work is not available, but the career employees cannot.
Our objectives were to (1) compare, insofar as postal data were available, the direct costs of contracting out remote barcoding with the direct costs of having the work done by postal employees; and (2) identify possible advantages and disadvantages of using postal employees rather than contractors to do the work. At Postal Service headquarters, we interviewed Service officials responsible for remote barcoding implementation and contracting, as well as those responsible for the Service’s labor relations and financial management. We met on two occasions with the President of the American Postal Workers Union and other union officials and with three representatives of remote barcoding contractors to obtain their views on the advantages and disadvantages of using postal employees for remote barcoding services. We visited two remote barcoding sites: the contractor site in Salem, VA, and the Lynchburg, VA, site, which recently converted to in-house operation. We also reviewed, but did not verify against underlying source records, Postal Service data on costs associated with remote barcoding done by contract and postal employees. Further, we confirmed our understanding of remote barcoding and verified some of our information by reviewing the results of related work done in March and April 1995 by the Postal Inspection Service. The Inspection Service did its work at five remote barcoding sites (three Service-operated, including one recently converted from contractor-operated, and two contractor-operated) to compare and contrast certain administration and management practices followed at the sites. Details on our cost comparison methodology are contained in appendix II. A draft of this report was provided to the heads of the Postal Service, APWU, and the Contract Services Association of America for comment in April 1995. Subsequent to the initial distribution of the draft, the Postal Service provided us with revised cost data.
We provided a revised draft to the three organizations prior to completion of the comment process, and the comments received were based on the second draft. We did our work from March through June 1995 in accordance with generally accepted government auditing standards. The Postal Service, APWU, and the Contract Services Association of America provided written comments on a draft of this report. The Postal Service concurred with the information contained in the report regarding the costs of remote barcoding at contractor- and postal-operated sites and the reasons for bringing the work in-house. The Service said that it had hoped that bringing the remote barcoding work in-house would foster better relations with APWU. The Service expressed disappointment that APWU continued to maintain an adversarial posture that hindered progress toward improving their relationship. (See app. III for the text of the Postal Service’s comments.) APWU characterized our draft report as inaccurate and substantially biased. It also expressed the opinion that a report on this subject was premature because the data necessary for adequate evaluation were not yet available. More specifically, APWU said that the draft report (1) overstated the cost of in-house barcoding, (2) understated the costs of contracting out, (3) ignored important considerations that favor doing the work in-house, and (4) understated the significance of improvements in labor relations made possible by the APWU/Postal Service agreement to do remote barcoding in-house. APWU criticized the draft report as premature because we used data from a period when postal remote barcoding facilities were just beginning operations, while contractor facilities represented mature operations, thereby overstating the cost of in-house operations. It said that this mature versus start-up comparison imparted a serious bias to our estimate of the cost differential.
While we agree that a longer comparison period would have been preferable, a longer period did not exist for the comparison we were asked to perform. It is also important to note that we excluded from the 36-week time period we used for our cost comparison the initial 12-week training period that each in-house site experienced before becoming operational. In response to APWU’s comments, we clarified our text to more clearly convey that our comparison excluded the 12-week training period for the in-house sites. We also further analyzed the data to identify variances in costs during the 36-week period, especially the later part of the 36-week period, when in-house sites were more mature. This analysis showed that in-house operations were consistently more expensive than contractor operations. We noted that the in-house operations will become more expensive if the workforce mix changes to include more career employees and fewer transitional employees, as is presently planned, and/or if the transitional employees receive increased benefits. We also qualified our estimates of future costs by pointing out that circumstances could change and discussing how that might happen. APWU asserted that the draft report understated the cost of contracting for remote barcoding because we ignored such potential costs as overruns by government contractors and future strikes by contract employees. We did not ignore the possibility of increased contractor costs. We limited our cost analysis to actual costs because we had no basis for assigning dollar values to possible future events, such as employee strikes and potential cost overruns by contractors. Instead, we provided a narrative discussion of such factors. We expanded our discussion of these factors in response to APWU’s comments.
APWU also said that the draft report ignored important considerations favoring in-house operations, such as the importance to postal managers of maintaining full integration and control of the barcoding effort. APWU asserted that in-house operations are inherently preferable from a management point of view. We do not believe that this necessarily holds true. A broad body of work we have done in other areas shows some successes and economies that have resulted from contracting out certain activities by various federal, state, and local governments. APWU also said that the draft report understated the significance of improvements in labor relations made possible by the agreement between APWU and the Postal Service to perform remote barcoding in-house. APWU characterized the agreement as a cornerstone of the parties’ efforts to build a constructive and productive relationship and cited some examples that it considered to be representative of positive progress in efforts to improve the relationship between the parties. After receiving APWU’s comments, we revisited with Postal Service officials the issue of the effect of the agreement on labor management relations to assure ourselves that we had correctly characterized the Postal Service’s position. The officials confirmed that we had, explaining that while the Postal Service believed at the time that the agreement was reached it would have a positive effect, the Service now believes that its relationship with APWU has deteriorated since the 1993 agreement. We added language to further ensure that the final report presents a balanced discussion of the differing views of the affected parties. (See app. IV for the text of APWU’s comments and our detailed response to these comments.) The Contract Services Association of America believed we should have put more information into our report regarding what the Association said was a complete breakdown in the Postal Service’s labor-management relations. 
In view of our previous extensive work evaluating the state of labor-management relations in the Postal Service, we did not evaluate labor-management relations; but at various places in the report, we describe the various parties’ perceptions of the labor-management relationship. The Contract Services Association of America also offered other comments and technical clarifications, which we incorporated in the report where appropriate. (See app. V for the text of the Contract Services Association of America’s comments.) We are providing copies of this report to Senate and House postal oversight and appropriation committees, the Postmaster General, the Postal Service Board of Governors, the Postal Rate Commission, the American Postal Workers Union, and other interested parties. Major contributors to the report are listed in appendix VI. If you have any questions, please call me on (202) 512-8387. “1. The APWU and the Postal Service hereby reaffirm their commitment to and support for labor-management cooperation at all levels of the organization to ensure a productive labor relations climate which should result in a better working environment for employees and to ensure the continued viability and success of the Postal Service. “2. The parties recognize that this commitment and support shall be manifested by cooperative dealings between management and the Union leadership which serves as the spokesperson for the employees whom they represent. “3. The parties recognize that the Postal Service operates in a competitive environment and understand that each Postal Service product is subject to volume diversion. Therefore, it is imperative that management and the Union jointly pursue strategies which emphasize improving employee working conditions and satisfying the customer in terms of service and costs. A more cooperative approach in dealings between management and APWU officials is encouraged on all issues in order to build a more efficient Postal Service. “4. 
The Postal Service recognizes the value of Union involvement in the decision making process and respects the right of the APWU to represent bargaining unit employees. In this regard, the Postal Service will work with and through the national, regional, and local Union leadership, rather than directly with employees on issues which affect working conditions and will seek ways of improving customer service, increasing revenue, and reducing postal costs. Management also recognizes the value of union input and a cooperative approach on issues that will affect working conditions and Postal Service policies. The parties affirm their intent to jointly discuss such issues prior to the development of such plans or policies. “5. The APWU and the Postal Service approve the concept of joint meetings among all organizations on issues of interest to all employees, but which are not directly related to wages, hours or working conditions, such as customer service, the financial performance of the organization and community-related activities. In this regard, the APWU will participate in joint efforts with management and other employee organizations to address these and other similar issues of mutual interest. “6. On matters directly affecting wages, hours or working conditions, the Postal Service and the APWU recognize that separate labor-management meetings involving only the affected Union or Unions are necessary. The parties are encouraged to discuss, explore, and resolve these issues, provided neither party shall attempt to change or vary the terms or provisions of the National Agreement.” The Postal Service’s fiscal year is made up of 13 4-week accounting periods. The time period we selected for comparing the cost of contract and in-house remote barcoding included nine accounting periods (36 weeks) from July 23, 1994, through March 31, 1995. 
We selected the July 23, 1994, date because this was the first day of the first accounting period after the Service-operated remote barcoding centers completed the 12-week training period for the first system. We then included data on each in-house center for the first full accounting period following the period in which the 12-week training period was completed. We did not include two centers (Lumberton, NC, and Laredo, TX) for the accounting period in which they were converted to in-house sites. We determined direct costs incurred by the in-house centers as reflected by the Postal Service Financial Reporting System and contract records for the selected accounting periods. This included all significant costs, such as the pay and benefits for employees and on-site supervisors and managers (about 94 percent of the direct cost), equipment maintenance, communication lines, travel, training, rent, utilities, and supplies. To this we added factors for Service-wide employee compensation not charged directly to any postal operations. These included the Postal Service’s payments for certain retirement, health and life insurance, and workers compensation costs, and increases in accrued leave liability due to pay raises. According to Postal Service data, these additional compensation costs ranged between 1.3 and 8.9 percent of direct pay and benefits for transitional and career employees in 1994 and 1995. Except for contract administration personnel, we did not allocate any headquarters costs to the in-house or contractor sites. This was because these costs were unlikely to be significantly different regardless of whether the sites were contracted out or operated in-house. Postal Service area offices incurred some cost for remote barcoding. Some area offices had appointed remote barcoding system coordinators, who spent some time assisting and overseeing the postal sites. 
Their level of involvement in the centers varied from area to area, and data on the amount of involvement were not readily available centrally. We did not attempt to estimate this cost because of the lack of data and because we do not believe it would have been large enough to materially affect our results. For the contractor sites, we used the actual contract cost to the Postal Service, which included the full cost of the remote barcoding services, except for equipment maintenance. We added the contract cost of maintenance for the equipment at the contractor sites, which was provided by the Postal Service to the contractors. We also added the cost of Postal Service personnel involved in administering the contracts, both at headquarters and at the facilities serviced by the coding centers. The estimate of this cost was provided by the Postal Service. The following are GAO’s comments on the letter dated July 14, 1995, from the American Postal Workers Union. 1. In light of APWU’s view that the 36-week period we used was not representative, we included an additional analysis in the report covering shorter and more recent time periods. This analysis shows that the cost difference varies depending on the period selected. Using the most recent 4-week period, the cost difference for in-house keying was greater than for the full 36-week period. However, because costs for any given period can contain extraordinary payments, we believe comparison periods should be as long as feasible to minimize the effects of those nonrecurring costs. 2. APWU suggested that our analysis failed to recognize some of the direct costs associated with the entire remote barcoding program, including capital costs. The total cost of the remote barcoding program was not the focus of our review. Our objective was to compare the direct cost of performing remote keying services in-house versus under contract.
Where the cost to the Postal Service was the same whether the work was to be done in-house or by contract, we did not include such cost in our comparison. This methodology is consistent with the Service’s Guidelines for the Preparation of Cost Data for Comparison With Contracting Out Proposals. Using this approach, we did not include such costs as video display terminals, keyboards, and computers, for example, that were provided as government-furnished equipment to the contractors and also used at postal-operated sites. Our report discloses in appendix II the cost elements that we considered in our comparison and identifies cost elements not considered. 3. APWU asserted that the draft report understated the cost of contracting for remote barcoding because we ignored such potential costs as overruns by government contractors. It is true that we have reported on cost overruns incurred by government contractors. However, our reports citing contractor overruns were based on after-the-fact evaluations of actual contract costs compared to estimated contract costs. In addition, many instances of cost overruns occur when the scope of work is not well defined and deals with advanced technologies. This does not appear to be the case in remote barcoding where the scope of work is well defined. In addition, it would not be appropriate for us to speculate about the future cost that might be incurred by the Service’s remote keying contractors. 4. APWU said that our draft report ignored important reasons for having postal employees do remote barcoding, citing as one reason that the remote barcoding program is no longer considered temporary. While the point that the remote barcoding program is no longer considered a temporary program would be a valid consideration in a decision on whether to contract out, it was not cited by Postal Service officials in any records we reviewed or in our discussion with Service officials as a reason for having postal employees do the work. 
Rather, the reasons were related primarily to anticipated improvements in the Service’s relations with APWU. We estimate that if all of the in-house workhours had been generated by career employees at the pay and benefit level for the period under review, in-house keying costs would have exceeded contracting costs by 44 percent, or $267 million annually, based on a full production rate of 23 billion images per annum. 5. APWU said that our analysis did not take into consideration several contractor costs that could be passed on to the Postal Service. APWU said that it and several other unions were prepared to organize contractor employees and that even moderate organizing success would change the results of our cost analysis. As an example, APWU pointed to one contractor site where the contractors’ employees received health benefits. APWU apparently did not understand that we had in fact included these health benefit costs in our comparison. We agree that potential future costs could affect the cost differential if they occur; however, we have no basis for anticipating what the dollar value of such costs might be. Thus, we used actual cost data when available and discussed in narrative fashion possible changes in circumstances that might affect future costs. 6. APWU said that while our draft report observed that contract employees can receive a bonus for exceeding 650 images per hour, we did not estimate the cost impact of these potential bonuses. The costs for contracting out that we used in our estimates included the cost of actual bonuses paid to contractors for exceeding the standard of 650 images per hour and thus include the cost impact of this factor. We had no basis for estimating how bonuses may change in future periods. 7. APWU stated that the draft report failed to analyze barcoding error rates. The cost for contracting out that we used included penalties assessed against contractors for exceeding the maximum 3-percent error rate. 
We revised the text to clarify the reason that we could not compare error rates of postal employees and contract employees. 8. We recognize in the report that APWU believes that the agreement to bring the remote barcoding in-house has improved labor relations. However, the report also recognizes that this view does not agree with the Postal Service’s view. Moreover, the Postmaster General has recently said that it is clear that the collective bargaining process is broken. We deleted the word “rarely” and revised the text to reflect that the union has gone to interest arbitration three out of nine times. We made no judgments about the attitudes of postal employees. Rather, our report attributes to a Postal Service official the comment that a potential employee morale problem could result from the mix of transitional and career employees. 9. APWU said that the draft report was a biased document requested by a Subcommittee of the Committee on Appropriations for political reasons, including pressure to affect collective bargaining positions. The Subcommittee has not suggested to us in any way what the results of our analysis should be. We approached this assignment like all others, attempting to meet our customer’s legitimate oversight needs in an objective, independent, and timely manner. 10. APWU stated that our initial draft was flawed. As explained in the Objectives, Scope, and Methodology section of this report (see p. 12), subsequent to the initial distribution of a draft of this report, the Postal Service provided us with revised cost data. We provided a revised draft to APWU prior to completion of the comment process. We considered the comments of APWU in preparing this report. We received APWU comments in two meetings, both of which were attended by the APWU President, other APWU officials, and outside legal and economic advisers to APWU. APWU also provided written comments on a draft of this report, which are included in full. 11.
APWU stated that the draft is still flawed, biased, and largely invalid. We believe that the data included in our report provide a fair (and best available) representation of the actual cost of operating remote barcoding sites by the Postal Service and by contractors for the periods indicated. As stated in the report, future cost differentials will depend on the circumstances at that time. 12. APWU believed that our use of a Postal Service analysis performed in prior years was misleading. We included the Service’s 1990 cost estimate because it led to the decision, followed until 1993, to use contractors for all remote barcoding services. We revised the text to reflect that the original Postal Service estimate was based on level 6 employees and that currently level 4 employees do the work at in-house sites. 13. In summary, APWU said that our draft report was inaccurate and substantially biased. APWU urged us to ensure that the final report is sufficiently balanced and appropriately qualified. We reviewed the draft report to further ensure that it presented the results of our analysis clearly and with a balanced tone. As discussed in our preceding comments, we added information and language where we thought it helped to clarify the report’s message or the positions of the affected parties.

James T. Campbell, Assistant Director
Anne M. Hilleary
Leonard G. Hoglan
Loretta K. Walch
Pursuant to a congressional request, GAO compared the direct costs to the U.S. Postal Service of contracting out for remote barcoding services versus having the work done by postal employees, focusing on the advantages and disadvantages of using postal employees for these services. GAO found that: (1) in-house barcoding would cost an estimated 6 percent more than using contractors, based on a mix of 89 percent transitional and 11 percent career employee workhours; (2) the cost differential is expected to increase to 14 percent annually to process 23 billion letters, based on a union agreement of 70 percent transitional and 30 percent career employee workhours; (3) if transitional employees receive benefits similar to career employees, as the union has requested, the cost differential would increase to 28 percent, or $174 million annually; (4) the Postal Service expected that using postal employees for barcoding would result in improved relations with the union; and (5) the postal union believes that using postal employees for barcoding provides the opportunity for the Postal Service and the union to cooperate in establishing and operating remote barcoding sites.
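The mix-dependent differentials summarized above can be sanity-checked with a short sketch. The linear-scaling assumption below is ours, not the report's; all input figures are taken from the summary and the report body (the 44-percent all-career figure and the 28 percent/$174 million pairing).

```python
# Consistency check of the cost-differential figures cited in this report.
# Assumption (ours, not GAO's): the in-house cost differential scales
# roughly linearly with the share of career-employee workhours.

# (career workhour share, reported cost differential)
mix_89_11 = (0.11, 0.06)   # 89% transitional / 11% career -> 6% more than contractors
mix_70_30 = (0.30, 0.14)   # 70% transitional / 30% career -> 14% more

(x0, y0), (x1, y1) = mix_89_11, mix_70_30
slope = (y1 - y0) / (x1 - x0)

# Extrapolating to an all-career workforce should approximate the
# 44-percent figure the report states separately for that scenario.
all_career = y0 + slope * (1.0 - x0)
print(f"predicted all-career differential: {all_career:.1%}")   # ~43.5%

# Annual contractor baseline implied by the 28% / $174 million pairing.
baseline_millions = 174 / 0.28
print(f"implied contractor baseline: ~${baseline_millions:,.0f} million per year")
```

The extrapolated value of about 43.5 percent lands within half a point of the 44 percent the report cites for an all-career workforce, suggesting the reported differentials are internally consistent with a linear workforce-mix model.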
Overall, NTSB has fully implemented or made significant progress in following leading management practices in all eight areas that our recommendations addressed in 2006 and 2008—communication, strategic planning, IT, knowledge management, organizational structure, human capital management, training, and financial management. We made 15 management recommendations in these areas based on leading agency management practices that we identified through our governmentwide work. Although NTSB is a relatively small agency, such practices remain relevant. Figure 1 summarizes NTSB’s progress in implementing our management recommendations. NTSB had fully implemented three of our management recommendations as of our last report in April 2008—our recommendations to (1) facilitate communication from staff to management, (2) align organizational structure to implement a strategic plan, and (3) correct an Antideficiency Act violation related to purchasing accidental death and dismemberment insurance for employees on official travel. In addition, NTSB has made further progress on seven of our management recommendations since 2008. First, it started reporting to Congress on the status of our recommendations by including the actions it has taken to address them in its Annual Report to Congress. In addition, NTSB has taken steps to implement all three of our IT-related recommendations: NTSB has fully implemented an IT strategic plan that addresses our comments. Moreover, in compliance with the Federal Information Security Management Act of 2002 (FISMA), NTSB has undergone annual independent audits, hiring outside contractors to perform security testing and evaluation of its computer systems. We performed limited testing to verify that NTSB has implemented our recommendation to install encryption software.
Agency officials confirmed, however, that while encryption software is operational on 410 of the agency’s approximately 420 laptop computers, the remaining laptops do not have encryption software installed because they do not contain sensitive information and are not removed from the headquarters building. NTSB has made significant progress in limiting local administrator privileges while still allowing employees to add software and print from offsite locations as necessary. NTSB has also drafted a strategic training plan that, when finalized, would address GAO guidance on federal strategic training and development efforts and establish the core competencies needed for investigators and other staff. In addition, two modal offices have developed core curricula that relate specifically to their investigators. Further, NTSB obligated $1.3 million in September 2009 to the National Business Center—an arm of the Department of the Interior that provides for-fee payroll services to federal agencies—to develop a full cost accounting system for NTSB based on a statement of work. NTSB officials said that the first phase of the cost accounting system will be implemented late in fiscal year 2010. When the system is completed, permitting the recording of time spent on and the costing of investigations and other activities, including training, this action will fully implement our recommendation. The remaining five management recommendations have not yet been fully implemented. However, NTSB has initiated actions that could lead to the full implementation of the remainder of the recommendations. For example, GAO offered suggestions in 2008 for improving NTSB’s agencywide strategic plan, and NTSB is in the final stages of updating its strategic plan, which may address our comments by incorporating all five agency mission areas in its goals and objectives and obtaining comments from Congress or other external stakeholders potentially affected by or interested in the plan.
In addition, NTSB has continued to improve its knowledge management by developing a plan to capture, create, share, and revise knowledge, and the agency is deploying Microsoft SharePoint® to facilitate sharing useful information within NTSB. In April 2008, we reported that NTSB had made significant progress in implementing our human capital planning recommendation by issuing a human capital plan that incorporated several strategies on enhancing the recruitment process but was limited in some areas of diversity management. As we have previously reported, diversity management is a key aspect of strategic human capital management. Developing a workforce that includes and takes advantage of the nation’s diversity is a significant part of an agency’s transformation of its organization to meet the challenges of the 21st century. The most recent version of NTSB’s human capital plan establishes goals for recruiting, developing, and retaining a diverse workforce, and NTSB provided diversity training to 32 of its senior managers and office directors in May 2009. Table 1 compares the diversity of NTSB’s fiscal year 2008 workforce with that of the federal government and the civilian labor force. As the table shows, the percentages of NTSB’s fiscal year 2008 workforce that were women and minorities were lower than those of the federal government. Under the Office of Personnel Management’s regulations implementing the Federal Equal Opportunity Recruitment Program, agencies are required to determine where representation levels for covered groups are lower than for the civilian labor force and take steps to address those differences. Additionally, as of fiscal year 2008, 9 percent of NTSB’s managers and supervisors were minorities and 24 percent were women (see fig. 2). Furthermore, according to NTSB, none of NTSB’s current 15-member career Senior Executive Service (SES) staff were members of minority groups, and only 2 of them were women.
As we have previously reported, diversity in SES, which generally represents the most experienced segment of the federal workforce, can strengthen an organization by bringing a wider variety of perspectives and approaches to policy development and decision making. NTSB has undertaken several initiatives to create a stronger, more diverse pool of candidates for external positions. These initiatives include the establishment of a Management Candidate Program that has attracted a diverse pool of minority and female candidates at the GS 13/14 level. NTSB’s Executive Development Program focuses on identifying candidates for current and future SES positions at the agency. Despite these efforts, NTSB has not been able to appreciably change its diversity profile for minority group members and women. NTSB’s current workforce demographics may present the agency with an opportunity to increase the diversity of its workforce and management. According to NTSB, in 3 years, more than 50 percent of its current supervisors and managers will be eligible to retire, as will over 25 percent of its general workforce. Furthermore, 53 percent of its investigators and 71 percent of those filling critical leadership positions are at least 50 years of age. Although actual retirement rates may be lower than retirement eligibility rates, especially in the present economic environment, consideration of retirement eligibility is important to workforce planning. We previously made four recommendations to NTSB to improve the efficiency of its activities related to investigating accidents, such as selecting accidents to investigate and tracking the status of its recommendations, and increasing its use of safety studies (see fig. 3). NTSB is required by statute to investigate all civil aviation accidents and selected accidents in other modes—highway, marine, railroad, pipeline, and hazardous materials. 
Since our April 2008 report, NTSB has fully implemented our recommendation to develop transparent policies containing risk-based criteria for selecting which accidents to investigate. The recently completed highway policy assigns priority to accidents based on the number of fatalities, whether the accident conditions are on NTSB’s “Watch List” or whether the accidents might have significant safety issues, among other factors (see fig. 4). For marine accidents, NTSB has a memorandum of understanding (MOU) with the U.S. Coast Guard that includes criteria for selecting which accidents to investigate. In addition, NTSB has now developed an internal policy on selecting marine accidents for investigation. This policy enhances the MOU by providing criteria to assess whether to launch an investigation when the Coast Guard, not NTSB, would have the lead. In April 2008, we reported that NTSB had also developed a transparent, risk-based policy explaining which aviation, rail, pipeline, and hazardous materials accidents to investigate. The remaining three recommendations have not yet been fully implemented. However, NTSB has initiated actions that could lead to closure of the recommendations. NTSB is deploying an agencywide electronic information system based on Microsoft SharePoint that will streamline and increase NTSB’s use of technology in closing out its recommendations and in developing reports. When fully implemented, this system should serve to close these two recommendations. NTSB has also made significant progress in implementing our recommendation to increase its use of safety studies, which are multiyear efforts that result in recommendations. They are intended to improve transportation safety by effecting changes to policies, programs, and activities of agencies that regulate transportation safety. 
While we, the Department of Transportation, and nongovernmental groups, like universities, also conduct research designed to improve transportation safety, NTSB is mandated to carry out special studies and investigations about transportation safety, including studies about how to avoid personal injury. Although NTSB has not completed any safety studies since we made our recommendation in 2006, it has three studies in progress, one of which is in final draft, and it has established a goal of developing two safety study proposals and submitting them to its board for approval each year. NTSB officials told us that because the agency has a small number of staff, it has difficulty producing large studies in addition to processing many other reports and data inquiries. NTSB officials told us they would like to broaden the term “safety studies” to include not only the current studies of multiple accidents, but the research done for the other smaller safety-related reports and data inquiries. Such a term, they said, would better characterize the scope of their efforts to report safety information to the public. NTSB also developed new guidelines to address its completion of safety studies. Congressional reauthorization is an ideal time to obtain stakeholder input on whether a change in terminology like this would meet NTSB’s legislative requirement. We made two recommendations for NTSB to increase its own and other agencies’ use of the Training Center and to decrease the center’s overall operating deficit (see fig. 5). The agency increased use of the center’s classroom space from 10 percent in fiscal year 2006 to 80 percent in fiscal year 2009. According to NTSB, it has sublease agreements with agencies of the Department of Homeland Security (DHS) to rent approximately three- quarters of the classroom space located on the first and second floors. 
The warehouse portion of the Training Center houses reconstructed wreckage from TWA Flight 800, damaged aircraft, and other wreckage. The Training Center provides core training for NTSB investigators and trains others from the transportation community to improve their practice of accident investigation. Furthermore, NTSB has hired a Management Support Specialist whose job duties include maximizing the Training Center’s use and marketing its use to other agencies or organizations. The agency’s actions to increase the center’s use also helped increase total Training Center revenues from about $635,000 in fiscal year 2005 to about $1,771,000 in fiscal year 2009. By reducing the center’s leasing expenses—for example, by subleasing classrooms and office space at the center to other agencies—NTSB reduced the Training Center’s annual deficit from about $3.9 million to about $1.9 million over the same time period. NTSB has made significant progress in achieving the intent of our recommendation to maximize the delivery of its core investigator curriculum at the Training Center by increasing the number of NTSB-related courses taught at the Training Center (fig. 6). For example, in 2008, 49 of the 68 courses offered at the Training Center were solely for NTSB employees. NTSB has fully implemented our recommendation to increase use of the Training Center. NTSB subleased all available office space at its Training Center to the Federal Air Marshal Service (a DHS agency) at an annual fee of $479,000. NTSB also increased use of the Training Center’s classroom space and thereby increased the revenues it receives from course fees and rents for classroom and conference space. From fiscal year 2006 through fiscal year 2009, NTSB increased other agencies’ and its own use of classroom space from 10 to 80 percent, and increased revenues by over $1.1 million. For example, according to NTSB, it has a sublease agreement with DHS to rent approximately one-third of the classroom space. 
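The revenue and deficit figures above imply the percentage changes cited in this statement. As a quick arithmetic check (illustrative only; the variable names are ours, and the dollar amounts are the rounded figures reported):

```python
# Percentage changes implied by the Training Center figures cited above.
rev_fy2005 = 635_000        # Training Center revenues, FY 2005
rev_fy2009 = 1_771_000      # Training Center revenues, FY 2009
deficit_fy2005 = 3_900_000  # annual deficit, FY 2005 (about $3.9 million)
deficit_fy2009 = 1_900_000  # annual deficit, FY 2009 (about $1.9 million)

revenue_growth_pct = (rev_fy2009 - rev_fy2005) / rev_fy2005 * 100          # ~179% increase
deficit_reduction_pct = (deficit_fy2005 - deficit_fy2009) / deficit_fy2005 * 100  # ~51% decrease
```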
NTSB considered moving certain staff from headquarters to the Training Center, but halted these considerations after subleasing all of the Training Center’s available office space. NTSB decreased personnel expenses related to the Training Center from about $980,000 in fiscal year 2005 to $507,000 in fiscal year 2009 by reducing the center’s full-time equivalent positions from 8.5 to 3.0 over the same period. As a result of these efforts, from fiscal year 2005 through fiscal year 2009, Training Center revenues increased 179 percent while the center’s overall deficit decreased by 51 percent. (Table 2 shows direct expenses and revenues for the Training Center in fiscal years 2004 through 2009.) However, the salaries and other personnel-related expenses associated with NTSB investigators and managers teaching at the Training Center, which would be appropriate to include in the Training Center’s costs, are not included. NTSB officials told us that they believe the investigators and managers teaching at the Training Center would be teaching at another location even if the Training Center did not exist. Once NTSB has fully implemented its cost accounting system, it should be able to track and report these expenses. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D. at (202) 512-2834 or by e-mail at dillinghamg@gao.gov or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Individuals making key contributions to this testimony include Keith Cunningham, Assistant Director; Lauren Calhoun; Peter Del Toro; George Depaoli; Elizabeth Eisenstadt; Fred Evans; Steven Lozano; Mary Marshall; Kiki Theodoropoulos; Charles Vrable; Jack Warner; and Sarah Wood. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Transportation Safety Board (NTSB), whose reauthorization is the subject of today's hearing, plays a vital role in advancing transportation safety by investigating accidents, determining their causes, issuing safety recommendations, and conducting safety studies. To support the agency's mission, NTSB's Training Center provides training to NTSB investigators and others. From 2006 through 2008, GAO made 21 recommendations to NTSB that address management, information technology (IT), accident investigation criteria, safety studies, and Training Center use. This testimony addresses NTSB's progress in implementing recommendations that it (1) follow leading management practices, (2) conduct aspects of its accident investigations and safety studies more efficiently, and (3) increase the use of its Training Center. This testimony is based on GAO's assessment from July 2009 to October 2009 of plans and procedures NTSB developed to address these recommendations. NTSB provided technical comments that GAO incorporated as appropriate. NTSB has fully implemented or made significant progress in adopting leading management practices in all areas in which GAO made prior recommendations. For example, as GAO recommended in 2006, NTSB issued agencywide plans for human capital management and IT management, as well as a strategic plan. In 2008, GAO identified opportunities for improvement in those plans, and NTSB has since issued revised human capital and IT plans and drafted a revised agencywide strategic plan and a new strategic training plan. NTSB has taken steps to improve its diversity management. However, the percentages of NTSB's fiscal year 2008 workforce that were women and minorities were lower than those of the federal government. In addition, no members of a minority group are part of NTSB's 15-member career Senior Executive Service. 
Since GAO's 2008 report, NTSB has continued to improve information security by installing encryption software on agency laptops and appropriately restricting users' access privileges. NTSB has obligated money to implement a full cost accounting system consistent with a prior GAO recommendation, but NTSB officials said that the system will not be implemented until late in fiscal year 2010. In 2008, GAO reported that NTSB had made significant progress in articulating risk-based criteria for selecting which accidents to investigate. Specifically, NTSB had established such criteria for identifying which rail, pipeline, hazardous materials, and aviation accidents to investigate at the scene. Since then, NTSB has adopted the remaining highway and marine criteria, and NTSB is streamlining and increasing its use of technology in closing out recommendations. NTSB has three safety studies in progress and would like to broaden the term "safety studies" to include not only its current studies of multiple accidents, but also the research it does for other smaller safety-related reports and data inquiries. NTSB has continued to increase the use of its Training Center--from 10 percent in fiscal year 2006 to 80 percent in fiscal year 2009. As a result, revenues have increased and the center's overall deficit has declined from about $3.9 million in fiscal year 2005 to about $1.9 million in fiscal year 2009.
The decision to provide substantial amounts of funding to the auto industry—more than 12 percent of all authorized TARP funds—and to accept equity in the companies in return for some of the assistance reflects Treasury’s view of the automotive industry’s importance to the U.S. economy. According to Treasury officials, Treasury provided assistance not simply because of the industry’s importance, but because of the severity of the crisis and the desire to prevent significant disruption to the economy that would have resulted from uncontrolled liquidations of Chrysler and GM. To help stabilize the industry and avoid economic disruptions, Treasury disbursed $79.7 billion through AIFP from December 2008 through June 2009. The majority of the assistance was used to support two automakers, Chrysler and GM, during restructuring, along with their automotive finance companies, Chrysler Financial and GMAC. In July 2009, Treasury outlined guiding principles for the investments made to the auto industry, including: exiting its investments as soon as practicable in a timely and orderly manner that minimizes financial market and economic impact; protecting taxpayer investment and maximizing overall investment returns within competing constraints; improving the strength and viability of GM and Chrysler so that they can contribute to economic growth and jobs without government involvement; and managing its ownership stake in a hands-off, commercial manner, including voting its shares only on core governance issues, such as the selection of a company’s board of directors and major corporation events or transactions. GM is one of the world’s largest automotive companies and does business in more than 120 countries worldwide. As of December 31, 2012, it employed 213,000 workers worldwide and marketed vehicles through a network of independent retailers totaling 20,754 dealers. In North America, GM manufactures and markets the following brands: Buick, Cadillac, Chevrolet, and GMC. 
Treasury provided a $13.4 billion loan in December 2008 to the Old GM to fund working capital. In April 2009, Treasury loaned an additional $6 billion to fund Old GM as it worked to submit a viable restructuring plan (working capital loan). These funds, along with loans from the Canadian government and concessions from nearly every stakeholder, including the company’s primary labor union—the International Union, United Automobile, Aerospace and Agricultural Implement Workers of America (UAW)—were intended to give the companies time to restructure to improve their competitiveness and long-term viability. Treasury also lent $884 million to the Old GM for the purchase of additional ownership interests in a rights offering by GMAC. As a condition of receiving this assistance, Old GM was required to submit a plan to Treasury that would, among other things, identify how it intended to achieve and sustain long-term financial viability. GM’s initial viability plan, submitted in February 2009, was rejected. The plan established targets for addressing some of the company’s key challenges to achieving viability, including reducing debt, the number of brands and models, dealership networks, and production costs and capacity. In 2009, Old GM filed a voluntary petition for reorganization under Chapter 11 of the U.S. bankruptcy code. Subsequently, in June 2009, Treasury provided Old GM with $30.1 billion under a debtor-in-possession financing agreement to assist during the restructuring. The newly organized GM was able to purchase most of the operating assets of the former company through a sale under Section 363 of the bankruptcy code. When the sale was completed on July 10, 2009, Treasury converted most of its loans into 60.8 percent of the common equity in GM and $2.1 billion in preferred stock. In addition, $6.7 billion of the TARP loans remained outstanding after the bankruptcy. 
In spring 2011, the bankruptcy was completed, and Old GM’s remaining assets and liabilities were transferred to liquidating trusts. As we concluded in our past work, the federal assistance allowed GM to restructure its balance sheets and obligations through the bankruptcy code and tackle key challenges to achieving viability. Ally Financial, previously known as GMAC, formerly served as GM’s captive auto finance company. The primary purpose of auto financing is to provide credit to consumers so that they can purchase automobiles. In determining how to finance their purchases, consumers have many financial institutions from which to choose, including banks, credit unions, and auto finance companies, all of which may offer loans or other credit accommodations for the purchase of new and used automobiles. In addition to consumer financing, auto dealers have also traditionally used manufacturers’ finance companies to finance their purchase of the automobile inventory that they sell (known as floor-plan financing). Prior to the financial crisis, GMAC’s subsidiaries expanded into other areas of financial services, such as auto insurance and residential mortgages, but GMAC remained a wholly owned subsidiary of the Old GM. In 2006, Cerberus Capital Management purchased 51 percent of the company. GM retained a 49 percent ownership stake. As the housing market declined in the late 2000s, the previously profitable GMAC mortgage business unit began producing significant losses. For example, the company’s Residential Capital LLC (ResCap) subsidiary—which by 2006 had grown to be the country’s sixth-largest mortgage originator and fifth-largest mortgage servicer—lost approximately $17 billion from 2007 through 2009. During the same time period, automobile sales in the U.S. dropped from 16.4 million to 10.4 million, negatively affecting the company’s core auto financing business. 
According to Treasury, Treasury determined that without government assistance GMAC would be forced to deny or suspend financing to creditworthy dealerships, leaving them unable to purchase automobile inventory for their lots. Without orders for automobiles from dealerships, GM would have been forced to slow or shut down its factories indefinitely to match the drop in demand. Given its significant overhead, a slow-down or stoppage of this magnitude would have caused GM to topple, according to Treasury. However, GMAC was not initially eligible for assistance under the TARP Capital Purchase Program (CPP). To become eligible for federal financial assistance, GMAC sought to convert GMAC Bank’s charter from an industrial loan company into a commercial bank in 2008 and applied to the Federal Reserve for bank holding company status. GMAC also submitted an application to participate in the Capital Purchase Program, conditional upon becoming a bank holding company. The Federal Reserve approved GMAC’s bank holding company application in December 2008. Although GMAC originally applied for participation in CPP, in late December 2008, as a part of AIFP, Treasury agreed to purchase $5 billion in senior preferred equity from GMAC and received an additional $250 million in preferred shares through warrants that Treasury exercised immediately. Treasury subsequently provided GMAC with additional assistance through TARP. In May 2009, Treasury purchased $7.5 billion of mandatory convertible preferred shares from GMAC. In December 2009, Treasury purchased additional shares—$2.5 billion of trust preferred securities and approximately $1.3 billion of mandatory convertible preferred shares. Also, in December 2009, Treasury converted $3 billion of existing mandatory convertible preferred shares into common stock, increasing its common equity stake from 35 percent to 56.3 percent. 
In December 2010, Treasury converted preferred stock in Ally Financial that had a liquidation preference of $5.5 billion into common stock. This stock conversion resulted in Treasury’s owning approximately 74 percent of Ally Financial. In addition, as of September 2013, Treasury continues to hold $5.9 billion of Ally Financial’s mandatory convertible preferred shares. Ally Financial pays a 9.0 percent fixed dividend annually to Treasury on these preferred shares. As will be discussed later in this report, Ally Financial has entered into an agreement with Treasury to repurchase the mandatory convertible preferred shares with possible completion of the transaction sometime later this year. As of June 2013, Ally Financial was the 20th largest U.S. bank holding company, with total assets of $150.6 billion. Its primary line of business is auto financing—both consumer financing and leasing and dealer floor-plan financing. As a bank holding company, Ally Financial is regulated and supervised by the Federal Reserve (12 U.S.C. § 1844(b)-(c)). Its banking subsidiary, Ally Bank, is an Internet- and telephone-based, state-chartered nonmember bank that is supervised by the FDIC and the Utah Department of Financial Institutions. As of June 30, 2013, Ally Bank had over $92 billion in assets and $50.8 billion in total deposits. Since receiving federal assistance, GM has shown increasingly positive financial results. For each of the last 3 years, GM has reported profits, a positive and growing operating cash flow, and a stable liquidity position. This improved financial performance has been reflected in GM’s credit ratings, with each of the three largest credit rating agencies increasing GM’s long-term credit rating. Although Moody’s upgraded GM’s rating to investment grade on September 23, 2013, Standard and Poor’s and Fitch Ratings rate GM as below investment grade as of the same month. 
Furthermore, GM’s market share of vehicles sold in North America was smaller than in 2008, and it continued to carry large pension liabilities. Based on our analysis of GM’s reported financial data, the company’s financial performance has improved since 2008. We assessed GM’s financial performance by examining its reported net income, operating income, and operating cash flow. Net income (loss): Net income (net profit or loss) is the difference between total revenues and expenses and represents the company’s income after all expenses and taxes have been deducted. For 2010, 2011, and 2012, GM reported net income of $6.5 billion, $9.3 billion, and $6.1 billion, respectively (see table 1). In 2008, prior to the federal government’s assistance and Old GM’s bankruptcy, GM reported a net loss of $30.9 billion. As we found in our prior work, a key result of the restructuring was that GM lowered its fixed costs by reducing the number of employees, plants, and dealerships, among other things. Reduced fixed costs allow GM to be profitable with fewer sales, thereby lowering its “break-even” level. Operating income (loss): Operating income (loss) describes a company’s profit and loss from its core operations. It is the difference between the revenues of a business and the related costs and expenses, excluding income from sources other than its core business (e.g., income derived from investments). GM reported operating income of $5.1 billion and $5.7 billion in 2010 and 2011, respectively (see table 1). In 2012, GM took an approximate $27 billion goodwill impairment charge that significantly reduced its operating income, contributing to a $30.4 billion operating loss. Operating cash flow: Operating cash flow refers to the amount of cash generated by a company’s core business operations. Operating cash flow is important because it indicates whether a company is able to generate sufficient positive cash flow to maintain and grow its operations, or requires external financing. 
In its 2010, 2011, and 2012 annual reports, GM reported operating cash flow of $6.6 billion, $7.4 billion, and $9.6 billion, respectively. Further, in 2010 GM reported total available liquidity from automotive operations of $32.5 billion, including $5.9 billion from credit facilities. This amount had increased slightly by 2012, when GM reported total available liquidity of $37.2 billion, including $11.1 billion from credit facilities. Finally, GM reported current assets greater than current liabilities from 2010 through 2012, indicating that it could meet all current liabilities without additional financing. These improvements in GM’s financial performance have been reflected in its credit ratings. A credit rating is an assessment of the credit worthiness of an obligor as an entity or with respect to specific securities or money market instruments. Credit ratings are important because investors and financial institutions may consider them when making investment and lending decisions. The three largest credit rating agencies have each increased GM’s long-term credit rating two steps in the past 3 years. Fitch Ratings and Standard and Poor’s raised their long-term rating on GM from BB- to BB+. Moody’s raised its long-term corporate family rating on GM from Ba2 to Baa3, an investment grade rating (i.e., the issuer or bond has a relatively low risk of default). Comparatively, Standard and Poor’s also rates Ford Motor Company as one step below investment grade, with a positive outlook. Fitch and Moody’s upgraded Ford to an investment grade rating in April and May 2012, respectively. Toyota, another competitor of GM, maintains an investment grade rating with all three of these credit rating agencies. Although GM’s financial performance has improved significantly since the company initially received federal assistance, questions remain about competitiveness and costs. 
One of the factors in GM’s improved financial condition has been increased sales of automobiles generally, including GM’s, over the last 3 years. Overall, North American vehicle sales increased more than 23 percent from 2010 to 2012, rising from 14.4 million to 17.8 million. Over this same period, sales of GM automobiles in North America increased 15 percent, from 2.6 million to 3 million. However, GM’s North American market share generally has declined over the past 5 years, falling to 16.9 percent of the North American market in 2012 (see fig. 1). GM reported that its North American market share was 17.2 percent through the second quarter of 2013. Treasury invested over $51 billion in GM through AIFP. In exchange for this assistance, Treasury received 60.8 percent of the common equity in GM, $2.1 billion in preferred stock, and $7.1 billion in notes from GM. Through September 18, 2013, Treasury had recovered about $35.21 billion of its investments in GM and reduced its ownership stake to 7.32 percent through three major actions. As of September 18, 2013, Treasury had recouped $37.75 billion from its GM and Ally Financial investments combined. First, Treasury participated in GM’s IPO in November 2010, selling about 412.3 million shares at an average price per share of approximately $32.75. This sale generated $13.5 billion for Treasury and reduced its ownership share to 32 percent. As we found in our 2011 report, by participating in GM’s IPO, Treasury tried to fulfill two goals—to maximize taxpayers’ return and to exit the company as soon as practicable. Second, GM and Treasury entered into an agreement that allowed GM to repurchase 200 million shares in December 2012. According to GM, as a general matter, GM was interested in reducing Treasury’s interest in GM and facilitating Treasury’s eventual complete exit from GM ownership. GM purchased the shares at $27.50 per share, about 7.8 percent over the market price of about $25.50. 
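The sales, market-share, and buyback-premium figures above follow from simple arithmetic on the reported vehicle counts and share prices. As an illustrative check (variable names are ours; inputs are the rounded figures cited):

```python
# Sales growth, market share, and buyback premium implied by the cited figures.
na_sales_2010, na_sales_2012 = 14.4, 17.8   # North American vehicle sales (millions)
gm_sales_2010, gm_sales_2012 = 2.6, 3.0     # GM sales in North America (millions)

total_growth = (na_sales_2012 - na_sales_2010) / na_sales_2010 * 100  # >23 percent
gm_growth = (gm_sales_2012 - gm_sales_2010) / gm_sales_2010 * 100     # ~15 percent
gm_share_2012 = gm_sales_2012 / na_sales_2012 * 100                   # ~16.9 percent

# December 2012 buyback: $27.50 per share versus a ~$25.50 market price.
buyback_premium = (27.50 - 25.50) / 25.50 * 100                       # ~7.8 percent
```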
This generated about $5.5 billion in revenue for Treasury and further reduced its ownership interest to just over 22 percent. GM officials said that they determined the premium price based on an arm’s-length negotiation between GM and Treasury. According to GM officials, the decision to agree to a premium price reflected the benefits GM and other stakeholders received from the transaction, including increased knowledge as to when and how Treasury was going to exit its holdings, thereby eliminating the perceived “overhang” on GM’s common stock price; mitigation of the stigma associated with the “Government Motors” moniker that negatively impacted customer perceptions; and the fact that the transaction was accretive to earnings. Furthermore, as part of the transaction, Treasury agreed to remove certain governance and reporting requirements. Treasury announced in December 2012 that it planned to fully divest its GM common equity stake within 12-15 months, subject to market conditions. Third, to achieve its goal of fully divesting by 2014, Treasury has developed and is implementing an exit strategy to sell its shares in tranches. Similar to the process Treasury used to divest its ownership in Citigroup, Treasury began placing its GM shares on the market in tranches, or “dribbles,” for a specific time period, beginning in January 2013. Treasury reports these sales after the end of each period for selling a particular tranche. According to Treasury officials, in the case of GM, the dribble approach is a better divestment method than discrete large offerings given the remaining size of its equity holdings and the time frame in which it is planning to exit. Furthermore, the dribble approach helps (1) secure the highest possible prevailing market price for taxpayers, (2) limit the impact of additional supply in the market, and (3) ensure that Treasury has flexibility to average the proceeds over time and make adjustments if necessary. 
As of September 2013, Treasury has sold over 811 million shares for more than $25 billion, leaving it with 101,336,666 shares, which represent a 7.32 percent ownership stake in GM. On September 26, 2013, Treasury announced it was launching a third pre-defined written trading plan for its GM common stock. Although Treasury has implemented a plan to divest itself of its ownership stake in GM, it does not anticipate fully recouping its investments. In September 2013, Treasury projected at least a 19 percent loss on its GM investment. The extent of the loss, however, will depend on GM’s stock price. As shown in figure 6, the price of GM’s stock is not at the level needed for Treasury to fully recoup its investment. Nevertheless, according to Treasury officials, Treasury has continued to sell its shares in line with its guiding principle of exiting its TARP investments as soon as practicable while maximizing return to taxpayers. Doing so, however, has increased the break-even price—that is, the price the stock must reach for Treasury to fully recoup its investment—for its remaining shares. Based on our analysis, we estimate that GM’s stock price would have to reach $156 per share for Treasury to fully recoup its investment as of September 16, 2013. At the beginning of September 2013, GM’s stock was trading at around $36 per share. 
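The break-even estimate above can be reproduced by dividing the unrecovered investment by the remaining shares. A minimal sketch, assuming the rounded totals cited in this report (the exact invested amount is stated only as "over $51 billion"):

```python
# Break-even price for Treasury's remaining GM stake, using rounded cited totals.
invested = 51.0e9            # Treasury's total investment in GM (approximate)
recovered = 35.21e9          # amount recovered through September 18, 2013
shares_remaining = 101_336_666

break_even_price = (invested - recovered) / shares_remaining  # ~$156 per share
```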
In August 2013, Ally Financial announced plans, discussed below, to repurchase the mandatory convertible preferred shares. Treasury has stated that it would like to divest itself of its ownership stake in Ally Financial in a manner that balances the speed of recovery with maximizing returns for taxpayers. Furthermore, Treasury has stated that it will unwind its remaining common stock investment through a sale of stock (either public or private sale) or through future sales of assets. As of September 2013, Treasury has announced the plan for unwinding its preferred stock investments in Ally Financial, though not for its common stock investment. According to Treasury officials, Treasury will announce its precise plans for the common stock investment once it is ready to take action. To accelerate repayment of Treasury’s investment and strengthen its longer term financial profile, Ally Financial announced in May 2012 that it was undertaking two strategic initiatives. These initiatives were (1) the discontinuation of providing financial support to its subsidiary ResCap pursuant to the ResCap bankruptcy and (2) the sale of its international auto finance business. ResCap’s mortgage business created significant uncertainty for Ally Financial and thus an impediment to Ally Financial’s ability to repay Treasury’s investment. Also, Ally Financial sought to sell its international operations as a means to accelerate repayment plans to Treasury. At the same time as Ally Financial’s announcement last year, Treasury stated that the company’s two strategic initiatives would put taxpayers in a stronger position to continue recovering their investment in Ally Financial. As previously noted, Ally Financial achieved a settlement agreement with the ResCap creditors, which was approved by the bankruptcy court in June 2013. 
In addition, during the second quarter of 2013, Ally Financial completed the sale of its international auto finance business in Europe and the majority of its finance operations in Latin America. Ally Financial plans to complete the sale of its remaining international assets—its operations in Brazil and its joint venture in China—in late 2013 and 2014, respectively. However, in exiting its Ally Financial investment, Treasury faces challenges and considerations, including Ally Financial’s failure to meet Federal Reserve capital requirements and competition from other institutions, which may ultimately affect the price of Ally Financial stock once the company is publicly traded. In contrast to GM, Ally Financial is a regulated bank holding company that must receive the Federal Reserve’s approval before it can repurchase its preferred shares from Treasury. However, Ally Financial’s initial plan to repurchase the mandatory convertible preferred shares stalled after the Federal Reserve objected to its proposed capital plan in the spring 2013 Comprehensive Capital Analysis and Review (CCAR). The Dodd-Frank Wall Street Reform and Consumer Protection Act requires the Federal Reserve to conduct an annual supervisory stress test of bank holding companies with $50 billion or more in total consolidated assets to evaluate whether the companies have sufficient capital to absorb losses resulting from adverse economic conditions. As part of the stress test for each company, the Federal Reserve projects revenue, expenses, losses, and resulting post-stress test capital levels, regulatory capital ratios, and the tier 1 common ratio under three scenarios (baseline, adverse, and severely adverse). In March 2013, the Federal Reserve reported the results of its most recent supervisory stress test and of the CCAR exercise. The Federal Reserve found that Ally Financial’s tier 1 common capital ratio fell below the required 5 percent under the severely adverse scenario. 
Ally Financial was the only one of the 18 bank holding companies tested that fell below this required level. Further, as previously indicated, the Federal Reserve objected to Ally Financial’s capital plans during the 2013 CCAR. CCAR is an annual exercise the Federal Reserve conducts to help ensure that financial institutions have robust, forward-looking capital planning processes that take into account their unique risks and sufficient capital to continue operating during periods of economic and financial stress. As part of the CCAR process, the Federal Reserve evaluates institutions’ capital adequacy; internal processes for assessing capital adequacy; plans to make capital distributions, such as dividend payments or stock repurchases; and other actions that affect capital. The Federal Reserve may object to a capital plan because of significant deficiencies in the capital planning process or because one or more relevant capital ratios would fall below required levels under the assumption of stress and planned capital distributions. If the Federal Reserve objects to the proposed capital plan, the bank holding company may make capital distributions only if the Federal Reserve indicates in writing that it does not object, and the company must resubmit the capital plan to the Federal Reserve after remediating the identified deficiencies. Of the 18 bank holding companies reviewed in 2013, the Federal Reserve objected to Ally Financial’s and one other company’s capital plans. According to the Federal Reserve, Ally Financial’s capital ratios did not meet the required minimums under the proposed capital plan. Specifically, the Federal Reserve reported that under stress conditions Ally Financial’s plan resulted in a tier 1 common capital ratio of 1.52 percent, which is below the required level of 5 percent under the capital plan rule. 
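The ratio test described above reduces to a simple comparison: tier 1 common capital divided by risk-weighted assets, checked against the 5 percent minimum. The sketch below illustrates that comparison; the balance-sheet figures are hypothetical and chosen only to reproduce the 1.52 percent post-stress ratio reported for Ally Financial.

```python
# Illustrative sketch of the tier 1 common ratio test described above.
# The 5 percent minimum is the level cited under the capital plan rule;
# the capital and asset figures are hypothetical, not Ally Financial's.
MIN_TIER1_COMMON = 0.05  # required minimum tier 1 common ratio

def tier1_common_ratio(tier1_common_capital, risk_weighted_assets):
    """Tier 1 common ratio = tier 1 common capital / risk-weighted assets."""
    return tier1_common_capital / risk_weighted_assets

# Hypothetical post-stress figures yielding the 1.52 percent ratio
# the Federal Reserve reported for Ally Financial.
ratio = tier1_common_ratio(1.52, 100.0)
print(ratio >= MIN_TIER1_COMMON)  # False -> plan draws an objection
```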
According to the Federal Reserve report, these results assumed that Ally Financial remained subject to contingent liabilities associated with ResCap. The Federal Reserve required Ally Financial to resubmit a capital plan. Ally Financial resubmitted its new capital plan to the Federal Reserve in September 2013, and in accordance with federal regulation, the Federal Reserve will have 75 days to review the plan. On August 19, 2013, Ally Financial announced that it had entered into agreements with certain accredited investors to issue and to sell to them an aggregate of 166,667 shares of its common stock (private placement securities) for an aggregate purchase price of $1 billion. Ally Financial did not identify the investors. According to Ally Financial, the agreement would strengthen the company’s common equity base and support its capital plan resubmission to the Federal Reserve. The agreement requires that the private placement close no later than November 30, 2013. Also on August 19, Ally Financial and Treasury entered into an agreement under which Ally Financial is to repurchase all of the mandatory convertible preferred shares. The agreement is conditioned on Ally Financial receiving a non-objection by the Federal Reserve on its resubmitted CCAR capital plan and the closing of the private placement securities transaction. Ally Financial faces growing competition in both consumer lending and dealer financing from Chrysler Capital, GM Financial, and other large bank holding companies. This competition may affect the future profitability of Ally Financial, which could influence the share price of Ally Financial once the company becomes publicly traded and thus the timing of Treasury’s exit. Similar to its GM investment, the eventual amount of Treasury’s recoupment on its Ally Financial investment will be determined by the share price of Ally Financial stock. 
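The implied per-share price of the private placement described above follows directly from the announced terms: the $1 billion aggregate purchase price divided by the 166,667 shares sold. A quick check (figures from the report; the rounding is ours):

```python
# Implied per-share price of Ally Financial's August 2013 private
# placement, using the announced terms cited in this report.
aggregate_price = 1.0e9   # $1 billion aggregate purchase price
shares_issued = 166_667   # common shares sold to accredited investors

implied_price = aggregate_price / shares_issued
print(round(implied_price))  # about $6,000 per share
```

This is the $6,000-per-share benchmark that Treasury officials cite later in this report when discussing the value of its Ally Financial common stock.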
Chrysler Capital: In February 2013, Chrysler announced that it had entered into an agreement with Santander Consumer USA Inc., a subsidiary of Banco Santander, S.A., that specializes in subprime auto lending, to provide a full spectrum of auto financing services to Chrysler Group customers and dealers under the name of Chrysler Capital. Under the 10-year, private-label agreement, Santander Consumer USA was to establish a separate lending operation dedicated to providing financial services under the Chrysler Capital name, including financing for retail loans and leases, new and used vehicle inventory, dealership construction, real estate, working capital, and revolving lines of credit. The agreement grants Santander Consumer USA the right to a minimum percentage of Chrysler’s subvention volume and the right to use the Chrysler Capital name for its auto loan and lease offerings. Santander Consumer USA will also provide loans to Chrysler dealers to finance inventory, working capital, and capital improvements. On May 1, 2013, Chrysler Capital started its lending operations. GM Financial: In 2010, GM acquired AmeriCredit Corporation (AmeriCredit), a subprime automobile finance company, to serve its subprime customers. AmeriCredit was renamed GM Financial and made a wholly owned subsidiary of GM. Its target market is consumers who have difficulty securing auto financing from banks and credit unions. According to GM officials, the purpose of GM Financial is to drive incremental GM automobile sales by providing solid and stable funding for GM dealers and consumers. GM Financial would serve as a captive lender for GM, much as GMAC did. This year, GM Financial increased its overall assets by purchasing Ally Financial’s international assets in Europe and Latin America, including the dealer financing arrangements in these countries. 
Other bank holding companies: We compared the amount of Ally Financial consumer auto lending with that of four large bank holding companies (Bank of America Corporation, Capital One Financial Corporation, JPMorgan Chase & Company, and Wells Fargo & Company) that reported consumer automobile loans. These data include only retail consumer automobile loans for the time period; they do not include other types of automobile financing, such as automobile leasing and dealer financing. As shown in figure 7, the dollar amount of consumer auto loans Wells Fargo & Company and Capital One Financial Corporation made increased from March 2011 through June 2013. However, Ally Financial remained the leader among these institutions for the same time period. Treasury officials noted that such competition could also be a benefit because Ally Financial’s assets could be viewed as valuable to the other competitors, and that the value of Ally Financial is demonstrated by the recent private placement agreement. Specifically, if Treasury sells its Ally Financial common stock at the share price agreed to in the private placement agreement—$6,000 per share—Treasury would receive a significant profit on its Ally Financial investment. We provided a draft of this report to FDIC, the Federal Reserve, and Treasury for their review and comment. In addition, we provided excerpts of the draft report to Ally Financial, GM, and Chrysler Capital to help ensure the accuracy of our report. Treasury provided written comments, which are reprinted in appendix II. Treasury agreed with the report’s overall findings. In its written comments, Treasury describes the auto industry’s recovery and the progress Treasury has made in unwinding its investments in Ally Financial and GM. 
Treasury also noted that it expects to complete the exit from GM by the first quarter of 2014 and wind down the remaining Ally Financial investment either by selling stock in a public or private offering, or through future asset sales. The Federal Reserve, Treasury, Ally Financial, GM, and Chrysler Capital provided technical comments that we incorporated as appropriate. In its technical comments, GM highlighted what third parties have suggested could have happened had Treasury not provided assistance to the auto industry, including the potential adverse effects on unemployment levels and tax receipts of all levels of government. FDIC did not provide any comments. We are sending copies of this report to the appropriate congressional committees. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This report is based on our continuing analysis and monitoring of the U.S. Department of the Treasury’s (Treasury) activities in implementing the Emergency Economic Stabilization Act of 2008 (EESA), which provided GAO with broad oversight authorities for actions taken under the Troubled Asset Relief Program (TARP). Under TARP, Treasury established the Automotive Industry Financing Program, through which Treasury committed $51 billion to help General Motors Company (GM) and $16.3 billion to GMAC LLC, a financial services company that provides automotive financing and that later became Ally Financial, Inc. (Ally Financial). This report examines (1) the financial condition of GM and Ally Financial and (2) the status of Treasury’s investments in the companies as well as its plans to wind down those investments. 
To assess the financial condition of GM, we analyzed net income, operating income, operating cash flow, sales of automobiles, GM’s share of the North American market, credit ratings, and pension obligations and pension plan funding for GM’s U.S. employees from 2008 through the second quarter (June 30) of 2013. For Ally Financial, we reviewed the institution’s capital ratios, net income, operating income, net interest spread, return on assets, nonperforming assets ratio, liquidity ratio, bank deposits, operating cash flow, and credit ratings, generally from 2008 through the second quarter (June 30) of 2013. To obtain information on the financial ratios and indicators used in their analyses of GM’s or Ally Financial’s financial condition, we interviewed staff from Treasury, the Board of Governors of the Federal Reserve System (Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), GM, and Ally Financial, as well as analysts from the three largest credit rating agencies and from investment firms. To select analysts from investment firms to interview, we identified analysts who covered GM. We identified them using GM’s investor relations webpage (http://www.gm.com/company/investors/analyst-coverage.html) and selected four to contact based on an electronic search for automotive equity analysts cited in reputable trade and business publications. We reached out to four analysts and interviewed two. We also interviewed analysts responsible for covering GM from one of the credit rating agencies and analysts responsible for covering Ally Financial from all three of the credit rating agencies. The views of these analysts cannot be generalized to all analysts from investment firms and credit rating agencies. 
We reviewed past GAO reports, information from GM’s and Ally Financial’s annual 10-K filings with the Securities and Exchange Commission, reports and documentation from Treasury and the companies, and data from SNL Financial from 2008 through the second quarter (June 30) of 2013. For both GM and Ally Financial, we collected information, generally from 2008 through the second quarter (June 30) of 2013, the most recent information that was publicly available. We have relied on SNL Financial data for past reports, and we reviewed past GAO data reliability assessments to ensure that we, in all material respects, used the data in a similar manner and for similar purposes. For each data source we reviewed the data for completeness and obvious errors, such as outliers, and determined that these data were sufficiently reliable for our purposes. We also reviewed the credit ratings from three rating agencies for each of these companies. Although we have reported on actions needed to improve the oversight of rating agencies, we included these ratings because they are widely used by GM, Ally Financial, Treasury, and market participants. To examine the status of Treasury’s investments and its plans to wind down those investments, we reviewed Treasury’s TARP reports, which included monthly 105(a) and daily TARP updates on AIFP program data for the time period from 2008 through September 2013. We have used Treasury’s data on AIFP in previous GAO reports. We determined that the AIFP program data from Treasury were sufficiently reliable to assess the status of the program. For example, we tested the Office of Financial Stability’s internal controls over financial reporting as they related to our annual audit of the office’s financial statements and found the information to be sufficiently reliable based on the results of our audit of the TARP financial statements for fiscal years 2009, 2010, 2011, and 2012. AIFP was included in these financial audits. 
Using the AIFP program data, we analyzed Treasury’s equity ownership and recovery of funds in GM and Ally Financial for the time period from January 2009 to September 2013. We reviewed the data for completeness and obvious errors, such as outliers, and determined that these data were sufficiently reliable for our purposes. For the divestment of GM equity, we interviewed Treasury and GM officials on the December 2012 repurchase of GM shares and the “dribble” strategy developed by Treasury. For analyzing Treasury’s exit from Ally Financial, we reviewed Treasury and Federal Reserve documentation, such as Treasury’s monthly reports to Congress, Treasury’s contractual agreements for the mandatory convertible preferred shares, and the proposed capital plan that Ally Financial submitted to the Federal Reserve. We also reviewed two publicly available reports from the Federal Reserve on the Dodd-Frank Wall Street Reform and Consumer Protection Act and capital plan analysis, Dodd-Frank Act Stress Test 2013: Supervisory Stress Test Methodology and Results and Comprehensive Capital Analysis and Review 2013: Assessment Framework and Results. In addition, we interviewed officials from Treasury’s Office of Financial Stability, the Federal Reserve, the Federal Reserve Bank of Chicago, FDIC, GM, and Ally Financial. We conducted this performance audit from March 2013 to October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. In addition to the individual named above, Raymond Sendejas (Assistant Director), Bethany Benitez, Emily Chalmers, Nancy Eibeck, Matthew Keeler, Risto Laboski, Sara Ann Moessbauer, Marc Molino, and Roberto Pinero made key contributions to this report.
As part of its Auto Industry Financing Program (AIFP), funded through the Troubled Asset Relief Program (TARP), Treasury committed $67.3 billion to automaker GM and to Ally Financial, a large bank holding company whose primary business is auto lending. TARP's authorizing legislation mandates that GAO report every 60 days on TARP activities. This report examines (1) the current financial condition of the two companies and (2) the status of Treasury's investments in the companies and its plans to sell those investments. To examine the financial condition of GM and Ally Financial, GAO reviewed industry, financial, and regulatory data for the time period from the beginning of 2008 through the second quarter of 2013. GAO also reviewed Treasury reports and documentation detailing Treasury's investments in GM and Ally Financial and its proposed strategies for divesting itself of the investments, as well as both companies' financial filings and reports. In addition, GAO interviewed officials from Treasury, the Board of Governors of the Federal Reserve (Federal Reserve), GM, Ally Financial, and financial analysts who study GM and Ally Financial. In its written comments on a draft of this report, Treasury describes the auto industry's recovery and the progress Treasury has made in unwinding its investments in GM and Ally Financial. Treasury, the Federal Reserve, GM, and Ally Financial also provided technical clarifications, which were incorporated, as appropriate. Since receiving federal assistance, General Motors Company (GM) has shown increasingly positive financial results. For each of the past 3 years, GM has reported profits, positive and growing operational cash flow, and a stable liquidity position. This improved financial performance has been reflected in GM's credit rating, as each of the three largest credit rating agencies has increased GM's long-term credit rating. However, GM faces continued challenges to its competitiveness. 
For instance, its market share of vehicles sold in North America remains smaller today than in 2008. Furthermore, GM continues to carry large pension liabilities. With Treasury's investments in Ally Financial, the company's condition has stabilized. For example, Ally Financial's capital and liquidity positions have stabilized or improved over the last 4 years. Such improvements have been noted by the three largest credit rating agencies, each of which has upgraded Ally Financial's credit rating. However, Ally Financial's credit rating remains below investment grade and its mortgage unit--Residential Capital LLC--impacted the company's financial performance. The mortgage unit filed for bankruptcy in May 2012, and these proceedings are ongoing. Analysts with whom GAO spoke indicated that the resolution of its mortgage unit's bankruptcy will be a positive development for Ally Financial's future financial performance. As of September 18, 2013, the Department of the Treasury (Treasury) has recovered about $35.21 billion of its $51 billion investment in GM and reduced its ownership stake from 60.8 percent to 7.32 percent. By early 2014, Treasury plans to fully divest its GM common shares through installments and estimates that it will lose at least 19 percent of its original investment. Treasury is working to exit from Ally Financial with a recent agreement to sell all of its preferred stock to the company for approximately $6 billion, but Treasury faces challenges. As a regulated bank holding company, Ally Financial must be well capitalized to receive its regulator's approval to repurchase shares from Treasury. Earlier this spring, Ally Financial's tier 1 common ratio fell below the required 5 percent in the Federal Reserve's "stress test," and the Federal Reserve objected to the company's capital plan. Ally Financial also faces growing competition in the consumer lending and dealer financing sectors that could impact its financial performance in the future. 
The extent of Treasury's recoupment on its Ally Financial investment will depend on the ongoing financial health of the company.
Federal, state, and local governments share the financing of our nation’s public schools. The federal share is the smallest, averaging about 7 percent of public school funding in the 1991-92 school year. Nationwide, the other 93 percent of funding was about evenly split between state and local funding (see fig. 2). However, the state share of total public funding varied by state from about 9 percent in New Hampshire to about 76 percent in New Mexico in the 1991-92 school year. Localities raise revenue for education mainly through property taxes, and the amount of local funds depends on both property values and local tax rates. This has produced local funding disparities because school districts’ property tax bases vary widely. Localities with high property values can generally raise large amounts of local revenue per pupil even with relatively low tax rates; localities with low property values usually raise less local revenue per pupil even with higher tax rates. In an earlier report, we found that poorer districts in 35 states tended to make a greater tax effort than wealthier districts in school year 1991-92, but this effort was not sufficient to eliminate the funding gaps between poor and wealthy districts in 23 of these states. When allocating revenue to districts, states typically consider these tax base differences as well as educational need factors. States use various equalization strategies to address the funding gaps that arise from tax base differences. Such strategies include targeting more funds to districts with lower tax bases and increasing the state’s share of total education funds. Meanwhile, states typically consider districts’ educational needs, including the number of pupils in a district, the cost of educating different types of pupils (for example, students with disabilities), and other educational cost factors beyond the districts’ control such as costs related to sparsity or enrollment growth. 
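The local funding mechanics described above reduce to a simple product of the property tax base and the tax rate. The sketch below uses hypothetical district figures solely to illustrate the disparity: a district with a large tax base can raise more revenue per pupil at a lower tax rate than a district with a small base taxing itself harder.

```python
# Minimal sketch of the local revenue mechanics described above.
# District figures are hypothetical illustrations, not report data.
def local_revenue_per_pupil(property_value_per_pupil, tax_rate):
    """Local revenue per pupil = taxable property value per pupil x tax rate."""
    return property_value_per_pupil * tax_rate

wealthy = local_revenue_per_pupil(400_000, 0.010)  # low rate, large base
poor = local_revenue_per_pupil(100_000, 0.015)     # higher rate, small base
print(wealthy, poor)  # the wealthy district raises far more per pupil
```

This is the pattern the report documents: greater tax effort in poorer districts that still fails to close the revenue gap.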
The educational needs of poor students are one of the states’ considerations when making funding decisions. Poor students risk academic failure because their homes or communities lack the resources to prepare them academically and because, among other factors, they have considerable health and nutrition problems. Children living below the poverty level are more likely than nonpoor children to have learning disabilities and developmental delays. As a result, poor students’ academic achievement tends to be low, and they have high rates of dropping out of high school. To help low-achieving poor students, 28 states funded compensatory programs in school year 1993-94, according to a national study on school finance programs. States may have funded such programs by directly allocating funds for this purpose, incorporating the funding into other programs, or including weights in their basic support formulas that provide funds for school districts’ daily operations. Funding for state compensatory programs that directly targeted poor students represented up to 11 percent of total state school aid. The total spent for compensatory programs ranged from about $1 million in Wyoming to about $785 million in Texas in school year 1993-94. The lawsuits filed since the early 1970s challenging the constitutionality of state school finance systems based on the inequitable distribution of education revenues between districts within states demonstrate that ensuring a fair distribution of funds is a complex and difficult undertaking. States are under constant pressure from both poor and wealthy districts, education interest groups, and anti-tax groups to modify their state school finance systems. 
For example, in our 1995 study of three states involved in equity lawsuits, we found that the state remedies for improving the equity of their school finance systems had to respond to citizens’ concerns about increased taxes and to concerns of wealthy districts that want to maintain spending levels. In fiscal year 1997, the federal government spent about $37 billion on elementary and secondary education. The Department of Education provides most of these funds. The states’ education agencies receive most of the funds and then allocate them to local districts. Programs funded this way include those for disadvantaged children, children with disabilities, drug-free schools, math and science, vocational education, and migratory education. Department of Education funding provided directly to districts included impact aid, bilingual education, and Indian education. Among other federal agencies that spend substantial amounts on elementary and secondary education are USDA through its child nutrition programs and HHS through its Head Start and other programs. Most federal funding for elementary and secondary education is targeted to disadvantaged and poor children. For example, the Department of Education’s title I grants that provide compensatory services for disadvantaged students accounted for about $7.2 billion of the federal funding for education in fiscal year 1997, and USDA’s child nutrition programs for low-income students accounted for about $8.3 billion. Title I has anchored the Elementary and Secondary Education Act since it was first enacted in 1965. USDA began providing child nutrition programs with the enactment of the National School Lunch Act of 1946 and later expanded its effort under the Child Nutrition Act of 1966. Members of the Congress have recently considered more flexible approaches for funding federal education programs as a way to possibly consolidate duplicative programs and eliminate regulations seen as unnecessarily limiting local flexibility. 
Education programs serving disadvantaged students, including title I, are possible candidates for this approach. The Congressional Research Service recently noted that among the unresolved issues concerning such approaches are the amount of flexibility that would be allowed states or localities in using these federal funds and the extent to which recipients would be held accountable for achieving certain outcomes. Most states targeted more funds to districts with large numbers of poor students, although the amount of such funding varied widely. In most states, federal funds were more targeted than state funds, which resulted in increasing the overall amount of additional funding for each poor student. Regardless of whether a state’s school finance system explicitly targeted poor students, the effect was to target more state funds to poor students in 43 of the 47 states in our analysis. State school finance systems may have targeted poor students either directly through compensatory programs or indirectly through other programs, such as bilingual education, which may serve a high proportion of poor students. The amount of extra state funding districts received for each poor student varied widely. On average, for every $1 a state provided in education aid for each student in a district, the state provided an additional $.62 per poor student. At the high end, New Hampshire provided an extra $6.69 per poor student; at the low end, four states provided no additional funding per poor student. Federal funding was more targeted to poor students than state funding in 45 of the 47 states. On average, for every $1 of federal funding districts received for each student, they received an additional $4.73 in federal funding per poor student. The amount of additional federal funding districts received for each poor student varied widely. 
At the high end, districts in Alaska received an additional $9.04 in federal funding; at the low end, districts in West Virginia received an additional $2.59. In general, the greater federal targeting had the effect of raising the additional funding for poor students from the state-only average of $.62 to a combined state and federal average of $1.10, a 77-percent increase. This increase reflects that most states’ relatively small share of federal funds was highly targeted. Again, states varied widely in the amount of combined targeting that occurred, ranging from an additional $7.41 for poor students in Missouri to an additional $.27 in West Virginia. In three states, the addition of federal funding increased funding for poor students but did not enhance the state targeting effort. In one state (New Hampshire), the addition of federal funding yielded less combined targeting for a poor student. The other two states (Nevada and New York) did not target poor students, and the addition of the relatively small amount of targeted federal funding did not raise the combined targeting effort above zero. Table 1 shows each state’s amount of state and federal targeting and the amount of targeting when state and federal funding are combined. The addition of state and federal funds had the effect of reducing or eliminating the local funding gap between high- and low-poverty districts in most states. In 37 states, high-poverty districts had less local funding per weighted pupil than low-poverty districts. State funding eliminated this funding gap in 7 states and reduced it in the remaining 30 states. The addition of the more targeted federal funds eliminated the funding gap in another 9 states and further reduced it in the 21 states that still had funding gaps. A substantial number of poor students lived in these 21 states, however. 
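The combined targeting figure above is, in effect, a weighted average of state and federal targeting, with the weights given by each level's share of combined funding. The sketch below reproduces the report's averages; note that the federal share used here is a hypothetical value we back-solved to match the reported $1.10 combined average, not a figure from the report.

```python
# Arithmetic behind the targeting averages cited above. The state and
# federal targeting figures are from the report; federal_share is a
# hypothetical weight chosen to reproduce the combined average of $1.10.
state_targeting = 0.62    # extra $ per poor student per $1 of state aid
federal_targeting = 4.73  # extra $ per poor student per $1 of federal aid
federal_share = 0.116     # hypothetical federal share of combined funding

combined = (1 - federal_share) * state_targeting + federal_share * federal_targeting
increase = combined / state_targeting - 1
print(round(combined, 2), round(increase * 100))  # about 1.1 and 77 (percent)
```

The calculation shows how a small but highly targeted federal share can raise the combined targeting well above the state-only average.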
Although targeting poor students helped reduce the total funding gap, the percentage of total education funding provided by state and federal governments was more important in reducing the gap. States with a greater state and federal share of education funding had smaller total funding gaps. High-poverty districts had less local funding per weighted pupil than low-poverty districts in 37 states. Separating all school districts into five groups on the basis of increasing poverty rates reveals the size of the gaps (see fig. 3). The average local funding per weighted pupil in the lowest poverty districts was $3,739 compared with $1,751 in the highest poverty districts. The lowest poverty districts nationwide had about 114 percent more local funding than the highest poverty districts. This gap occurred even though the highest poverty districts in 30 states made a greater tax effort than the lowest poverty districts. Combined state and federal funding had the effect of eliminating the funding gap in 16 of the 37 states where the local funding per weighted pupil was less in high- than low-poverty districts. Combined state and federal funding reduced the funding gap in the remaining 21 states. The arrows in figure 4 show the effects of combined state and federal funding on closing the local funding gap between high- and low-poverty districts. The light arrow indicates the effect of state funding on closing the funding gap; the dark arrow indicates the effect of federal funding. The zero line (0) of the figure indicates no funding gap between high- and low-poverty districts. States whose funding gaps are represented by negative values are those where higher poverty districts had less funding per weighted pupil; states whose funding gaps are represented by positive values are those where higher poverty districts had more funding per weighted pupil. The further the value is from the zero line, the greater the funding gap. 
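The 114-percent figure can be reproduced directly from the two quintile averages quoted above:

```python
# Percent by which local funding per weighted pupil in the lowest-poverty
# fifth of districts exceeds that in the highest-poverty fifth, using the
# national averages quoted in the text.
lowest_poverty_local = 3739.0   # $ per weighted pupil, lowest-poverty fifth
highest_poverty_local = 1751.0  # $ per weighted pupil, highest-poverty fifth

gap_percent = (lowest_poverty_local - highest_poverty_local) / highest_poverty_local * 100
print(round(gap_percent))  # 114
```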
The legend in figure 4 describes three points that mark the progress of state and federal funds in closing the funding gap. The tail end of the state arrow represents the size of the local funding gap. The second and third points measure the size of the gap that remains after state funds and then federal funds are added. For example, New York's local funding gap of about –.65 indicates that local funding levels were less in high-poverty than in low-poverty districts. Moving toward the zero line, the addition of state funds reduced the gap to –.40. The addition of federal funds reduced the gap further to about –.35. (See app. V for more information on each state's points and the statistical significance of each.) Figure 4 shows that most states had a funding pattern like New York's, with state funding favoring high-poverty districts. In some cases, state funds favored high-poverty districts so much that the resulting combined distribution of local and state funds also favored high-poverty districts (it passed the zero line). In some states, the local funding levels in high-poverty districts already exceeded those in the low-poverty districts. In these states, state funding offset or reduced this imbalance. Finally, in all states, federal funding favored the high-poverty districts regardless of the distribution of state and local funds. The national distribution of education resources shown in figure 5 provides another perspective on the size of the gaps nationwide and the effect of state and federal funding in closing them. (See also table 2.) When we compared the distribution of local funds of the lowest and highest poverty districts (see fig. 3), the lowest poverty districts had about 114 percent more local funding per weighted pupil than the highest poverty districts nationwide. States helped considerably in closing this funding gap, reducing it to 25 percent. The addition of federal funds had the greatest effect on the highest poverty districts and reduced the gap to about 15 percent.
A relatively high combined state and federal share of total funding enhanced state and federal targeting efforts to close the funding gap. Figures 6 and 7 illustrate this point. Both California and Virginia had about the same average total funding per weighted pupil and the same combined state and federal targeting rate per poor student. However, California's state and federal share was much larger (about 71 percent) than Virginia's (about 39 percent). This larger share left California with a much smaller funding gap than Virginia's: the highest poverty districts in California received $237 less in total funding per weighted pupil than the lowest poverty districts, whereas in Virginia the highest poverty districts received $970 less than the lowest poverty districts. The size of the combined state and federal share of total funding was more important in closing the funding gap than the extent to which these funds were targeted to poor students. An analysis we conducted of the factors that influence the size of the funding gap confirmed this point. Many states eliminated their funding gaps even though they had relatively low targeting efforts, in part because they had higher than average state and federal shares of total funding. Conversely, many states did not close their funding gaps even though they had relatively high targeting efforts, in part because they had relatively low state and federal shares of total funding. Despite state and federal efforts to close the funding gap, the most important factor determining the size of the gap was the tax effort of high-poverty districts compared with low-poverty districts. Although state and federal funding narrowed or eliminated the funding gaps between high- and low-poverty districts, the gaps that remained in the 21 states affected a significant portion of the nation's poor students.
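The share effect described above can be illustrated with a deliberately simplified two-district model. Everything here is an assumption for illustration (equal enrollments, an equal per-pupil state/federal distribution, a fixed 2-to-1 local revenue ratio); it is not the report's method, only a sketch of why a larger state-plus-federal share shrinks a locally driven gap:

```python
# Hedged sketch: with total funding held fixed, a larger state-plus-federal
# share leaves less room for unequal local revenue, so the gap between a
# low-poverty and a high-poverty district shrinks. All numbers hypothetical.

def funding_gap(state_fed_share, total_per_pupil=6000.0, local_gap_ratio=2.0):
    """Per-pupil total funding gap between two equal-size districts.

    The low-poverty district raises local_gap_ratio times as much local
    revenue per pupil; state/federal funds are assumed to flow equally
    per pupil to both, so they cancel out of the gap.
    """
    local_pool = total_per_pupil * (1.0 - state_fed_share)  # avg local $/pupil
    local_low = local_pool * 2 * local_gap_ratio / (1 + local_gap_ratio)
    local_high = local_pool * 2 / (1 + local_gap_ratio)
    return local_low - local_high

print(round(funding_gap(0.39)))  # 2440 (smaller state/fed share, bigger gap)
print(round(funding_gap(0.71)))  # 1160 (larger share, smaller gap)
```

The 39- and 71-percent shares echo the Virginia and California figures, but the dollar outputs are artifacts of the assumed local ratio, not the report's estimates.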
Nearly two-thirds of the students in our study were in these 21 states, and about 64 percent of the nation's poor students attended public schools in these states. Information on the state and federal targeting amounts and the effect of state and federal funding on the funding gaps in school year 1991-92 appears in the state profiles in this report (see apps. VI through LIII). Each profile also provides the amount of local, state, and federal funding available for districts in five groups of approximately equal numbers of students arranged in increasing proportions of poor students as well as other demographic information. We contacted state school finance officials in the 47 states to determine whether school finance systems had changed in ways that would affect the funding patterns of school year 1991-92. By school year 1995-96, only 16 states reported making changes that would target more funds to high-poverty districts. Ten states reported no change in targeting, 19 states reported no targeting of high-poverty districts, and 2 states reported targeting less funding to high-poverty districts. Eight of the 47 states reported increasing their state share of education funding by 6 percentage points or more. Appendix LIV summarizes the changes states made between school years 1991-92 and 1995-96. Although greater targeting has been limited to a minority of states since school year 1991-92, changes in federal funding formulas should continue the pattern of greater federal targeting. Federal education officials have reported increased targeting to high-poverty districts resulting from changes in title I legislation and regulations that went into effect in July 1995. Title I, the largest federal education program, provides funding for disadvantaged students. In addition, other federal programs allocate funds on the basis of title I formulas. The changes in title I would therefore also increase the relative funding for high-poverty districts from these other programs.
Appendix LV discusses changes in federal funding to the states since school year 1991-92 in greater detail. Federal funding for education, which primarily serves the needs of poor and disadvantaged students, is generally more highly targeted to poor students than more multipurpose state funding. In allocating funds to districts, state officials must balance the needs of poor students with those of many other high-cost student groups such as special education students. States also generally try to offset differences in localities’ ability to raise education revenues. The states’ wide range in targeting to poor students indicates that different states balance the needs of poor students with all other needs in different ways. Furthermore, the many lawsuits alleging inequities in state school finance systems illustrate that states are under constant pressure to meet the needs of many and often conflicting interest groups. In this context, any proposal to consolidate federal education funding into grants that give more discretion to states would need to consider that the targeting of those federal funds might become more like that of the state funds. That is, the federal funds—and the combination of federal and state funds—might become less targeted to poor students. In assessing states’ performance in financing the education needs of poor students, policymakers need to look beyond state efforts to target poor students and consider the combined state and federal share of total education funds. A low state targeting effort does not necessarily mean that a large funding gap exists between a state’s high- and low-poverty districts, according to our analysis. Rather, a relatively high overall share of state and federal funding can reduce the gap. The Department of Education reviewed a draft of this report and agreed with our finding that federal funding was generally more targeted to poor students than either state funding or combined state and federal funding. 
As suggested by the Department, we clarified the wording used to describe federal targeting and incorporated technical comments as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to appropriate congressional committees and members of the Congress, the Secretary of Education, and other interested parties. Please contact me on (202) 512-7014 or Eleanor L. Johnson, Assistant Director, on (202) 512-7209 if you or your staff have any questions. GAO contacts and staff acknowledgments appear in appendix LVI. The objectives of this study were to determine (1) the extent to which state and federal funding is targeted to poor students and (2) the effect of state and federal funding on the amount of funds available to high-poverty compared with low-poverty districts. To help answer these questions, we used school year 1991-92 district-level data from the Department of Education, the most recent available, supplemented by data from the 1990 census and directly from states. We used standard school finance measures and accounted for geographic differences in education costs and student need among school districts. We supplemented our analysis by contacting federal and state education officials to determine the extent to which federal funding patterns and the states’ school finance systems had changed since 1991-92. We conducted our work between March and December 1997 in accordance with generally accepted government auditing standards. For this study, we conducted a district-level analysis of all states except Hawaii, Vermont, and Wyoming. 
We wanted our analysis to examine state funding for regular school districts with students in kindergarten through twelfth grade, so we excluded from the analysis administrative districts and districts serving unique student populations, such as vocational or special education schools. We also excluded from our analysis a number of small districts that had extreme outlying values of income per pupil. Finally, we excluded districts that lacked data for critical variables such as poverty level. The final database used in our analysis of the 47 states contained 14,140 districts with a total of 41,011,102 students, representing 99.2 percent of the public school students in the 47 states. We based this study mainly on revenue and demographic data obtained from the Department of Education’s Common Core of Data (CCD) for the 1991-92 school year, the most current data available for a national set of districts. Data for the CCD were submitted by state education agencies and edited by the Education Department. We obtained district per capita income and population data directly from the 1990 census because they were not available in the CCD. We used revenue data from all sources in the analysis, including funding for capital expenditures and debt service. Federal revenue included funding from all Department of Education sources, although we considered federal impact aid to be local revenue in our analysis because states typically consider funding from this program as part of a district’s local education resources. We also included federal revenue from other departments that had revenue reported in the CCD, including funding for Head Start from HHS and for USDA’s child nutrition programs. For variables in our analysis that had missing or incomplete data, we obtained the data directly from state education offices. 
For example, we obtained district-level data for students with disabilities for school year 1991-92 directly from the state education offices for nine states because the CCD either did not report the number of these students in the states or reported a number substantially different from another Education Department source. We also obtained district-level data on federal revenue from seven states for similar reasons. We made further edits on the basis of consultations with Department of Education experts. In some cases, we imputed critical data when they were missing and not available from other sources. We imputed income per pupil data for 199 districts in California because the per capita income data needed to compute this control variable were not reported by these districts. We also imputed cost index data for 310 districts, including 18 in Alaska and 72 in New York (mainly Suffolk County). The method we used to impute cost index data was based on the recommendation of the school finance expert who developed the index. We conducted structured telephone interviews with state school finance officials to determine the extent to which states had changed their school finance systems since school year 1991-92. We did not, however, verify the accuracy of the officials' statements. We also interviewed federal officials and reviewed supporting documentation about changes in federal funding programs since school year 1991-92. Education costs vary by school district in a state (and nationwide) because of geographic differences in the cost of educational resources. For example, some districts have a lower cost of living, which may reduce the cost of their education resources. As a result, we used a district-level teacher cost index, developed for the National Center for Education Statistics, to adjust for statewide geographic differences in resource costs.
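A minimal sketch of the kind of cost adjustment described above, assuming the cost index is normalized so the state average equals 1.0; the district names and values are hypothetical:

```python
# Hedged sketch: nominal revenue per pupil deflated by a district-level
# teacher cost index (1.0 = state average), so districts are compared in
# cost-adjusted dollars. Districts and values are hypothetical.

districts = {
    # name: (revenue per pupil, cost index relative to state average)
    "urban": (6000.0, 1.10),   # high-cost district
    "rural": (5000.0, 0.92),   # low-cost district
}

cost_adjusted = {name: rev / idx for name, (rev, idx) in districts.items()}
# In cost-adjusted terms the two districts are nearly even, even though
# the nominal figures differ by $1,000 per pupil.
print({k: round(v, 2) for k, v in cost_adjusted.items()})
```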
Districts with high proportions of students with special needs, such as those with disabilities and the poor, generally have higher education costs than average because such students require additional educational services. When adjusting our analysis for statewide differences in student need, we made adjustments that weighted students with disabilities and poor students according to their need for additional services. We gave students with disabilities a weight of 2.3 because the cost of educating these children is generally 2.3 times the cost of educating children who do not need such services. We gave poor students a weight of 1.6 because this was the median state weight in our analysis. Using these weights, we developed a district-level need index adjusted for statewide differences. To measure the extent to which state funding was targeted to districts on the basis of the number of poor students, we estimated the additional state funding per poor student a district received for every dollar of state funding received for each student. To estimate the additional funding per poor student, we developed a statistical model of the distribution of state funding to local school districts. The model describes the distribution of state funds as if a fixed percentage of the funds was allocated to districts on the basis of the number of poor students and the remaining percentage was allocated on the basis of the total number of students. The model also describes the targeting of state funds to districts with low tax bases. Thus, the model accounts for both student needs-based targeting related to the number of poor students and targeting to low tax base districts. By modeling the distribution of state funds this way, we measured the additional state funds that districts received per poor student compared with every dollar received for each student, while statistically removing any additional funding districts may have received due to the size of their local tax base. 
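The weighted-pupil counts used throughout the report follow from the 2.3 and 1.6 weights described above: each student with a disability adds 1.3 extra weighted pupils and each poor student adds 0.6. A minimal sketch with hypothetical enrollment figures:

```python
# Hedged sketch of the weighted-pupil count: students with disabilities
# count as 2.3 pupils and poor students as 1.6 pupils. The district's
# enrollment figures below are hypothetical.

DISABILITY_WEIGHT = 2.3
POVERTY_WEIGHT = 1.6

def weighted_pupils(enrollment, disabled, poor):
    """Enrollment plus the extra weight carried by high-need students."""
    return (enrollment
            + (DISABILITY_WEIGHT - 1) * disabled
            + (POVERTY_WEIGHT - 1) * poor)

# A hypothetical 1,000-student district with 120 disabled and 250 poor students:
w = weighted_pupils(1000, 120, 250)
print(w)  # 1000 + 156 + 150 = 1306.0
```

Dividing a district's funding by this count, rather than by raw enrollment, is what the report means by funding "per weighted pupil."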
In addition, the model allowed us to measure state targeting policies that either directly targeted funding to districts on the basis of the number of poor students or indirectly targeted funding on the basis of other student needs, such as limited-English proficiency programs that may serve poor students. We used the same general model to determine the extent to which federal funding and combined state and federal funding were targeted to districts on the basis of the number of poor students. However, when determining the targeting of just federal funding, we did not control for differences in local tax bases because none of the federal funds included in our analysis were allocated on the basis of local tax bases. In appendix II, we describe the statistical model used to estimate state and federal targeting based on the number of poor students. In appendix III, we use the model to estimate the additional funding districts received that was targeted directly or indirectly to poor students, while controlling for the additional funding districts may have received as a result of tax base targeting. Appendix IV analyzes how each state's estimate of poor student targeting would change if we controlled for funding that indirectly targeted poor students, as in states that target students with disabilities. Throughout these analyses we adjusted state and federal funding for statewide differences in geographic costs and used income per student to measure local tax bases, also adjusted for geographic costs. In estimating these targeting amounts, we weighted each observation by the district's size to allow districts with larger enrollments to have more effect on the results.
To measure the effect of state and federal funding on closing the funding gap between high- and low-poverty districts, we estimated the elasticity of each state's districts' per pupil funding with respect to districts' poverty rate, that is, the proportion of a district's total enrollment that is poor. We estimated separate elasticities for local funding only; local and state funding combined; and local, state, and federal funding combined. Observing the change in the elasticity as state funding and then federal funding were added to local funds quantitatively measures the effect of state and federal funding on funding gaps between high- and low-poverty districts. We adjusted these analyses for differences in statewide geographic costs and student need. In estimating these elasticities, we weighted each observation by the district's size to allow districts with larger enrollments to have more effect on the results. Appendix V details this process. To determine the factors most closely associated with the nationwide differences in states' funding gaps between high- and low-poverty districts, we used multiple regression techniques. We estimated several models that used each state's elasticity of districts' per pupil funding with respect to districts' poverty rate as the dependent variable. We included the following state-level variables as possible explanatory variables (see app. V): the combined additional state and federal funding targeted to districts on the basis of the district's number of poor students, the extent to which state funding is targeted to low tax base districts, combined state and federal funding as a percentage of total funding, the tax effort of low-poverty compared with high-poverty districts in a state, and the tax base of low-poverty compared with high-poverty districts in a state (as measured by income per pupil). Whenever we included more than one independent variable in a regression routine, all the variables were entered into the analysis at the same time.
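An enrollment-weighted elasticity of this kind can be sketched as the slope of a weighted regression of log funding on log poverty rate. The district data below are hypothetical, and the sketch omits the report's cost and need adjustments:

```python
# Hedged sketch: elasticity of per-pupil funding with respect to the
# district poverty rate, estimated as the slope of an enrollment-weighted
# least-squares fit of log(funding) on log(poverty rate).
import math

# (enrollment, poverty rate, funding per weighted pupil) - all hypothetical
districts = [
    (5000, 0.05, 5200.0),
    (3000, 0.12, 4800.0),
    (8000, 0.20, 4500.0),
    (2000, 0.35, 4100.0),
]

def weighted_elasticity(rows):
    w = [n for n, _, _ in rows]
    x = [math.log(r) for _, r, _ in rows]   # log poverty rate
    y = [math.log(f) for _, _, f in rows]   # log funding per pupil
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

e = weighted_elasticity(districts)
print(round(e, 3))  # negative: higher-poverty districts have less funding
```

A negative elasticity corresponds to a funding gap that disfavors high-poverty districts; adding state and then federal funds to the dependent variable and re-estimating shows how far each layer moves the elasticity toward (or past) zero.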
Appendixes VII through LIII provide profiles of each state’s school finances in school year 1991-92. The profiles provide information on state and federal funding, the targeting of additional state and federal funding for poor students, differences in the tax effort of high- and low-poverty districts, and the effect of funding on closing the funding gap. Appendix VI is a detailed guide to the state profiles. Because we relied on funding data from the 1991-92 school year, we telephoned states’ school finance officials to determine how state school finance systems had changed from school years 1991-92 through 1995-96. We specifically asked about changes that would affect the amount of funding provided to districts with high proportions of poor students. We also telephoned federal officials responsible for the major education programs and reviewed program documents to determine how changes in regulations or legislation may have changed federal funding to poor students. Appendixes LIV and LV present the results of these efforts. State school finance systems typically provide funds to school districts to account for differences in student needs and ability to raise education revenues. Because poor students are generally recognized as having special education needs that increase the cost of their education, many states try to offset these costs by targeting additional funds to districts with high numbers of poor students. Meanwhile, states try to compensate for the limited ability of districts with low tax bases to raise education revenues by targeting additional state funds to such districts. Poor students reside in poor and wealthy districts alike. Therefore, when estimating a state’s effort to target poor students, accounting for state policies that target additional funds to low tax base districts is also important. 
This appendix describes the statistical model we used to estimate state efforts to target additional funding to poor students, while controlling for targeting to low tax base districts. In the first section, we describe how we modeled a district's total student need on the basis of the number of all students and poor students. In the second section, we incorporate the total student need of districts into a more general model of state funding that also compensates for differences in districts' tax bases. This general model assumes that states target additional funds to low tax base districts to equalize the funding among poor and wealthy school districts. States typically distribute state funds to school districts on the basis of school district enrollment. To allow for policies that provide additional funding to districts on the basis of the number of poor students, we introduce an implicit cost weight (w_p) that reflects the additional state funding provided to each poor student in the district. A district's total student need (total enrollment with additional weight given to the number of poor students) is represented by the following equation:

Total Student Need = N + w_p P (II.1)

where
N = a district's student enrollment count
P = a district's count of poor students
w_p = the state's implicit cost weight associated with a poor student.

The implicit cost weight (w_p) can be interpreted as follows: If the state expense for educating an average student were normalized to $1, then w_p represents the additional state expense for educating a poor student. For example, if a poor student were 50 percent more expensive for the state to educate than an average student, then w_p would equal $.50. Our research objective was to estimate the implicit cost weight associated with poor students. We refer to this cost weight as an implicit weight because it must be inferred from data on the actual distribution of state funds to local districts.
In addition, it should be noted that poor students represent the model's only student need factor other than total enrollment. By not controlling for other student need factors (for example, limited-English proficiency or gifted students), our estimate of the implicit cost weight reflects any additional state funding systematically related to the number of poor students in local school districts. To estimate the implicit cost weight from data on the distribution of state funding to local school districts, we first express student needs as a grant formula. If state funds were distributed simply on the basis of enrollment with additional funding for poor students, then a district's share of total state funding (Grant Share) could be expressed as its share of the state's total student need (Need Share) as in equation II.2:

Grant Share = Need Share = (N + w_p P) / (ΣN + w_p ΣP) (II.2)

where Σ denotes a sum over all districts in the state. To estimate w_p using linear regression methods, we had to express a district's share of the state's total student need (Need Share) as a linear function of enrollment and the number of poor students. We accomplished this by first expanding equation II.2 to equation II.3:

Need Share = N / (ΣN + w_p ΣP) + w_p P / (ΣN + w_p ΣP) (II.3)

Multiplying the first term in equation II.3 by ΣN/ΣN and the second by ΣP/ΣP does not change the value of the expression and allows us to express a district's share of total student need as a weighted sum of the district's share of enrollment (N/ΣN) and its share of poor children (P/ΣP):

Need Share = [ΣN / (ΣN + w_p ΣP)] (N/ΣN) + [w_p ΣP / (ΣN + w_p ΣP)] (P/ΣP) (II.4)

Note that the terms in brackets sum to one and can be interpreted as formula weights applied to a district's share of total enrollment and its share of poor students.
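The grant shares in equation II.2 can be checked numerically. A minimal sketch with a hypothetical implicit cost weight and two districts:

```python
# Hedged sketch of equation II.2: grant share = need share
# = (N + w_p * P) / (sum of N + w_p * sum of P). Values are hypothetical.

w_p = 0.5  # assumed implicit cost weight: $.50 extra per poor student

districts = [
    (1000, 100),  # (enrollment N, poor students P): low-poverty district
    (1000, 400),  # high-poverty district
]

needs = [n + w_p * p for n, p in districts]
grant_shares = [need / sum(needs) for need in needs]
print([round(s, 3) for s in grant_shares])  # [0.467, 0.533]
```

With identical enrollments, the district with 300 more poor students draws a larger share of state funds, exactly in proportion to its extra weighted need.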
Equation II.4 can now be written more simply as follows:

Need Share = w_N (N/ΣN) + w_P (P/ΣP) (II.5)

where
w_N = ΣN / (ΣN + w_p ΣP), the formula weight applied to the district's share of total state enrollment, and
w_P = w_p ΣP / (ΣN + w_p ΣP), the formula weight applied to the district's share of the state's total number of poor students, and
w_N + w_P = 1.

Dividing both sides of equation II.5 by each district's share of total enrollment (N/ΣN) yields an expression for each district's relative per pupil grant that also serves as an index of student needs. The following equation demonstrates that, in our simple model of student needs, a district's per pupil grant would be linearly related to the proportion of poor students in the district:

g/ḡ = n = w_N + w_P (r/r̄) (II.6)

where
g = a district's per pupil grant (ḡ = the state average per pupil grant)
n = a district's student need index (an index value of 1.0 would indicate that the district's proportion of poor students equals the state's average proportion of poor students)
r = a district's proportion of poor students (P/N); (r̄ = the state average proportion of poor students (ΣP/ΣN)).

Because the implicit cost weight (w_p) depends on the formula weights w_N and w_P in equation II.6, we can solve for the implicit cost weight:

w_p = (w_P / w_N) / r̄ (II.7)

where r̄ = the state average proportion of poor students (ΣP/ΣN). The implicit cost weight associated with poor students (w_p) could be estimated by applying linear regression techniques to the model in equation II.6. However, this would not account for state targeting to poor students that may also offset differences in local tax bases. Consequently, the model in equation II.6 could bias the estimate of the implicit cost weight because it would not control for the effects of tax base targeting that coincide with poor student-based targeting. To obtain an unbiased estimate of the implicit cost weight, we modeled the distribution of state funding using a foundation equalization model like that used in our earlier report.
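The link between the formula weights and the implicit cost weight can be sketched numerically: since w_N = ΣN/(ΣN + w_p ΣP) and w_P = w_p ΣP/(ΣN + w_p ΣP), the ratio w_P/w_N equals w_p times the state's average poor-student proportion. The inputs below are hypothetical:

```python
# Hedged sketch: recover the implicit cost weight w_p from the formula
# weight on poor students (w_P) and the state's average proportion of poor
# students. Uses w_P / w_N = w_p * r_avg, with w_N = 1 - w_P.

def implicit_cost_weight(w_P, r_avg):
    w_N = 1.0 - w_P  # the formula weights sum to one
    return (w_P / w_N) / r_avg

# Hypothetical state: 10 percent of need-based targeting allocated by
# poor-student counts; 20 percent of students are poor.
w_p = implicit_cost_weight(0.10, 0.20)
print(round(w_p, 3))  # 0.556 -> about $.56 extra per poor student per $1
```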
With foundation equalizing grants, states seek to enable all school districts to finance a minimum amount of total funding per pupil (the foundation funding level) with a uniform minimum tax effort. An important implication of this standard is that state funding must be targeted to school districts with low tax bases per student. States can adjust the foundation level to account for differences in geographic and student need-related costs. Our modeling approach is to use the state's average total funding per pupil as a benchmark against which to estimate the foundation funding level that a state's school finance policies can implicitly support. A state's average total funding level represents the highest level of total funding per pupil that a state can support with the amount of both state and local resources it devotes to education (it is impossible to guarantee all districts an above average funding level). Under a foundation equalizing system that supports an implicit foundation level equal to the average total funding per pupil (e), districts would receive a total state equalizing grant (G) according to the following:

G = g_0 e N [c n − a b (v/v̄)] (II.8)

where
G = total state funding in a district
g_0 = a scalar that ensures that the total sum of state funding equals the total amount of state funds available for distribution
N = a district's enrollment count
c = a district's input cost index that reflects geographic differences in, for example, the cost of teachers
n = a district's student need index defined in equation II.6
e = a state's average total funding per pupil (the maximum funding level that can be attained by equalizing all available state and local funding)
a = the locally financed share of total funding
b = the equalization factor (b = 1 signifies maximum equalization and b = 0 implies none)
v = a district's tax base per pupil (v̄ = the state average tax base per pupil).

The state average total funding per pupil serves as a benchmark funding level.
Given this benchmark, the quantity represented by (Ncne) is the dollar amount of total funding a school district needs to finance this benchmark level. The implicit foundation level, that is, the minimum total funding per pupil (expressed in real dollars), is given by g_0 e. Note that this level is a fraction (g_0) of the state average total funding level. In our earlier report, we showed that the scalar g_0 depends on the state financing share (1 − a) and the equalization factor b through the following relationship:

g_0 = (1 − a) / (1 − ab) (II.9)

Notice that the scalar equals 1 if the equalization factor b also equals 1. Thus, the implicit foundation level equals the state average funding level when b = 1 and falls below the state average when b is less than 1. The equalization model can be expressed as a linear regression model by substituting the expression for n in equation II.6 and the expression for g_0 in equation II.9 into equation II.8. Dividing both sides of the resulting equation by an expression for the average state grant per pupil (ḡ = (1 − a)e) and rearranging some terms yields the following regression model:

g/ḡ = β_0 c + β_r c (r/r̄) + β_v (v/v̄) + u (II.10)

where
g = a district's state grant per pupil (ḡ = average state grant per pupil)
r = a district's proportion of poor students (r̄ = the proportion of all students in a state who are poor students)
v = a district's tax base per pupil (v̄ = state average)
c = a district's input cost index that reflects geographic differences in, for example, the cost of teachers
u = an error term reflecting factors unrelated to poor students and tax bases.

Our goal was to estimate state and federal targeting of additional funding to districts on the basis of the number of poor students. This appendix presents the statistical results of estimating the grant targeting model described in appendix II. It presents estimates of grant targeting on the basis of the number of poor students for (1) state funding alone, (2) federal funding alone, and (3) combined state and federal funding.
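The scalar in equation II.9 can be checked numerically; the values of a, b, and the average funding level below are hypothetical:

```python
# Hedged sketch of equation II.9: g_0 = (1 - a) / (1 - a*b), the fraction
# of the state average total funding per pupil that the implicit foundation
# level can reach, given the locally financed share a and equalization
# factor b. All inputs are hypothetical.

def foundation_scalar(a, b):
    return (1.0 - a) / (1.0 - a * b)

# Full equalization (b = 1) supports the state average itself.
assert foundation_scalar(0.5, 1.0) == 1.0

# Partial equalization supports only a fraction of the average.
g_0 = foundation_scalar(0.5, 0.6)   # hypothetical a and b
avg_total_per_pupil = 6000.0        # hypothetical benchmark e
implicit_foundation = g_0 * avg_total_per_pupil
print(round(implicit_foundation, 2))  # 4285.71
```

As the text notes, the scalar (and thus the implicit foundation level) falls below the state average whenever b is less than 1.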
We estimated the grant targeting by including the additional funds that districts received for each poor student from programs that target poor students directly, through compensatory programs, or indirectly, through, for example, programs that target limited-English proficiency students. Appendix IV shows how the grant targeting estimates would change after controlling for some indirect targeting based on other student needs. As described in appendix II, we began by estimating the model summarized in equation II.10, reproduced here as equation III.1, for each of the 47 states included in our analysis:

g/ḡ = β_0 c + β_r c (r/r̄) + β_v (v/v̄) + u (III.1)

where
g = a district's state grant per pupil (ḡ = average state grant per pupil)
r = a district's proportion of poor students (r̄ = the proportion of all students in the state who are poor)
v = a district's tax base per pupil (v̄ = state average)
c = a district's input cost index that reflects geographic differences in, for example, the cost of teachers
u = an error term reflecting factors unrelated to poor students and tax bases.

Given the estimated coefficients in equation III.1, we derived the formula weight w_P from the estimated regression coefficient on poor students. We then calculated the implicit cost weight using equation II.7. The results of these calculations appear in table III.2. The formula weights reported in table III.2 show a wide range of variation in the proportion of state funding allocated on the basis of the number of poor students. In Alabama, for example, state funding was distributed as if 6 percent of the targeting based on student need was allocated on the basis of the number of poor students in a district and 94 percent on the basis of a district's total enrollment. This formula weight is equivalent to providing Alabama school districts with an additional $.27 per poor student for every $1 allocated per student in a district.
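The Alabama figures quoted above are mutually consistent under equation II.7. The statewide poor-student proportion below is back-solved so the numbers reconcile; it is an assumption for illustration, not a figure from the report:

```python
# Hedged sketch: formula weights of 6 percent (poor students) and
# 94 percent (enrollment) imply about $.27 extra per poor student per $1
# per student, given a statewide poor-student proportion near 0.24.
w_P, w_N = 0.06, 0.94
r_avg = 0.2364  # assumed; back-solved so the quoted figures reconcile

implicit_weight = (w_P / w_N) / r_avg
print(round(implicit_weight, 2))  # 0.27
```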
Overall, 43 of the 47 states in our analysis targeted additional funding either directly or indirectly to districts with poor students to some degree. The amount of extra funding that districts received varied substantially by state. On average, districts received an additional $.62 in state funding for each poor student. At the high end, for every $1 of state aid provided for each student, New Hampshire provided an extra $6.69 per poor student; at the low end, four states (Montana, Nevada, New Mexico, and New York) provided no additional funding on the basis of the number of poor students. Figure III.1 shows each state's additional funding per poor student in ranked order. (Figure III.1: Targeting of State Funds to Poor Students, School Year 1991-92; extra funding per poor pupil, in dollars.) To estimate the targeting of federal funds per poor student, we modeled the distribution of federal funding on the basis of total enrollment and the number of poor students as shown in equation II.10. However, a tax base variable was not included in the model because federal programs do not allocate funding on the basis of this factor; the tax base coefficient was therefore constrained to zero, resulting in equation III.2. The dependent variable was the district's federal funding per pupil adjusted for statewide differences in teacher cost. The independent variable was the district's proportion of poor students. Each variable was expressed in index form relative to its respective state average, and the regressions were weighted for a district's enrollment size. The regression coefficients and their associated standard errors and the R squares from our analysis, along with the implicit cost weight associated with federal funding for poor students, appear in table III.3. Federal funding was more targeted to poor students than state funds in 45 of the 47 states.
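The enrollment-weighted regression of the federal funding index on the poverty index (equation III.2) can be sketched in closed form. The data below are hypothetical, and `weighted_ols` is an illustrative helper, not part of the report's tooling:

```python
import numpy as np

def weighted_ols(y, x, w):
    """Weighted least squares of y on a constant and x; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w.astype(float))
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return float(coef[0]), float(coef[1])

# Hypothetical district indices: proportion of poor students and federal
# funding per pupil, each relative to its state average, plus enrollments.
r_index = np.array([0.4, 0.8, 1.0, 1.3, 1.9])
enroll = np.array([1200.0, 900.0, 1500.0, 800.0, 600.0])
g_index = 0.2 + 0.8 * r_index  # constructed so the true slope is 0.8

a, b = weighted_ols(g_index, r_index, enroll)  # slope b is the targeting estimate
```

Because the data are constructed to be exactly linear, the weighted regression recovers the slope of 0.8; with real district data the slope would be an estimate with a standard error, as reported in table III.3.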
On average, districts received an additional $4.73 in federal funding per poor student for every $1 of funding received by each student. This amount compares with an additional $.62 in state funding. The amount of additional federal funding varied widely. At the high end, Alaska provided an additional $9.04 in federal funding; at the low end, West Virginia provided an additional $2.59. In the two states with the highest state targeting effort (Missouri and New Hampshire), federal funds were targeted to poor students but to a lesser extent than state funds. Figure III.2 shows federal targeting in ranked order. (Figure III.2: Targeting of Federal Funding to Poor Students, School Year 1991-92; extra funding per poor pupil, in dollars.) To estimate the targeting of combined state and federal funds, we estimated the model in equation III.1 using combined state and federal funding per pupil, adjusted for statewide differences in the cost of teachers' salaries, as the dependent variable. The two independent variables were a district's proportion of poor students and a district's income per pupil adjusted for statewide differences in the cost of teachers. Because federal funding is not targeted on the basis of income per pupil, we constrained the coefficient of the income per pupil variable to be the same as that in the regression for state funding alone. As before, each variable was expressed in index form relative to its respective state average. All regressions were weighted on the basis of a district's enrollment. The regression coefficients and their associated standard errors from our analysis appear in table III.4, along with the R squares of the model. The formula weight and implicit cost weight associated with combined state and federal funding for poor students appear in table III.5.
(Table III.4: Regression Results Used to Determine Implicit Cost Weight of Combined State and Federal Funding.) Greater federal targeting had the effect of raising the average state targeting from $.62 to an average combined state and federal targeting of $1.10, a 77-percent increase. This increase reflects the fact that the relatively small share of federal funds was highly targeted in most states. Again, states' amounts of combined targeting varied widely, ranging from $7.41 in Missouri to $.27 in West Virginia. The addition of federal funding increased state targeting most in North Dakota ($1.75) and least in Florida ($.13). In three states, the addition of federal funding did not enhance the state's targeting effort. In New Hampshire, the addition of federal funding yielded less combined targeting for a poor student. The other two states (Nevada and New York) did not target poor students, and the addition of the relatively small amount of targeted federal funding did not raise the combined targeting effort above zero. Figure III.3 shows these results in ranked order. A comparison of the implicit cost weights of state funding alone and combined state and federal funding appears in table III.6. (Figure III.3: Targeting of Combined State and Federal Funding to Poor Students, School Year 1991-92; extra funding per poor pupil, in dollars.) Our primary concern was to measure the extent to which states target additional funds to school districts to compensate for the higher cost of educating poor students, while controlling for state policies that target more funds to low tax base districts.
In measuring state targeting to poor students, we wanted to measure the state's targeting effort regardless of whether it was an explicit state policy or the result of targeting other types of student needs that may be correlated with poor students. However, estimating the extent to which states target more funds to districts on the basis of the number of poor students could also be done by holding other types of student needs constant. To measure this type of targeting, we expanded our modeling of student need by including additional student need factors. To account for additional student need factors, we modified equation II.5 to include each district's share of (1) high school students, (2) students with an Individual Education Plan (a measure of pupils with special education needs), (3) the square of student enrollment, and (4) the land area of the school district. Adding these factors, each district's share of total student need (Need Share) would be expressed in equation IV.1. Dividing both sides of equation IV.1 by each district's share of total enrollment would produce the following expanded student need index, in which

n = a district's student need index;
PI = a district's percentage of poor students expressed as a percentage of the state average;
HSI = a district's percentage of high school students expressed as a percentage of the state average;
IEPI = a district's percentage of students with an individual education plan expressed as a percentage of the state average;
NI = a district's share of enrollment squared expressed as a percentage of its share of total enrollment; and
LAI = a district's land area per student expressed as a percentage of the state average.
Substituting equations IV.2 and II.9 into equation II.8 and rearranging terms allows us to express the foundation equalization model with additional student need indicators as equation IV.3. We estimated the model summarized in equation IV.3 for the 47 states included in our analysis using state and combined state and federal funding. Both dependent variables were expressed on a per pupil basis adjusted for statewide differences in teacher costs. The regression coefficients and standard errors for the two variables of interest (the poor student variable and the tax base variable), as well as the R squared and standard error of the model, appear in table IV.1 for the state funding model. Regression results for the combined state and federal funding model appear in table IV.2. The analysis excluded the land area variable because of missing land area data. As table IV.4 shows, the results of the expanded model differ dramatically from those of the model used in appendix III for a number of states. Overall, the implicit weight of state funding in the expanded model increased in 16 states and decreased in 26 states. States with the largest increases in the implicit weight of state funding were Connecticut, Illinois, Massachusetts, New Hampshire, and New York; states with the largest decreases were Alaska, Michigan, Missouri, and New Jersey. Kansas, Maryland, Rhode Island, Tennessee, Texas, and West Virginia also had noteworthy increases; Delaware, Idaho, Iowa, North Dakota, and South Carolina had noteworthy decreases. Five states had the same implicit weight. The median implicit weight declined slightly, from $.62 to $.61. A comparison of the combined state and federal weights reveals a similar pattern.
The same five states had the largest increases in the implicit weight of combined state and federal funding: Connecticut, Illinois, Massachusetts, New Hampshire, and New York; the same four states had the largest decreases: Alaska, Michigan, Missouri, and New Jersey. Kansas, Maryland, Texas, and West Virginia also had noteworthy increases; Iowa and Utah had noteworthy decreases. Overall, the implicit combined weight in the expanded model increased in 17 states and decreased in 28 states. Two states had the same implicit weight. The median implicit weight decreased slightly, from $1.10 to $1.06. One goal of this study was to determine the variability of local funding levels in high- and low-poverty districts and the changes in these funding levels as state and federal funds were added. Because high-poverty districts tend to have low tax bases, such districts typically have lower local funding levels compared with low-poverty districts. States generally distribute state funds on the basis of a district’s educational need and ability to raise local revenue for education, while federal funds are targeted largely to poor and other disadvantaged students. Given these policy features, we expected high-poverty districts to benefit from both state and federal funding and that total funding levels in high- and low-poverty districts would be more equal with the addition of state and then federal funds. This appendix presents the method we used to estimate the size of the local funding gap and the effect that state funds and then federal funds had on closing this gap in each state. It also presents results of the distribution of total funding to districts with the highest and lowest poverty levels. Finally, it discusses factors that affect poverty-related funding gaps. We measured the size of the funding gap by determining the extent to which funding varied in high- and low-poverty districts using the poverty elasticity of funding per weighted pupil. 
The elasticity measures the percentage change in a district's funding level for every 1-percent change in a district's poverty rate, where the district's change in a variable is measured relative to its state average. We determined the poverty elasticity of local funding per weighted pupil and then the changes in poverty elasticity with the addition of state funds and then with the addition of federal funds. A negative poverty elasticity score indicates that high-poverty districts tended to have less funding per weighted pupil than low-poverty districts; a positive elasticity score indicates that high-poverty districts tended to have more funding per weighted pupil than low-poverty districts; and a score of 0 indicates that no systematic differences occurred in funding levels among all districts. We used a linear regression model to estimate the elasticity of a district's funding relative to a district's proportion of poor students. To assess the incremental effects of state funds and then federal funds, we estimated the poverty elasticity of a district's funding in three ways. First, we determined the poverty elasticity of local funding per weighted pupil. We then added state funds and determined the poverty elasticity of the combined state and local funding per weighted pupil. Finally, we added the federal funds to determine the poverty elasticity of total (federal, state, and local) funding per weighted pupil. The dependent variable was one of the three measures of district funding per weighted pupil, and each was adjusted for statewide differences in geographic cost and student need (see app. I). The independent variable was a district's poverty rate. For each regression, the dependent and the independent variables were placed in index form, that is, they were expressed as a percentage of their respective state averages.
We estimated the three poverty elasticities by weighting each observation for membership size to better reflect the distribution of state funding to students rather than to districts; thus, school districts with larger enrollments had a greater effect in determining the estimated coefficients of the model. With these adjustments, a general model for all three regressions took the following form: F = a + bP + e, where F is a district's funding per weighted pupil and P is a district's poverty rate, each expressed as a percentage of its state average. Because both variables are measured relative to their respective state averages, the regression coefficient (b) represents the poverty elasticity score. The error term (e) in the equation reflects the variation in funding per weighted pupil that could not be accounted for by the poverty index variable. In 37 states, high-poverty districts had less local funding per weighted pupil than low-poverty districts (the poverty elasticity was negative). Florida had the largest local funding gap and South Dakota the smallest. In two states (New Mexico and Utah), high-poverty districts had more local funding per weighted pupil (the poverty elasticity was positive). In both of these states, high-poverty districts made a greater tax effort than low-poverty districts. In the remaining eight states, local funding per weighted pupil was not statistically different as poverty rates increased. The addition of state funds reduced the number of states with a funding gap (negative elasticity) from 37 to 30 and the size of the funding gap in the remaining 30 states. Of the states where funding gaps remained, state funding in Florida reduced the gap most, and state funding in New Hampshire reduced it the least. Three states had a positive elasticity, that is, the combined local and state funding per weighted pupil increased as a district's poverty rate increased. The addition of federal funds further reduced the number of states with a funding gap (negative elasticity) from 30 to 21 and the size of the funding gap in the remaining 21 states.
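The three-step elasticity comparison described above can be sketched with hypothetical district data. The regression is enrollment-weighted, and both variables are converted to index form inside the helper; none of the figures below come from the report:

```python
import numpy as np

def poverty_elasticity(funding, poverty, enroll):
    """Enrollment-weighted slope of the funding index on the poverty index.
    Each variable is divided by its enrollment-weighted state average first."""
    f = funding / np.average(funding, weights=enroll)
    p = poverty / np.average(poverty, weights=enroll)
    X = np.column_stack([np.ones_like(p), p])
    W = np.diag(enroll.astype(float))
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ f)
    return float(coef[1])

# Hypothetical per-weighted-pupil funding for five districts of rising poverty.
poverty = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
enroll = np.array([1000.0, 1000.0, 1000.0, 1000.0, 1000.0])
local = np.array([5000.0, 4500.0, 4000.0, 3500.0, 3000.0])
state = np.array([1000.0, 1400.0, 1800.0, 2200.0, 2600.0])
federal = np.array([100.0, 200.0, 300.0, 400.0, 600.0])

e_local = poverty_elasticity(local, poverty, enroll)
e_local_state = poverty_elasticity(local + state, poverty, enroll)
e_total = poverty_elasticity(local + state + federal, poverty, enroll)
```

With these illustrative numbers, the elasticity starts negative for local funding and rises toward zero as state and then federal funds are added, mirroring the narrowing funding gaps reported in table V.1.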
Of the states where funding gaps remained, federal funding in Alabama reduced the gap the most, and federal funding in New Hampshire reduced it least. In six states, high-poverty districts had more total (local, state, and federal) funding per weighted pupil than low-poverty districts (positive elasticity). Table V.1 shows the elasticities of local, local and state, and total funding to district poverty rates and the adjusted R square for each state. Figure 4 (shown earlier in the report) provides the table information in graphic form. Another way to analyze the size of the poverty-related funding gaps is to examine the amount of funding available to districts with the highest and lowest poverty rates. To do this, we grouped each state's student population into five groups. These groups were determined by ranking a state's districts according to increasing proportions of poor students and then dividing these districts into five groups, each with about the same number of students. We defined lowest poverty districts as those districts in the first group and highest poverty districts as those in the fifth group. Normally, each group consisted of about 20 percent of each state's students. Nationwide, the lowest poverty districts had about 114 percent more local funding per weighted pupil than the highest poverty districts. The addition of state funds greatly reduced this funding gap to 25 percent. The addition of federal funding reduced the total funding gap to about 15 percent. Table V.2 summarizes the gaps in total funding per weighted pupil between each state's highest and lowest poverty districts.
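The grouping just described, ranking districts by poverty rate and cutting the cumulative student count into fifths, can be sketched as follows. The districts are hypothetical, and the report's exact handling of districts that straddle a cutoff may differ:

```python
import numpy as np

def poverty_quintiles(poverty, enroll):
    """Assign each district to one of five poverty groups (0 = lowest poverty,
    4 = highest), each holding roughly one-fifth of the state's students."""
    order = np.argsort(poverty)                # districts by rising poverty
    cum = np.cumsum(enroll[order])
    mid = (cum - enroll[order] / 2) / cum[-1]  # each district's midpoint share
    labels_sorted = np.minimum((mid * 5).astype(int), 4)
    labels = np.empty(len(poverty), dtype=int)
    labels[order] = labels_sorted
    return labels

# Hypothetical districts: rising poverty rates, equal enrollments.
poverty = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.15, 0.20, 0.25, 0.30])
enroll = np.full(10, 100)
groups = poverty_quintiles(poverty, enroll)
counts = np.bincount(groups, minlength=5)  # districts per group
```

With equal enrollments, each of the five groups ends up with the same number of districts; with unequal enrollments, group boundaries follow student counts rather than district counts, as in the report.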
Several factors affect the size of the gaps in total funding between high- and low-poverty districts. These include differences in the tax base of high- and low-poverty districts, differences in the tax effort of high- and low-poverty districts, the state and federal shares of total funding, the extent to which state and federal funding is targeted to districts with high poverty rates, and the extent to which state funding is targeted to districts with low tax bases. To estimate the extent to which these factors accounted for the variation in the total funding gap between high- and low-poverty districts, we constructed a regression model that used these factors to explain differences among states' funding gaps. We used the elasticity of local, state, and federal funding reported in table V.1 as the measure of the total funding gap between high- and low-poverty districts. We used the elasticity of local tax bases relative to district poverty rates as the measure of tax base differences and the elasticity of local tax effort relative to districts' poverty rates as the measure of tax effort differences. We also used the combined state and federal share of total funding, the combined state and federal targeting to poor students, and the states' tax base targeting in the model. These factors appear in table V.3. We estimated several versions of the model with state and federal funding shares and poor student targeting entered separately and combined into one variable. The model described in table V.4, which has four variables, all of which were statistically significant, accounted for 57 percent of the variation in funding gaps among states.
The model includes only four variables because when we included all five variables in the analysis, the fifth variable, differences in state tax base targeting, was insignificant. (Table V.3: Factors Affecting Poverty-Related Funding Gaps, School Year 1991-92. In that table, targeting to poor students is the amount of extra state funding per poor student a district received for every dollar of state funding received for each student.) As the beta coefficients in table V.4 show, differences in tax efforts had the greatest effect on reducing the total funding gap: the greater the tax effort of high-poverty districts compared with low-poverty districts, the lower the funding gap. Combined state and federal share of total funding had the second greatest effect on reducing the funding gap: the larger the combined share, the less the funding gap. Differences in tax bases between high- and low-poverty districts had the third greatest effect: the greater the tax base of high-poverty districts compared with low-poverty districts, the lower the gap. Combined state and federal targeting to poor students also had the effect of reducing the gap, although to a lesser extent than the other three variables. Appendixes VII through LIII contain profiles for 47 states. Each profile provides the critical data resulting from our analysis of state and federal targeting to poor students and the effect that state and federal funds had on the funding levels among high- and low-poverty districts. In addition, each profile provides information in tabular and graphic form on the distribution of local, state, and federal funding to regular school districts in school year 1991-92. The profiles show state averages for districts in five groups according to increasing proportions of poor students.
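The beta coefficients referred to above are standardized regression coefficients: each slope rescaled by the ratio of its regressor's standard deviation to the dependent variable's, so that the factors' relative influence can be compared. A sketch with simulated data (the factor values are illustrative, not the report's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated state-level data: four explanatory factors (tax effort elasticity,
# tax base elasticity, combined funding share, poor student targeting) and the
# total funding gap they are meant to explain.
n = 47
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, 0.5, 0.25, 0.1]) + rng.normal(scale=0.2, size=n)

# Ordinary least squares with an intercept.
A = np.column_stack([np.ones(n), X])
coef = np.linalg.lstsq(A, y, rcond=None)[0]
b = coef[1:]

# Beta (standardized) coefficients: rescale each slope by the ratio of its
# regressor's standard deviation to the dependent variable's.
beta = b * X.std(axis=0) / y.std()
```

Ranking the absolute beta values identifies which factor moves the funding gap most per standard deviation of variation, which is how table V.4 orders the four factors.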
For example, the highest poverty group typically contains about 20 percent of a state's students and has the highest proportion of poor students. All funding data in the profiles were adjusted for statewide differences in geographic cost and student need. Data used in the profiles were based mainly on the Department of Education's Common Core of Data (CCD) for school districts for the 1991-92 school year. In some cases, we obtained data directly from state education offices, and we imputed data for a district when the source lacked data. For example, we imputed cost index data for 310 districts, including 18 in Alaska and 72 in New York (see app. I). Funding data included all local, state, and federal revenue for all purposes, including maintenance and operations, transportation, and capital expenditures and debt service. Federal impact aid was considered part of local revenue because states consider federal funding from this program as part of a district's local education resources. The numbers in the profiles' tables may not add due to rounding. (The first table in each profile reports average total funding per weighted pupil; targeting to poor students, that is, the added amount allocated per poor student for every dollar allocated for each student; and the total funding weight, that is, the effect of combined state and federal funding.) As table VII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Alabama averaged $3,696. The localities provided about 27 percent of total funding for education; the state provided about 62 percent; federal funds provided about 11 percent. Alabama's state funding had the effect of providing districts with an additional $.27 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.92 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)
Alabama’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 66 percent to about 19 percent. The addition of federal funding further reduced the funding gap between these groups to about 6 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Alabama, districts with the highest proportions of poor students made a slightly greater effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 101 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table VII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table VII.3 presents data on how local, state, and federal funds were distributed among the five groups of Alabama districts. (Fig. VII.1 provides table information in graphic form.) Table VII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table VIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Alaska averaged $9,054. 
The localities provided about 27 percent of total funding for education; the state provided about 68 percent; federal funds provided about 5 percent. Alaska’s state funding had the effect of providing districts with an additional $1.81 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $2.42 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Alaska’s targeting efforts and state share of total funding more than eliminated the 2-percent local funding gap between the lowest and highest poverty groups. Consequently, the lowest poverty group had about 19 percent less funding than the highest poverty group. The lowest poverty group had about 22 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Alaska, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 240 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table VIII.2 presents demographic data for school year 1991-92 for five groups with districts of increasing proportions of poor students. Table VIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Alaska districts. (Fig. VIII.1 provides table information in graphic form.) Table VIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. 
An Alaska education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Alaska's school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Alaska appears in appendixes III and IV. As table IX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Arizona averaged $4,959. The localities provided about 51 percent of total funding for education; the state provided about 43 percent; federal funds provided about 7 percent. Arizona's state funding had the effect of providing districts with an additional $.50 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.10 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Arizona's targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 55 percent to about 16 percent. The addition of federal funding further reduced the funding gap between these groups to about 5 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)
The size of the local funding gap is partly determined by differences in districts' local tax efforts. In Arizona, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 159 percent of that made in districts in the lowest poverty group. To put the state's school finance system in perspective, table IX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table IX.3 presents data on how local, state, and federal funds were distributed among the five groups of Arizona districts. (Fig. IX.1 provides table information in graphic form.) Table IX.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. An Arizona education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Arizona's school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Arizona appears in appendixes III and IV. As table X.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Arkansas averaged $4,164.
The localities provided about 32 percent of total funding for education; the state provided about 59 percent; federal funds provided about 9 percent. Arkansas’s state funding had the effect of providing districts with an additional $.29 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.76 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Arkansas’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 34 percent to about 8 percent. The addition of federal funding further reduced the funding gap between these groups to the extent that the lowest poverty group had 2 percent less funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Arkansas, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 106 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table X.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table X.3 presents data on how local, state, and federal funds were distributed among the five groups of Arkansas districts. (Fig. X.1 provides table information in graphic form.) Table X.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. 
[Additional table X.2 notes: percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis. Table data not reproduced.] An Arkansas education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Arkansas’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Arkansas appears in appendixes III and IV. [Table XI.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in California averaged $4,902. The localities provided about 29 percent of total funding for education; the state provided about 64 percent; federal funds provided about 7 percent. California’s state funding had the effect of providing districts with an additional $1.15 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.59 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) California’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 177 percent to about 14 percent. The addition of federal funding further reduced the funding gap between these groups to about 5 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)
The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In California, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 94 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XI.3 presents data on how local, state, and federal funds were distributed among the five groups of California districts. (Fig. XI.1 provides table information in graphic form.) Table XI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A California education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in California’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about California appears in appendixes III and IV. [Table XII.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Colorado averaged $5,288. The localities provided about 54 percent of total funding for education; the state provided about 42 percent; federal funds provided about 4 percent.
Colorado’s state funding had the effect of providing districts with an additional $.27 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.57 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Colorado’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 24 percent to about 9 percent. The addition of federal funding further reduced the funding gap between these groups to about 3 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Colorado, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 103 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XII.3 presents data on how local, state, and federal funds were distributed among the five groups of Colorado districts. (Fig. XII.1 provides table information in graphic form.) Table XII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A Colorado education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92.
More information on changes in Colorado’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Colorado appears in appendixes III and IV. [Table XIII.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Connecticut averaged $8,531. The localities provided about 59 percent of total funding for education; the state provided about 37 percent; federal funds provided about 4 percent. Connecticut’s state funding had the effect of providing districts with an additional $1.53 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.89 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Connecticut’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 172 percent to about 27 percent. The addition of federal funding further reduced the funding gap between these groups to about 17 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Connecticut, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 81 percent of that made in districts in the lowest poverty group.
To put the state’s school finance system in perspective, table XIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Connecticut districts. (Fig. XIII.1 provides table information in graphic form.) Table XIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A Connecticut education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Connecticut’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Connecticut appears in appendixes III and IV. [Table XIV.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XIV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Delaware averaged $6,008. The localities provided about 28 percent of total funding for education; the state provided about 65 percent; federal funds provided about 7 percent. Delaware’s state funding had the effect of providing districts with an additional $.38 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.56 per poor student.
(To compare these amounts with those of other states, see table III.6 in app. III.) Delaware’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 65 percent to about 7 percent. The addition of federal funding further reduced the funding gap between these groups to about 4 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Delaware, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 130 percent of the effort made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XIV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XIV.3 presents data on how local, state, and federal funds were distributed among the five groups of Delaware districts. (Fig. XIV.1 provides table information in graphic form.) Table XIV.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A Delaware education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Delaware’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Delaware appears in appendixes III and IV.
[Table XV.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Florida averaged $5,964. The localities provided about 44 percent of total funding for education; the state provided about 49 percent; federal funds provided about 7 percent. Florida’s state funding had the effect of providing districts with an additional $.62 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.75 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Florida’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 58 percent to about 7 percent. The addition of federal funding further reduced the funding gap between these groups to about 3 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Florida, districts with the highest proportions of poor students made slightly less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 99 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XV.3 presents data on how local, state, and federal funds were distributed among the five groups of Florida districts. (Fig. XV.1 provides table information in graphic form.)
Table XV.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A Florida education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Florida’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Florida appears in appendixes III and IV. [Table XVI.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XVI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Georgia averaged $4,688. The localities provided about 42 percent of total funding for education; the state provided about 50 percent; federal funds provided about 8 percent. Georgia’s state funding had the effect of providing districts with an additional $.40 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.81 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Georgia’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 32 percent to about 7 percent.
The addition of federal funding eliminated the funding gap between these groups to the extent that the lowest poverty group had about 3 percent less funding than the highest poverty group. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Georgia, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 122 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XVI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XVI.3 presents data on how local, state, and federal funds were distributed among the five groups of Georgia districts. (Fig. XVI.1 provides table information in graphic form.) Table XVI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A Georgia education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Georgia’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Georgia appears in appendixes III and IV.
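The tax-effort comparisons quoted throughout these state profiles divide local revenue raised by district income, expressed per $1,000 of income, and then state the highest-poverty group's effort as a percent of the lowest-poverty group's. A short sketch with hypothetical revenue and income figures (only the resulting 122 percent ratio is chosen to mirror the Georgia example above):

```python
def tax_effort(local_revenue, district_income):
    # Local dollars raised for every $1,000 of district income.
    return local_revenue / district_income * 1000

# Hypothetical district groups (illustrative figures, not the report's data):
high_poverty_effort = tax_effort(6_100_000, 100_000_000)  # $61 per $1,000
low_poverty_effort = tax_effort(5_000_000, 100_000_000)   # $50 per $1,000

# The comparison statistic used in the profiles: highest-poverty group's
# effort as a percent of the lowest-poverty group's effort.
effort_ratio_pct = round(high_poverty_effort / low_poverty_effort * 100)  # 122
```

Note that effort normalizes by income, so a poorer group can show a higher effort ratio even while raising fewer absolute dollars per pupil.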
[Table XVII.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XVII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Idaho averaged $3,805. The localities provided about 31 percent of total funding for education; the state provided about 62 percent; federal funds provided about 7 percent. Idaho’s state funding had the effect of providing districts with an additional $.66 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.10 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Idaho’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 32 percent to about 7 percent. The addition of federal funding further reduced the funding gap between these groups to about 2 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Idaho, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 90 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XVII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XVII.3 presents data on how local, state, and federal funds were distributed among the five groups of Idaho districts. (Fig. XVII.1 provides table information in graphic form.)
Table XVII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] An Idaho education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Idaho’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Idaho appears in appendixes III and IV. [Table XVIII.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XVIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Illinois averaged $5,295. The localities provided about 63 percent of total funding for education; the state provided about 31 percent; federal funds provided about 6 percent. Illinois’s state funding had the effect of providing districts with an additional $2.01 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $3.08 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Illinois’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 158 percent to about 63 percent. The addition of federal funding further reduced the funding gap between these groups to about 42 percent.
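One purely arithmetic reading of a targeting weight like the combined $3.08 reported for Illinois: if a district is allocated some amount per student, each poor student draws that many additional dollars for every dollar of the per-student allocation. The district and its numbers below are hypothetical, chosen only to make the arithmetic concrete:

```python
def poor_student_supplement(dollars_per_student, weight, n_poor):
    # Extra dollars generated by poor students when each draws `weight`
    # additional dollars for every $1 allocated per student.
    return dollars_per_student * weight * n_poor

# Hypothetical district: $5,000 allocated per student, 400 poor students,
# and the combined state-plus-federal weight of $3.08 discussed above.
extra = poor_student_supplement(dollars_per_student=5000, weight=3.08, n_poor=400)
# 5000 * 3.08 * 400 → roughly $6.16 million in additional funding
```

This is an illustration of the weight's scale, not the state's actual allocation formula, which the report derives from how funds were in fact distributed across poverty groups.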
(To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Illinois, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 131 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XVIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XVIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Illinois districts. (Fig. XVIII.1 provides table information in graphic form.) Table XVIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] An Illinois education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Illinois’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Illinois appears in appendixes III and IV. [Table XIX.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XIX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Indiana averaged $5,248.
The localities provided about 44 percent of total funding for education; the state provided about 52 percent; federal funds provided about 5 percent. Indiana’s state funding had the effect of providing districts with an additional $.78 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.19 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Indiana’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 43 percent to about 14 percent. The addition of federal funding further reduced the funding gap between these groups to about 7 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Indiana, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 110 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XIX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XIX.3 presents data on how local, state, and federal funds were distributed among the five groups of Indiana districts. (Fig. XIX.1 provides table information in graphic form.) Table XIX.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.]
An Indiana education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Indiana’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Indiana appears in appendixes III and IV. [Table XX.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Iowa averaged $5,051. The localities provided about 49 percent of total funding for education; the state provided about 47 percent; federal funds provided about 4 percent. Iowa’s state funding had the effect of providing districts with an additional $.91 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.27 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Iowa’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 34 percent to about 15 percent. The addition of federal funding further reduced the funding gap between these groups to about 11 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Iowa, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 92 percent of that made in districts in the lowest poverty group.
To put the state’s school finance system in perspective, table XX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XX.3 presents data on how local, state, and federal funds were distributed among the five groups of Iowa districts. (Fig. XX.1 provides table information in graphic form.) Table XX.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] An Iowa education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Iowa’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Iowa appears in appendixes III and IV. [Table XXI.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XXI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Kansas averaged $5,240. The localities provided about 54 percent of total funding for education; the state provided about 42 percent; federal funds provided about 5 percent. Kansas’ state funding had the effect of providing districts with an additional $.18 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.52 per poor student. (To compare these amounts with those of other states, see table III.6 in app.
III.) Kansas’ targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 39 percent to about 10 percent. The addition of federal funding further reduced the funding gap between these groups to about 4 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Kansas, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 122 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXI.3 presents data on how local, state, and federal funds were distributed among the five groups of Kansas districts. (Fig. XXI.1 provides table information in graphic form.) Table XXI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: local funding raised for every $1,000 of district income; percent difference (group 1 compared with group 5); federal impact aid is considered part of local funding; not applicable to our analysis.] A Kansas education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Kansas’ school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Kansas appears in appendixes III and IV.
[Table XXII.1 row labels: average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding). Table data not reproduced.] As table XXII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Kentucky averaged $4,174. The localities provided about 27 percent of total funding for education; the state provided about 63 percent; federal funds provided about 11 percent. Kentucky’s state funding had the effect of providing districts with an additional $.59 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.87 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Kentucky’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 104 percent to about 6 percent. The addition of federal funding eliminated the funding gap between these groups to the extent that the lowest poverty group had about 3 percent less funding than the highest poverty group. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Kentucky, districts with the highest proportions of poor students made slightly less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 99 percent of that made in districts in the lowest poverty group.
To put the state’s school finance system in perspective, table XXII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXII.3 presents data on how local, state, and federal funds were distributed among the five groups of Kentucky districts. (Fig. XXII.1 provides table information in graphic form.) Table XXII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Kentucky education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Kentucky’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Kentucky appears in appendixes III and IV. [Table XXIII.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Louisiana averaged $4,397. The localities provided about 34 percent of total funding for education; the state provided about 55 percent; federal funds provided about 11 percent. Louisiana’s state funding had the effect of providing districts with an additional $.14 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.70 per poor student.
(To compare these amounts with those of other states, see table III.6 in app. III.) Louisiana’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 15 percent to about 11 percent. The addition of federal funding further reduced the funding gap between these groups to about 2 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Louisiana, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 126 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Louisiana districts. (Fig. XXIII.1 provides table information in graphic form.) Table XXIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Louisiana education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Louisiana’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Louisiana appears in appendixes III and IV.
[Table XXIV.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXIV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Maine averaged $6,017. The localities provided about 48 percent of total funding for education; the state provided about 47 percent; federal funds provided about 5 percent. Maine’s state funding had the effect of providing districts with an additional $.86 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.43 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Maine’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 31 percent to about 10 percent. The addition of federal funding further reduced the funding gap between these groups to about 4 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Maine, districts with the highest proportions of poor students made slightly less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 99 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXIV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXIV.3 presents data on how local, state, and federal funds were distributed among the five groups of Maine districts. (Fig. XXIV.1 provides table information in graphic form.)
Table XXIV.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Maine education official reported that the state had targeted less funding to high-poverty districts since school year 1991-92. More information on changes in Maine’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Maine appears in appendixes III and IV. [Table XXV.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Maryland averaged $6,349. The localities provided about 57 percent of total funding for education; the state provided about 38 percent; federal funds provided about 5 percent. Maryland’s state funding had the effect of providing districts with an additional $.04 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.38 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Maryland’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 262 percent to about 80 percent. The addition of federal funding further reduced the funding gap between these groups to about 63 percent.
(To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Maryland, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 65 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXV.3 presents data on how local, state, and federal funds were distributed among the five groups of Maryland districts. (Fig. XXV.1 provides table information in graphic form.) Table XXV.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Maryland education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Maryland’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Maryland appears in appendixes III and IV. [Table XXVI.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXVI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Massachusetts averaged $6,601.
The localities provided about 66 percent of total funding for education; the state provided about 29 percent; federal funds provided about 5 percent. Massachusetts’s state funding had the effect of providing districts with an additional $2.98 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $3.60 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Massachusetts’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 99 percent to about 25 percent. The addition of federal funding further reduced the funding gap between these groups to about 14 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Massachusetts, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 88 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXVI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXVI.3 presents data on how local, state, and federal funds were distributed among the five groups of Massachusetts districts. (Fig. XXVI.1 provides table information in graphic form.) Table XXVI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.]
A Massachusetts education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92. More information on changes in Massachusetts’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Massachusetts appears in appendixes III and IV. [Table XXVII.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXVII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Michigan averaged $6,110. The localities provided about 64 percent of total funding for education; the state provided about 32 percent; federal funds provided about 4 percent. Michigan’s state funding had the effect of providing districts with an additional $2.71 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $3.11 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Michigan’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 193 percent to about 37 percent. The addition of federal funding further reduced the funding gap between these groups to about 26 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Michigan, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students.
Specifically, districts in the highest poverty group made a tax effort that was 82 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXVII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXVII.3 presents data on how local, state, and federal funds were distributed among the five groups of Michigan districts. (Fig. XXVII.1 provides table information in graphic form.) Table XXVII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Michigan education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Michigan’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Michigan appears in appendixes III and IV. [Table XXVIII.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXVIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Minnesota averaged $5,872. The localities provided about 45 percent of total funding for education; the state provided about 51 percent; federal funds provided about 4 percent. Minnesota’s state funding had the effect of providing districts with an additional $.96 per poor student for every $1 provided to each student.
When federal funding was added to the state funding, the combined effect provided an additional $1.25 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Minnesota’s targeting efforts and state share of total funding more than eliminated the 25-percent local funding gap between the lowest and highest poverty groups. Consequently, the lowest poverty group had about 5 percent less funding than the highest poverty group. The lowest poverty group had about 9 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Minnesota, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 111 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXVIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXVIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Minnesota districts. (Fig. XXVIII.1 provides table information in graphic form.) Table XXVIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Minnesota education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92.
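The tax-effort comparison in these profiles divides each group’s local revenue by its aggregate income, expressed per $1,000 of income, and then states one group’s effort as a percentage of the other’s. A minimal sketch follows; the dollar totals are invented for illustration, not any state’s actual figures.

```python
def tax_effort(local_revenue: float, district_income: float) -> float:
    """Local funding raised for every $1,000 of district income."""
    return local_revenue / (district_income / 1_000.0)

# Invented group totals (not actual state data):
high_poverty_effort = tax_effort(40_000_000.0, 1_000_000_000.0)  # $40 per $1,000
low_poverty_effort = tax_effort(36_000_000.0, 1_000_000_000.0)   # $36 per $1,000

# Relative effort of the highest-poverty group, as reported in the profiles:
relative = high_poverty_effort / low_poverty_effort * 100.0
print(f"tax effort: {relative:.0f} percent of the lowest-poverty group's")
```

A ratio above 100 percent means high-poverty districts tax themselves harder per dollar of income even when they raise fewer total dollars per pupil, which is why a large local funding gap can coexist with greater local effort.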
More information on changes in Minnesota’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Minnesota appears in appendixes III and IV. [Table XXIX.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXIX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Mississippi averaged $3,386. The localities provided about 30 percent of total funding for education; the state provided about 54 percent; federal funds provided about 16 percent. Mississippi’s state funding had the effect of providing districts with an additional $.22 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.03 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Mississippi’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 34 percent to about 18 percent. The addition of federal funding further reduced the funding gap between these groups to about 1 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Mississippi, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 126 percent of that made in districts in the lowest poverty group.
To put the state’s school finance system in perspective, table XXIX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXIX.3 presents data on how local, state, and federal funds were distributed among the five groups of Mississippi districts. (Fig. XXIX.1 provides table information in graphic form.) Table XXIX.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Mississippi education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Mississippi’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Mississippi appears in appendixes III and IV. [Table XXX.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Missouri averaged $4,272. The localities provided about 52 percent of total funding for education; the state provided about 41 percent; federal funds provided about 7 percent. Missouri’s state funding had the effect of providing districts with an additional $5.97 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $7.41 per poor student.
(To compare these amounts with those of other states, see table III.6 in app. III.) Missouri’s targeting efforts and state share of total funding more than eliminated the 44-percent local funding gap between the lowest and highest poverty groups. Consequently, the lowest poverty group had about 7 percent less funding than the highest poverty group. The lowest poverty group had about 14 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Missouri, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 137 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXX.3 presents data on how local, state, and federal funds were distributed among the five groups of Missouri districts. (Fig. XXX.1 provides table information in graphic form.) Table XXX.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Missouri education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92.
More information on changes in Missouri’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Missouri appears in appendixes III and IV. [Table XXXI.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXXI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Montana averaged $5,260. The localities provided about 54 percent of total funding for education; the state provided about 41 percent; federal funds provided about 5 percent. Montana’s state funding had the effect of providing districts with no additional funding per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.54 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) State funding in Montana had no effect on the 13-percent local funding gap between the lowest and highest poverty groups. The addition of federal funding reduced the funding gap between these groups to about 8 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Montana, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 155 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXXI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students.
Table XXXI.3 presents data on how local, state, and federal funds were distributed among the five groups of Montana districts. (Fig. XXXI.1 provides table information in graphic form.) Table XXXI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Montana education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Montana’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Montana appears in appendixes III and IV. [Table XXXII.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXXII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Nebraska averaged $5,448. The localities provided about 63 percent of total funding for education; the state provided about 33 percent; federal funds provided about 4 percent. Nebraska’s state funding had the effect of providing districts with an additional $.39 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.70 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)
Nebraska’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 8 percent to about 7 percent. The addition of federal funding further reduced the funding gap between these groups to about 3 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Nebraska, districts with the highest proportions of poor students made slightly less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 99 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXXII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXII.3 presents data on how local, state, and federal funds were distributed among the five groups of Nebraska districts. (Fig. XXXII.1 provides table information in graphic form.) Table XXXII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: tax effort is measured as local funding raised for every $1,000 of district income; percent differences compare group 1 with group 5; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Nebraska education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Nebraska’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Nebraska appears in appendixes III and IV.
[Table XXXIII.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXXIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Nevada averaged $3,810. The localities provided about 41 percent of total funding for education; the state provided about 54 percent; federal funds provided about 5 percent. Nevada’s state funding had the effect of providing districts with no additional funding per poor student for every $1 provided to each student. The addition of federal funding had no effect on the amount of additional funding per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) State funding in Nevada more than eliminated the 43-percent local funding gap between the lowest and highest poverty groups. Consequently, the lowest poverty group had about 13 percent less funding than the highest poverty group. The lowest poverty group had about 14 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Nevada, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 104 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XXXIII.2 presents demographic data for school year 1991-92 for four groups of districts with increasing proportions of poor students. Table XXXIII.3 presents data on how local, state, and federal funds were distributed among the four groups of Nevada districts. (Fig. XXXIII.1 provides table information in graphic form.)
Table XXXIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92. [Table data not reproduced. Table notes: poverty rates are in percent; percent differences compare group 1 with group 4; federal impact aid is considered part of local funding; some entries are footnoted as not applicable to the analysis.] A Nevada education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Nevada’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Nevada appears in appendixes III and IV. [Table XXXIV.1 (data not reproduced) reports average total funding per weighted pupil, targeting to poor students (the added amount allocated per poor student for every dollar allocated for each student), and the total funding weight (the effect of combined state and federal funding).] As table XXXIV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in New Hampshire averaged $6,028. The localities provided about 89 percent of total funding for education; the state provided about 8 percent; federal funds provided about 3 percent. New Hampshire’s state funding had the effect of providing districts with an additional $6.69 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $5.50 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) New Hampshire’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 49 percent to about 35 percent. The addition of federal funding further reduced the funding gap between these groups to about 32 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)
The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In New Hampshire, districts with the highest proportions of poor students made a slightly greater effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 101 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XXXIV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXIV.3 presents data on how local, state, and federal funds were distributed among the five groups of New Hampshire districts. (Fig. XXXIV.1 provides table information in graphic form.)

[Table XXXIV.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XXXIV.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A New Hampshire education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in New Hampshire’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about New Hampshire appears in appendixes III and IV.

[Table XXXV.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XXXV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in New Jersey averaged $9,605.
The localities provided about 55 percent of total funding for education; the state provided about 41 percent; federal funds provided about 4 percent. New Jersey’s state funding had the effect of providing districts with an additional $3.45 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $4.03 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

New Jersey’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 194 percent to about 22 percent. The addition of federal funding further reduced the funding gap between these groups to about 13 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In New Jersey, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 116 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XXXV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXV.3 presents data on how local, state, and federal funds were distributed among the five groups of New Jersey districts. (Fig. XXXV.1 provides table information in graphic form.)

[Table XXXV.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XXXV.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]
A New Jersey education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in New Jersey’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about New Jersey appears in appendixes III and IV.

[Table XXXVI.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XXXVI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in New Mexico averaged $4,353. The localities provided about 15 percent of total funding for education; the state provided about 75 percent; federal funds provided about 10 percent. New Mexico’s state funding had the effect of providing districts with no additional funding per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.28 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

State funding in New Mexico increased the local funding gap between the lowest and highest poverty groups from about 1 percent to about 9 percent. The addition of federal funding reduced the funding gap between these groups to about 2 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In New Mexico, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 333 percent of that made in districts in the lowest poverty group.
To put the state’s school finance system in perspective, table XXXVI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXVI.3 presents data on how local, state, and federal funds were distributed among the five groups of New Mexico districts. (Fig. XXXVI.1 provides table information in graphic form.)

[Table XXXVI.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XXXVI.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A New Mexico education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in New Mexico’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about New Mexico appears in appendixes III and IV.

[Table XXXVII.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XXXVII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in New York averaged $8,233. The localities provided about 54 percent of total funding for education; the state provided about 40 percent; federal funds provided about 5 percent. New York’s state funding had the effect of providing districts with no additional funding per poor student for every $1 provided to each student. The addition of federal funding had no effect on the amount of additional funding per poor student.
(To compare these amounts with those of other states, see table III.6 in app. III.)

State funding in New York reduced the local funding gap between the lowest and highest poverty groups from about 213 percent to about 44 percent. The addition of federal funding further reduced the funding gap between these groups to about 34 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In New York, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 72 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XXXVII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXVII.3 presents data on how local, state, and federal funds were distributed among the five groups of New York districts. (Fig. XXXVII.1 provides table information in graphic form.)

[Table XXXVII.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XXXVII.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A New York education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92. More information on changes in New York’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about New York appears in appendixes III and IV.
[Table XXXVIII.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XXXVIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in North Carolina averaged $4,780. The localities provided about 30 percent of total funding for education; the state provided about 63 percent; federal funds provided about 7 percent. North Carolina’s state funding had the effect of providing districts with an additional $.53 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.05 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

North Carolina’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 81 percent to about 16 percent. The addition of federal funding further reduced the funding gap between these groups to about 7 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In North Carolina, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 93 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XXXVIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXVIII.3 presents data on how local, state, and federal funds were distributed among the five groups of North Carolina districts. (Fig.
XXXVIII.1 provides table information in graphic form.)

[Table XXXVIII.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XXXVIII.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A North Carolina education official reported that the state targeted more funding to high-poverty districts as of school year 1996-97. More information on changes in North Carolina’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about North Carolina appears in appendixes III and IV.

[Table XXXIX.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XXXIX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in North Dakota averaged $4,467. The localities provided about 48 percent of total funding for education; the state provided about 44 percent; federal funds provided about 8 percent. North Dakota’s state funding had the effect of providing districts with an additional $.78 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $2.53 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

North Dakota’s targeting efforts and state share of total funding more than eliminated the 2-percent local funding gap between the lowest and highest poverty groups. Consequently, the lowest poverty group had about 5 percent less funding than the highest poverty group.
The lowest poverty group had about 12 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In North Dakota, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 153 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XXXIX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XXXIX.3 presents data on how local, state, and federal funds were distributed among the five groups of North Dakota districts. (Fig. XXXIX.1 provides table information in graphic form.)

[Table XXXIX.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XXXIX.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A North Dakota education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in North Dakota’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about North Dakota appears in appendixes III and IV.
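The “targeting to poor students” measure used throughout these profiles (added dollars allocated per poor student for every $1 allocated per student) can be illustrated with a small least-squares fit. The district data below are synthetic and chosen to fit the model exactly; this is a sketch of the idea, not the report’s actual estimation procedure:

```python
# Synthetic district data (hypothetical, for illustration only): each
# district's state funding is modeled as a base amount per student
# plus a supplement per poor student.
districts = [
    # (students, poor students, state funding in dollars)
    (1000, 100, 3_200_000),
    (2000, 600, 7_200_000),
    (1500, 300, 5_100_000),
    (800,  400, 3_200_000),
]

# Solve the 2x2 normal equations of the least-squares fit
# funds ~ base*students + supplement*poor.
sxx = sum(s * s for s, p, f in districts)
sxp = sum(s * p for s, p, f in districts)
spp = sum(p * p for s, p, f in districts)
sxf = sum(s * f for s, p, f in districts)
spf = sum(p * f for s, p, f in districts)

det = sxx * spp - sxp * sxp
base = (sxf * spp - spf * sxp) / det
supplement = (spf * sxx - sxf * sxp) / det

# Targeting weight as reported per state: added dollars per poor
# student for every $1 allocated per student.
print(f"targeting weight: {supplement / base:.2f}")  # 0.67 with this data
```

With this data the fitted base is $3,000 per student and the supplement $2,000 per poor student, so the targeting weight comes out to about $0.67 per poor student for every $1 per student.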
[Table XL.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XL.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Ohio averaged $4,984. The localities provided about 55 percent of total funding for education; the state provided about 40 percent; federal funds provided about 5 percent. Ohio’s state funding had the effect of providing districts with an additional $1.48 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $2.19 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

Ohio’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 96 percent to about 27 percent. The addition of federal funding further reduced the funding gap between these groups to about 15 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Ohio, districts with the highest proportions of poor students made slightly less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 98 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XL.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XL.3 presents data on how local, state, and federal funds were distributed among the five groups of Ohio districts. (Fig. XL.1 provides table information in graphic form.)
[Table XL.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XL.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

An Ohio education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Ohio’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Ohio appears in appendixes III and IV.

[Table XLI.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XLI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Oklahoma averaged $3,929. The localities provided about 28 percent of total funding for education; the state provided about 66 percent; federal funds provided about 7 percent. Oklahoma’s state funding had the effect of providing districts with an additional $.76 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.09 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

Oklahoma’s targeting efforts and state share of total funding more than eliminated the 19-percent local funding gap between the lowest and highest poverty groups. Consequently, the lowest poverty group had about 2 percent less funding than the highest poverty group.
The lowest poverty group had about 8 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Oklahoma, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 112 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XLI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLI.3 presents data on how local, state, and federal funds were distributed among the five groups of Oklahoma districts. (Fig. XLI.1 provides table information in graphic form.)

[Table XLI.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XLI.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

An Oklahoma education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Oklahoma’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Oklahoma appears in appendixes III and IV.
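Tax effort, defined in the table notes as local funding raised for every $1,000 of district income, drives the relative-effort percentages quoted for each state. A minimal sketch with hypothetical group totals, chosen here to reproduce a 112 percent relative effort like the Oklahoma figure above:

```python
# Hypothetical group totals (not the report's data): local school
# funding raised and aggregate district income, in dollars.
groups = {
    "lowest poverty":  {"local_funding": 45_000_000, "income": 1_500_000_000},
    "highest poverty": {"local_funding": 33_600_000, "income": 1_000_000_000},
}

def tax_effort(g):
    """Local funding raised per $1,000 of district income."""
    return g["local_funding"] / g["income"] * 1000

lo = tax_effort(groups["lowest poverty"])    # $30.00 per $1,000
hi = tax_effort(groups["highest poverty"])   # $33.60 per $1,000

# Relative effort as reported: the highest-poverty group's effort as
# a percentage of the lowest-poverty group's.
print(f"relative tax effort: {hi / lo * 100:.0f}%")  # 112%
```

Because effort is normalized by income, a high-poverty group can show greater effort (over 100 percent) even while raising fewer local dollars per pupil, which is exactly the pattern several of these states display.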
[Table XLII.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XLII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Oregon averaged $5,411. The localities provided about 65 percent of total funding for education; the state provided about 29 percent; federal funds provided about 6 percent. Oregon’s state funding had the effect of providing districts with an additional $1.57 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $2.32 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

Oregon’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 43 percent to about 17 percent. The addition of federal funding further reduced the funding gap between these groups to about 12 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Oregon, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 119 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XLII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLII.3 presents data on how local, state, and federal funds were distributed among the five groups of Oregon districts. (Fig. XLII.1 provides table information in graphic form.)
[Table XLII.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XLII.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

An Oregon education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Oregon’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Oregon appears in appendixes III and IV.

[Table XLIII.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XLIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Pennsylvania averaged $6,709. The localities provided 54.5 percent of total funding for education; the state provided 41 percent; federal funds provided 4.5 percent. Pennsylvania’s state funding had the effect of providing districts with an additional $1.31 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.89 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

Pennsylvania’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 110 percent to about 32 percent. The addition of federal funding further reduced the funding gap between these groups to about 20 percent.
(To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Pennsylvania, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 86 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XLIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Pennsylvania districts. (Fig. XLIII.1 provides table information in graphic form.)

[Table XLIII.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XLIII.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A Pennsylvania education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92. More information on changes in Pennsylvania’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Pennsylvania appears in appendixes III and IV.

[Table XLIV.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XLIV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Rhode Island averaged $6,244.
The localities provided about 58 percent of total funding for education; the state provided about 37 percent; federal funds provided about 4 percent. Rhode Island’s state funding had the effect of providing districts with an additional $.23 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.42 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

Rhode Island’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 87 percent to about 28 percent. The addition of federal funding further reduced the funding gap between these groups to about 21 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Rhode Island, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 84 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XLIV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLIV.3 presents data on how local, state, and federal funds were distributed among the five groups of Rhode Island districts. (Fig. XLIV.1 provides table information in graphic form.)

[Table XLIV.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XLIV.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]
A Rhode Island education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92. More information on changes in Rhode Island’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Rhode Island appears in appendixes III and IV.

[Table XLV.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XLV.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in South Carolina averaged $4,509. The localities provided about 44 percent of total funding for education; the state provided about 48 percent; federal funds provided about 9 percent. South Carolina’s state funding had the effect of providing districts with an additional $.21 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.66 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.)

South Carolina’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 52 percent to about 17 percent. The addition of federal funding further reduced the funding gap between these groups to about 6 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.)

The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In South Carolina, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students.
Specifically, districts in the highest poverty group made a tax effort that was 106 percent of that made in districts in the lowest poverty group.

To put the state’s school finance system in perspective, table XLV.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLV.3 presents data on how local, state, and federal funds were distributed among the five groups of South Carolina districts. (Fig. XLV.1 provides table information in graphic form.)

[Table XLV.2, Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92, and table XLV.3 (data not reproduced). Table notes: local funding raised for every $1,000 of district income; percent difference compares group 1 with group 5; federal impact aid is considered part of local funding; items marked “not applicable” were outside our analysis.]

A South Carolina education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in South Carolina’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about South Carolina appears in appendixes III and IV.

[Table XLVI.1 (data not reproduced): average total funding per weighted pupil; targeting to poor students (added amount allocated per poor student for every dollar allocated for each student); total funding weight (effect of combined state and federal funding).]

As table XLVI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in South Dakota averaged $4,217. The localities provided about 66 percent of total funding for education; the state provided about 26 percent; federal funds provided about 8 percent.
South Dakota’s state funding had the effect of providing districts with an additional $1.30 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $2.51 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) South Dakota’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 28 percent to about 9 percent. The addition of federal funding further reduced the funding gap between these groups to about 1 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In South Dakota, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 140 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XLVI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLVI.3 presents data on how local, state, and federal funds were distributed among the five groups of South Dakota districts. (Fig. XLVI.1 provides table information in graphic form.) Table XLVI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A South Dakota education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. 
More information on changes in South Dakota’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about South Dakota appears in appendixes III and IV. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table XLVII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Tennessee averaged $3,699. The localities provided about 48 percent of total funding for education; the state provided about 42 percent; federal funds provided about 10 percent. Tennessee’s state funding had the effect of providing districts with an additional $.31 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.16 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Tennessee’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 5 percent to about 3 percent. The addition of federal funding eliminated the funding gap between these groups, resulting in the lowest poverty group having about 7 percent less funding than the highest poverty group. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. 
In Tennessee, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 125 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XLVII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLVII.3 presents data on how local, state, and federal funds were distributed among the five groups of Tennessee districts. (Fig. XLVII.1 provides table information in graphic form.) Table XLVII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A Tennessee education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92. More information on changes in Tennessee’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Tennessee appears in appendixes III and IV. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table XLVIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Texas averaged $4,946. 
The localities provided about 49 percent of total funding for education; the state provided about 44 percent; federal funds provided about 7 percent. Texas’ state funding had the effect of providing districts with an additional $.39 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.58 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Texas’ targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 140 percent to about 11 percent. The addition of federal funding further reduced the funding gap between these groups to about 1 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Texas, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 115 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table XLVIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLVIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Texas districts. (Fig. XLVIII.1 provides table information in graphic form.) Table XLVIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. 
A Texas education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Texas’ school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Texas appears in appendixes III and IV. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table XLIX.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Utah averaged $3,408. The localities provided about 38 percent of total funding for education; the state provided about 56 percent; federal funds provided about 6 percent. Utah’s state funding had the effect of providing districts with an additional $.02 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.59 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) The lowest poverty group in Utah had about 29 percent less local funding than the highest poverty group. State funding reduced this funding gap to about 8 percent. The lowest poverty group had about 11 percent less funding after the addition of federal funding. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Utah, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 123 percent of that made in districts in the lowest poverty group. 
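The tax-effort comparison used throughout these state profiles reduces to simple arithmetic: local funding raised per $1,000 of district income in the highest-poverty group of districts, expressed as a percentage of the same measure in the lowest-poverty group. The sketch below illustrates that ratio; the dollar figures are invented for the example, not Utah's actual data.

```python
# Hypothetical illustration of the tax-effort ratio used in these profiles;
# the dollar figures are invented, not actual district data.

def tax_effort(local_funding: float, district_income: float) -> float:
    """Local funding raised for every $1,000 of district income."""
    return local_funding / (district_income / 1000)

# Assume highest-poverty districts raise $41 per $1,000 of income and
# lowest-poverty districts raise $33.30 per $1,000 of income.
highest = tax_effort(local_funding=41.0, district_income=1000.0)
lowest = tax_effort(local_funding=33.3, district_income=1000.0)

# Expressed the way the report does: highest-poverty effort as a
# percentage of lowest-poverty effort.
ratio_pct = 100 * highest / lowest
print(round(ratio_pct))  # 123
```

A result above 100 indicates, as in most of the states profiled here, that high-poverty districts taxed themselves relatively harder than low-poverty districts.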
To put the state’s school finance system in perspective, table XLIX.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table XLIX.3 presents data on how local, state, and federal funds were distributed among the five groups of Utah districts. (Fig. XLIX.1 provides table information in graphic form.) Table XLIX.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A Utah education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Utah’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Utah appears in appendixes III and IV. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table L.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Virginia averaged $5,021. The localities provided about 61 percent of total funding for education; the state provided about 34 percent; federal funds provided about 5 percent. Virginia’s state funding had the effect of providing districts with an additional $.93 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.29 per poor student. 
(To compare these amounts with those of other states, see table III.6 in app. III.) Virginia’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 98 percent to about 30 percent. The addition of federal funding further reduced the funding gap between these groups to about 20 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Virginia, districts with the highest proportions of poor students made less effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 91 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table L.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table L.3 presents data on how local, state, and federal funds were distributed among the five groups of Virginia districts. (Fig. L.1 provides table information in graphic form.) Table L.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A Virginia education official reported that the state had targeted much more funding to high-poverty districts since school year 1991-92. More information on changes in Virginia’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. 
Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Virginia appears in appendixes III and IV. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table LI.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Washington averaged $5,604. The localities provided about 24 percent of total funding for education; the state provided about 71 percent; federal funds provided about 5 percent. Washington’s state funding had the effect of providing districts with an additional $.70 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.11 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Washington’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 46 percent to about 7 percent. The addition of federal funding further reduced the funding gap between these groups to about 1 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Washington, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 123 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table LI.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. 
Table LI.3 presents data on how local, state, and federal funds were distributed among the five groups of Washington districts. (Fig. LI.1 provides table information in graphic form.) Table LI.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A Washington education official reported that the state had targeted more funding to high-poverty districts since school year 1991-92. More information on changes in Washington’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Washington appears in appendixes III and IV. Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table LII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in West Virginia averaged $5,332. The localities provided about 25 percent of total funding for education; the state provided about 67 percent; federal funds provided about 8 percent. West Virginia’s state funding had the effect of providing districts with an additional $.09 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $.27 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) 
West Virginia’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 68 percent to about 13 percent. The addition of federal funding further reduced the funding gap between these groups to about 9 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In West Virginia, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 114 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table LII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table LII.3 presents data on how local, state, and federal funds were distributed among the five groups of West Virginia districts. (Fig. LII.1 provides table information in graphic form.) Table LII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A West Virginia education official reported that the state had not targeted more funding to high-poverty districts since school year 1991-92. More information on changes in West Virginia’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about West Virginia appears in appendixes III and IV. 
Average total funding per weighted pupil Targeting to poor students (added amount allocated per poor student for every dollar allocated for each student) Total funding weight (effect of combined state and federal funding) As table LIII.1 shows, in school year 1991-92, total funding (local, state, and federal funding combined) per weighted pupil in Wisconsin averaged $6,124. The localities provided about 52 percent of total funding for education; the state provided about 44 percent; federal funds provided about 4 percent. Wisconsin’s state funding had the effect of providing districts with an additional $1.20 per poor student for every $1 provided to each student. When federal funding was added to the state funding, the combined effect provided an additional $1.55 per poor student. (To compare these amounts with those of other states, see table III.6 in app. III.) Wisconsin’s targeting efforts and state share of total funding reduced the local funding gap between the lowest and highest poverty groups from about 75 percent to about 17 percent. The addition of federal funding further reduced the funding gap between these groups to about 11 percent. (To compare the total funding gap with those of other states, see table V.2 in app. V. For the funding gap results using a regression analysis, see table V.1.) The size of the local funding gap is partly determined by differences in districts’ local tax efforts. In Wisconsin, districts with the highest proportions of poor students made more effort to raise local revenue than districts with the lowest proportions of poor students. Specifically, districts in the highest poverty group made a tax effort that was 102 percent of that made in districts in the lowest poverty group. To put the state’s school finance system in perspective, table LIII.2 presents demographic data for school year 1991-92 for five groups of districts with increasing proportions of poor students. Table LIII.3 presents data on how local, state, and federal funds were distributed among the five groups of Wisconsin districts. (Fig. LIII.1 provides table information in graphic form.) 
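The "targeting to poor students" weight quoted for each state expresses the added amount allocated per poor student for every $1 allocated for each student generally. As a rough illustration of that computation (the district figures below are invented for the example, not drawn from GAO's data):

```python
# Illustrative computation of the per-poor-student targeting weight;
# the figures are invented for the example, not actual state data.

def targeting_weight(extra_funds_to_poor: float,
                     n_poor_students: int,
                     base_per_student: float) -> float:
    """Added dollars allocated per poor student for every $1 allocated
    for each student generally."""
    return (extra_funds_to_poor / n_poor_students) / base_per_student

# Suppose a state distributes $6 million in compensatory aid across
# districts enrolling 1,000 poor students, on top of a $5,000-per-student
# base allocation.
weight = targeting_weight(6_000_000, 1_000, 5_000)
print(weight)  # 1.2, i.e., an additional $1.20 per poor student
```

Under these assumed numbers the weight matches Wisconsin's reported $1.20; a weight of zero would mean the state's system provided no additional funding per poor student.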
Table LIII.2: Demographic Information for Districts of Increasing Proportions of Poor Students, School Year 1991-92 Local funding raised for every $1,000 of district income. Percent difference (group 1 compared with group 5) Federal impact aid is considered part of local funding. Not applicable to our analysis. A Wisconsin education official reported that the state had targeted less funding to high-poverty districts since school year 1991-92. More information on changes in Wisconsin’s school finance system made between 1991-92 and 1995-96 and such changes in other states appears in table LIV.1. Information on changes in federal funding between 1991-92 and 1994-95 appears in table LV.1. Additional technical information about Wisconsin appears in appendixes III and IV. In this report, we relied on funding data from the 1991-92 school year. However, many states have subsequently changed their school finance systems in response to legal challenges or to equity concerns. We telephoned officials in the 47 states to determine what changes had taken place in the school finance systems from school years 1991-92 through 1995-96. We specifically asked about changes in targeting that would affect districts with high proportions of poor students and changes in a state’s share of education funding. These two factors affect the size of the funding gap between low- and high-poverty districts—in general, the greater the targeting to high-poverty districts or the greater the state share, or both, the lower the funding gap. We did not verify the state officials’ statements. Relatively few states reported increased targeting to high-poverty districts. Education officials in 19 states reported not targeting high-poverty districts at all, 10 states reported no change in targeting to high-poverty districts, and 2 states reported changes that would result in high-poverty districts receiving less state funding. 
The remaining 16 states reported making changes that would provide more funds to high-poverty districts. Fewer states had increased the state share of total funding significantly. Officials in 36 states reported that their state’s share had a net increase or decrease of 5 percentage points or less, and 3 states reported a decrease of 6 percentage points or more. Officials in the remaining eight states reported an increase in the state share of 6 percentage points or more. Table LIV.1 summarizes our findings of the changes states have made. Change in state share (percentage points) Did not specifically target high-poverty districts in either school year Change as of school year 1993-94. Change as of school year 1994-95. In this report, we relied on state, local, and federal funding data from the 1991-92 school year. Federal regulations or legislation since 1991-92, however, may have changed targeting to districts. We telephoned officials in the Departments of Education, Agriculture, Health and Human Services, and the Interior and reviewed relevant documents to determine what regulatory or legislative changes, if any, to the major federally funded elementary and secondary schools programs may have resulted in more or less federal funds being targeted to poor students. The federal government targeted more funding to poor students in the 1995-96 school year than in the 1991-92 school year, according to federal officials, due to changes in title I legislation and regulations. Title I, the largest federal education program, provides funding for disadvantaged students. Changes effective as of July 1995 were expected to provide more title I funding to high-poverty districts through increased targeting. In addition, other federal education programs allocate funds on the basis of title I formulas. 
For example, vocational education grants are partially based on title I funding formulas. Consequently, vocational education funding has also increased in high-poverty districts. Federal government programs supporting children with disabilities under the Individuals With Disabilities Education Act made changes in 1997 that are expected to result in targeting more funding to poor students. Funding patterns remained relatively unchanged in many other federal programs. Federal officials for the Head Start, bilingual education, Indian education, and child nutrition programs cited no regulatory or legislative changes since 1991-92 that would affect targeting to poor students. Table LV.1 summarizes the federal funding provided to the states in school years 1991-92 and 1994-95. These figures include impact aid as part of the totals (we excluded federal impact aid in our analysis of federal targeting). The federal percentages in the table are based on total funding amounts from public sources (private funding is excluded from total funding). Total federal funding (thousands) In addition to those named above, the following individuals made important contributions to this report: Jerry Fastrup (202-512-7211) conceived the model for determining the poor student cost weights and helped write the report; Barbara A. Billinghurst (206-287-4867) helped develop and apply the model and write the report; Nancy Purvine and Virginia Vanderlinde contacted state officials about changes in funding patterns; and Alicia Moos researched changes in federal funding patterns. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed state and federal efforts to target poor students and the funding gaps between districts with high and low proportions of poor students, focusing on: (1) the extent to which state and federal funding is targeted to districts on the basis of the number of poor students; and (2) the effect of state and federal funding on the amount of funds available to high-poverty compared with low-poverty districts. GAO noted that: (1) school finance systems in over 90 percent of the states had the effect of targeting more state funds to districts with large numbers of poor students in school year 1991-92, regardless of whether the system explicitly intended to do so; (2) the extent of the targeting varied widely, however; (3) New Hampshire targeted poor students the most, providing an additional $6.69 per poor student for every $1 provided to each student; school finance systems in four states (Montana, Nevada, New Mexico, and New York) had the effect of targeting no additional funding per poor student; (4) the national average was $.62 in additional state funding; (5) federal funding was more targeted than state funding, providing an average of $4.73 in additional federal funding per poor student nationwide for every $1 provided for each student; (6) because federal funds were more targeted than state funds, the combination of federal and state funding increased the average additional funding per poor student from $.62 to $1.10 nationwide for every $1 provided for each student; (7) reported changes in federal education programs and state school finance systems since school year 1991-92 would probably result in federal funds being more targeted than state funds; (8) state and federal funding reduced but did not eliminate the local funding gap between high- and low-poverty districts in many states; (9) high-poverty districts had less local funding per weighted pupil in 37 of the 47 states GAO analyzed; (10) when GAO added state and 
federal funds to local funds for GAO's analysis, only 21 states still had such funding gaps, and these gaps were smaller in each state; (11) nevertheless, about 64 percent of the nation's poor students live in these 21 states; (12) nationwide, total funding levels in low-poverty districts were about 15 percent more than those in high-poverty districts; (13) although targeting helped close the funding gap, the percentage of total funding from state and federal sources was more important in reducing the gap; and (14) gaps were smaller in states whose combined state and federal share of total funding was relatively high.
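The funding-gap percentages cited throughout this report compare funding per weighted pupil in the lowest-poverty group of districts with that in the highest-poverty group. A minimal sketch of that arithmetic, using hypothetical per-pupil amounts rather than GAO's actual figures:

```python
# Sketch of the funding-gap arithmetic described in this report.
# All dollar figures below are hypothetical, not GAO's actual data.

def funding_gap(low_poverty_per_pupil: float, high_poverty_per_pupil: float) -> float:
    """Percent by which the lowest-poverty group's funding per weighted
    pupil differs from the highest-poverty group's (positive means the
    low-poverty group is better funded)."""
    return 100 * (low_poverty_per_pupil - high_poverty_per_pupil) / high_poverty_per_pupil

# Local funds only: low-poverty districts well ahead.
local_gap = funding_gap(5200, 3420)        # about 52 percent
# After state and federal funds are added, the gap narrows sharply.
total_gap = funding_gap(4700, 4430)        # about 6 percent
print(round(local_gap), round(total_gap))  # 52 6
```

A negative result would indicate that the lowest-poverty group ended up with less total funding than the highest-poverty group, as the report notes occurred in some states once federal funds were added.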
The demonstration program affords the major agencies that provide the public with recreational opportunities on federal land—the Park Service, Bureau of Land Management, and Fish and Wildlife Service (all within the Department of the Interior), and the Forest Service (within the Department of Agriculture)—the opportunity to collect new and increased fees. Each agency can experiment with new or increased fees at up to 100 sites. By September 1998, such fees were in place at 312 sites—100 administered by the Park Service, 77 by the Fish and Wildlife Service, 68 by the Bureau of Land Management, and 67 by the Forest Service. The four agencies reported that, because of the program, their combined recreational fee revenues have nearly doubled, from about $93 million in fiscal year 1996 (the last year before the demonstration program was implemented) to about $180 million in fiscal year 1998. The Park Service collected 80 percent of the fee revenue in fiscal year 1998, the Forest Service 15 percent, the Bureau of Land Management 3 percent, and the Fish and Wildlife Service about 2 percent. Visitation appears largely unaffected by the new and increased fees, according to surveys conducted by the four agencies. In fiscal year 1997, visitation at the demonstration sites increased overall by 5 percent compared with 4 percent at other sites. Effects varied somewhat from location to location. Of the 206 sites in the demonstration program in fiscal year 1997, 58 percent had increases in visitation, 41 percent had decreases, and 1 percent were unchanged. However, with data from only 1 year, it is difficult to draw definitive conclusions, either about the lack of a negative effect on visitation at most sites or about whether fees had an impact at sites where visitation declined. Now, I would like to discuss several areas in which we think improvements can be made to the demonstration program. 
The demonstration program was authorized with the expectation that the four agencies would coordinate their fee collection efforts, both among themselves and with state and local agencies, where it made sense to do so. During our review, we did find examples of such coordination, with demonstrated benefits for the public. In Utah, for example, where the Park Service’s Timpanogos Cave National Monument is surrounded by a recreation area in the Forest Service’s Uinta National Forest, the two agencies decided to charge a single entrance fee for both. Such coordination can reduce agencies’ operating costs, strengthen resource management activities, and provide more agency personnel to assist visitors. We also found, however, that agencies were not taking full advantage of this flexibility. For example, the Park Service and the Fish and Wildlife Service manage sites with a common border on the same island in Maryland and Virginia. The two sites are Assateague Island National Seashore and Chincoteague National Wildlife Refuge. Administratively, the two agencies cooperate on law enforcement matters and run a joint permit program for off-road vehicles, and the Park Service provides staff to operate and maintain a ranger station and bathing facilities on refuge land. However, when the agencies selected the two sites for the demonstration program, they decided to charge separate, nonreciprocal entrance fees of $5 per vehicle. Officials at the refuge told us that visitors are sometimes confused by this lack of reciprocity. Our report discusses other cases in which greater coordination among the agencies would either improve the service to the public or permit greater efficiency in implementing a fee program. These cases included (1) backcountry fees in Olympic National Park and Olympic National Forest in Washington State, and (2) a proposed fee at Park Service and BLM lands located in the El Malpais area of New Mexico. 
Demonstration sites may be reluctant to coordinate fees partly because the program’s incentives are geared towards increasing their revenues. By contrast, because joint fee arrangements may potentially reduce revenues to specific sites, there may be a disincentive among these sites to coordinate. However, at sites such as Assateague and Chincoteague, the increase in service to the public may be worth a small reduction in revenues. That is why our report recommends that the agencies perform a site-by-site review of their demonstration sites to identify opportunities for greater coordination. In commenting on our report, the agencies generally agreed that more could be done in this area. The demonstration program also encouraged the four agencies to be innovative in setting and collecting their own fees. Such improvements take two main forms: making it as convenient as possible for visitors to pay and making fees more equitable. We found many examples of agencies experimenting with ways to make payment more convenient, including selling entrance passes using machines like automated tellers, selling hiking permits over the Internet, and selling entrance or user permits through vendors such as gas stations, grocery stores, and convenience stores. However, we found fewer examples of the agencies experimenting with different pricing structures that could make the fees more equitable, such as basing fees on (1) the extent of use or (2) whether the visit occurred during a peak visitation period. Most of the experiments with pricing have been done by the Forest Service or the Bureau of Land Management. These two agencies have experimented with setting fees that vary on the basis of (1) how long the visitor will stay or (2) whether the visit occurs during a peak period (such as a weekend) or an off-peak period (such as midweek or during the off season). For example, a 3-day visit to a recreational area might cost $3 per car, compared with $10 per car for a 2-week visit. 
Such pricing has resulted in greater equity to the visitors, in that visitors who use the area for greater lengths of time pay higher fees. It would appear to have broader applicability in the other agencies as well. By contrast, the Park Service has done little to experiment with different pricing structures. Visitors generally pay the same fee whether they are visiting during a peak period (such as a weekend in the summer) or an off-peak period (such as midweek during the winter) or whether they are staying for several hours or several days. A more innovative fee system would make fees more equitable for visitors and might change visitation patterns somewhat to enhance economic efficiency and reduce overcrowding and its effects on parks’ resources. Furthermore, according to the four agencies, reducing visitation during peak periods can lower the costs of operating recreation sites by reducing (1) the staff needed to operate a site, (2) the size of facilities, (3) the need for maintenance and future capital investments, and (4) the extent of damage to a site’s resources. Because it was one of the goals of the program, and because it could result in more equitable fees to the public, our report recommends that two agencies—the Park Service and Fish and Wildlife Service—look for further opportunities to experiment and innovate with new and existing fees. The demonstration program required the agencies to spend at least 80 percent of the fee revenues at the site where these revenues were generated. However, some demonstration sites are generating so much revenue as to raise questions about their long-term ability to spend these revenues on high-priority items. By contrast, sites outside the demonstration program, as well as demonstration sites that do not collect much in fee revenues, may have high-priority needs that remain unmet. As a result, some of the agencies’ highest-priority needs may not be addressed. 
For many sites in the demonstration program—particularly in the Park Service—the increased fee revenues equal 20 percent or more of the sites’ annual operating budgets. This large amount of new revenue allows such sites to address past unmet needs in maintenance, resource protection, and visitor services. The Park Service has set a priority on using fee revenues to address its repair and maintenance needs. Some sites with high fee revenues may be able to address these needs within a few years. However, the 80-percent requirement could, over time, preclude the agencies from using fee revenues for more pressing needs at other sites. Two of the sites we visited—Zion and Shenandoah National Parks—are examples of how this issue may surface in the near future. At Zion, park officials told us that the park expected to receive so much new fee revenue in fiscal year 1998 (about $4.5 million) that the park’s budget would be doubled. The park’s current plans call for using this additional money to begin a $20 million alternative transportation system. However, park officials said that if for some reason this particular project did not move forward, they might have difficulty preparing and implementing enough projects to use the available funds in a manner consistent with the program’s objectives. At Shenandoah, fee revenues for fiscal year 1998 were expected to be about $2.9 million—enough money, the park superintendent said, to eliminate the park’s estimated $15 million repair and maintenance backlog in a few years. This is a significant and sensitive issue that involves balancing important features of the program. Specifically, the increased efficiency that would be achieved by allowing the agencies more spending flexibility needs to be balanced with the continued need to demonstrate to the visitors that improvements are being made with the new or increased fees and the need to maintain incentives to collect fees. 
Our report stated that as the Congress decides on the future of the fee demonstration program, it might wish to consider modifying the current requirement. Providing some further flexibility in the spending of fee revenues would give agencies more opportunities to address their highest-priority needs among all of their field units. If this is not done, undesirable inequities could occur within agencies if and when the current legislation is made permanent. At the same time, however, any change in the requirement would need to be done in such a way that (1) fee-collecting sites would continue to have an incentive to collect fees and (2) visitors who pay the fees would continue to support the program. Visitor surveys show that putting fees to work where they are collected is a popular idea. Through the first 2 fiscal years of the program, the Park Service retained about $182 million in recreational fee revenue, which represents over 80 percent of the total amount of revenue generated by all four of the participating agencies. However, by the end of fiscal year 1998, the agency had obligated only about $56 million, or about 31 percent, of this revenue. This spending rate was by far the lowest among the four agencies participating in the program. Specifically, by the end of fiscal year 1998, the Forest Service had spent about 63 percent of its revenues; the Fish and Wildlife Service, about 56 percent; and the Bureau of Land Management, about 72 percent. (See app. I for more specific revenue and spending information.) In order to understand why the rate of spending in the Park Service is so far behind the other agencies, we visited four parks. For these parks, the percentage of available revenue that was obligated through September 30, 1998, varied from a low of 10 percent at Golden Gate National Recreation Area to a high of 48 percent at Olympic National Park. 
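The spending rates cited above follow from simple arithmetic on retained revenue versus obligations. As a minimal illustration (the dollar figures are those stated in this testimony; the helper function is illustrative, not a GAO or agency tool):

```python
def obligation_rate(obligated_millions, retained_millions):
    """Percentage of retained fee revenue that has been obligated to date."""
    return 100 * obligated_millions / retained_millions

# Park Service: about $56 million obligated of about $182 million retained
# through the end of fiscal year 1998.
rate = obligation_rate(56, 182)
print(f"Park Service obligation rate: {rate:.0f}%")  # roughly 31 percent
```

The same calculation applied to the other three agencies' retained revenues yields the 56 to 72 percent rates noted above.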
The other two parks we visited were Rocky Mountain National Park and Grand Canyon National Park, which obligated 41 and 20 percent, respectively. In total, these parks had proposed 101 projects for funding under the demonstration fee program. Projects at the four parks ranged from the planning and construction of major facilities, like a visitor transportation and orientation center at the south rim of the Grand Canyon for $18 million, to small projects, like the rehabilitation of trail signs in Golden Gate for $11,000. They also included other projects, like the replacement of outhouses and campground rehabilitation. Our work indicates that there are two main factors that have contributed to the Park Service’s low rate of spending over the program’s first 2 years. These factors are that (1) the project review and approval process has delayed the start of construction and maintenance projects and (2) the capacity of the agency to handle the large number of projects planned under the program is limited. The large size of some of the projects being funded by the demonstration fee program also contributes to slowing the agency’s spending rate. In 1997, this Subcommittee and others heavily criticized the Park Service because of spending abuses involving an outhouse costing over $300,000 at one park and employee residences costing over $500,000 at another. In response to these criticisms, and in order to avoid similar abuses in the future, the Park Service and the Department of the Interior are paying particularly close attention to how individual park units are using the revenues provided by the demonstration fee program. Park Service headquarters officials review all projects approved by regions before individual parks are permitted to proceed with construction. As of March 1998, the Park Service also required an additional review by top agency officials of all projects costing $500,000 or more. 
Furthermore, another level of review was added when Department of the Interior officials decided that they too would review all of the projects that the Park Service proposed for its recreational fee revenue. The implementation of this Departmental-level review added more time to the project approval process. Adding these layers of review specifically for Park Service projects helps explain why the rate of spending for the agency has been the lowest among the agencies participating in the program. However, in light of the spending abuses noted earlier, in our opinion, the additional Park Service and Departmental reviews appear prudent. In contrast, the projects being done by the other three agencies—the Bureau of Land Management, the Fish and Wildlife Service, and the Forest Service—have not had these additional levels of scrutiny. In those agencies, determining how the fee money will be spent has been left to on-site and regional managers. Not surprisingly, the spending rates for these other agencies have been substantially higher than for the Park Service. Another factor limiting the pace of the Park Service’s spending relative to the other agencies in the program has been the agency’s ability to handle the large volume of projects that are now in the pipeline. All of the parks we visited have had substantial funding increases in recent years to help them address maintenance and other needs. These increases were due not only to the increased funding made available from the demonstration fee program but also to appropriated funds such as those for repair and rehabilitation and line-item construction projects. This large inflow of funding from a variety of sources has, according to some park managers we interviewed, exceeded their ability to get projects initiated and completed. At most of the parks and regional offices we visited, officials said there was a bottleneck of projects that were both approved and funded but waiting to be initiated. 
For example, Golden Gate National Recreation Area has 14 projects costing about $4.7 million that have been approved as part of the fee demonstration program. Managers at the site said they have spent little to date on these projects because the current staff cannot prepare plans and manage the large volume of projects now funded. Two of the other parks we visited, Rocky Mountain and Grand Canyon, have similar explanations about why their spending was relatively slow. Another factor that has some impact on the spending rate for the Park Service is the large scale of some of the projects being undertaken by the agency. Some parks must accumulate a substantial amount of funds before they can proceed with these large projects. For example, while Grand Canyon has very high revenue under the program, over $20 million annually, it also has some of the largest projects planned, like a new, multimillion-dollar visitor orientation and transportation center. To begin this project, the park has had to set aside millions of dollars during the first years of the program in order to fund the construction contracts for the new facility in later years. Setting funds aside for later use has the effect of lowering the rate of expenditures in the initial years of the program. Given the substantial increase in funding that the Park Service will receive under the demonstration fee program, now more than ever the agency will have to be accountable for demonstrating its accomplishments in improving the maintenance of Park Service facilities with these additional resources. The agency cannot now do this. The Park Service will need to develop more accurate and reliable information on its deferred maintenance needs (as well as its other park operating needs) and to track progress in addressing them. In administering its recreational fee demonstration program, the Park Service decided that using the revenue to address its maintenance needs is a high priority. 
However, during hearings before this Subcommittee last year, we reported that the Park Service did not have a common definition of what should be included in its backlog of maintenance needs and did not have a routine, systematic process for determining these needs. As a result, the agency was unable to provide us with a reliable estimate of its deferred maintenance needs. At the same hearing, Interior’s Assistant Secretary for Policy, Management, and Budget made several commitments to address these problems. The commitments were to (1) establish common definitions for deferred maintenance and other key maintenance and construction terms; (2) develop improved data collection processes for accumulating data about annual and deferred maintenance needs, among other things; (3) provide guidance for preparing a 5-year priority maintenance and construction plan for the fiscal year 2000 budget; and (4) issue instructions for reporting deferred maintenance in agency financial statements. To date, the Department of the Interior has made some progress in meeting these commitments. In February 1998, common definitions were developed for deferred maintenance. The Department has also provided guidance for the agencies to use to develop priority maintenance plans. In addition, the Department has issued instructions on how agencies should report deferred maintenance in their financial statements. These are all positive steps that should, if implemented properly, help the Park Service as well as other Interior agencies manage their maintenance activities. Nonetheless, the Park Service still does not have accurate information on its maintenance needs. This is evident from a February 1998 Interior report, which states, among other things, that the deferred maintenance needs of Interior agencies, including the Park Service, have never been adequately documented. 
To remedy this situation, Interior and its agencies, including the Park Service, are beginning to develop a maintenance management system that can generate consistent maintenance data for all Interior agencies. Interior expects to identify the systems needed to generate better maintenance data by June 1999. However, this is just a first step. Interior and its agencies are also in the process of obtaining better information on the condition of their facilities. Any data improvements resulting from this effort will likely be several years away. The Congress has attempted to help the Park Service address its deferred maintenance and other program needs in recent years by providing additional appropriations and revenue from the recreational fee program. Given this substantial increase in funds, the Park Service needs to be held accountable for demonstrating what is being accomplished with these financial resources. To date, however, the agency is not yet able to determine how much these additional funds are helping because it does not know the size of the problem. Accordingly, while we and others have frequently reported on the deteriorating conditions of the agency’s facilities, until accurate, reliable, and useful data are developed about the size and scope of the agency’s maintenance needs, the agency will be unable to determine how much progress is being made to address these needs, and resolution of the deferred maintenance problem will continue to elude the agency. In closing, Mr. Chairman, while our testimony today has focused on improvements that could be made to the fee demonstration program, it is important to remember that this program appears to be working well and meeting many of the law’s intended objectives. So far, the demonstration program has brought over $200 million in additional revenue to recreation areas across the country with no apparent impact on visitation patterns. 
It has created opportunities for the agencies—particularly the Park Service—to address, and in some cases resolve, their past unmet repair and maintenance needs. There are now more than 2 years remaining in this demonstration program. These 2 years represent an opportunity for the agencies to further the program’s goals by coordinating their efforts more, developing innovative fee structures, and understanding the reactions of the visitors. This concludes my statement. I would be happy to answer any questions you or the other Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the National Park Service's Recreational Fee Demonstration Program, focusing on the: (1) rate at which the Park Service spends revenue collected under the program in comparison with the Fish and Wildlife Service, the Bureau of Land Management, and the Forest Service; and (2) impact of the fee program on the Park Service's maintenance needs. GAO noted that: (1) the overall message about the demonstration program is positive; (2) the program is providing hundreds of millions of dollars to improve visitor services and address the backlogs of unmet needs in the four land management agencies; (3) in addition, those who pay the fees generally support the program, and it does not appear to have adversely affected visitation rates; (4) nonetheless, it is appropriate to focus on several areas in which changes or improvements may be needed; (5) specifically, these issues include the need for greater coordination of fees by the agencies, greater innovation, and flexibility in revenue distribution; (6) these issues are important because the demonstration program is still at a stage where experimentation is encouraged; (7) most of GAO's observations relate to doing just that--experimenting more to determine what works best; (8) regarding the Park Service, GAO found that the agency's spending of demonstration program revenue has lagged substantially behind the other three agencies in the first 2 years of the program; (9) this has been due primarily to the larger number and scale of Park Service projects and the additional scrutiny these projects are receiving within the agency and the Department of the Interior; (10) further, the Park Service has not yet developed accurate and reliable information on its total deferred maintenance needs; and (11) until this is done, determining the impact that the revenue from the fee program is having on these needs is not possible.
The DI and SSI programs are the two largest federal programs providing assistance to people with disabilities. DI is the nation’s primary source of income replacement for workers with disabilities who have paid Social Security taxes and are entitled to benefits. The DI program also pays benefits to disabled dependents of disabled, retired, or deceased workers—disabled adult children and disabled widows and widowers. SSI provides assistance to disabled people who have a limited or no work history and whose income and resources are below specified amounts. State disability determination service (DDS) agencies, which are funded by SSA, decide whether individuals applying for DI or SSI benefits are disabled. Federal laws specify those who must receive CDRs. The 1980 amendments to the Social Security Act require that SSA review at least every 3 years the status of DI beneficiaries whose disabilities are not permanent to determine their continuing eligibility for benefits. The law does not specify the frequency of the required reviews for beneficiaries with permanent disabilities. The Social Security Independence and Program Improvements Act of 1994 requires that SSA conduct CDRs on one-third of the SSI beneficiaries who reach age 18 and a minimum of 100,000 additional SSI beneficiaries annually in fiscal years 1996 through 1998. The 1996 amendments to the Social Security Act require that SSA conduct CDRs (1) at least every 3 years for children under age 18 who are likely to improve or, at the option of the Commissioner, who are unlikely to improve and (2) on low-birth-weight babies within their first year of life. The 1996 legislation also requires disability eligibility redeterminations, instead of CDRs, for all 18-year-olds beginning on their 18th birthdays, using adult criteria for disability. 
State DDS agencies set the frequency of CDRs for each beneficiary according to his or her outlook for medical improvement, which is determined on the basis of impairment and age. Beneficiaries expected to improve medically, classified as “medical improvement expected” (MIE), are scheduled for review at 6- to 18-month intervals; beneficiaries classified as “medical improvement possible” (MIP) are scheduled for review at least once every 3 years; and those classified as “medical improvement not expected” (MINE) are scheduled for review once every 5 to 7 years. For almost a decade, because of budget and staffing reductions and competing priorities, SSA has been unable to conduct all the DI CDRs required by the Social Security Act. Moreover, the agency has conducted relatively few elective SSI CDRs. (See tables III.1 and III.2 for numbers of previous CDRs conducted and CDR funding.) In 1996, the Congress authorized about $3 billion for CDRs for fiscal years 1996 through 2002. In addition, SSA plans to earmark over $1 billion in its administrative budget for CDRs during that same time period. The DI and SSI programs have about 4.3 million beneficiaries due or overdue for a CDR in fiscal year 1996. About 2.5 million of these reviews are required by law, including about 2.4 million DI CDRs and 118,000 SSI CDRs. SSA is authorized, but not required by law, to conduct the remaining CDRs. As shown in table 1, about half of all beneficiaries are awaiting CDRs, the largest category of which is disabled workers receiving DI benefits. SSA calculated a smaller number of CDRs due or overdue: about 1.4 million for DI beneficiaries and 1.6 million for SSI beneficiaries. It excluded from its calculation DI worker beneficiaries aged 59 and older, disabled widows and widowers and disabled adult children of DI worker beneficiaries, and SSI beneficiaries aged 59 and older. 
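The scheduling rules by medical-improvement classification described above amount to a simple lookup. A minimal sketch (the interval ranges are those stated in this testimony; the table and function names are illustrative, not SSA's actual system):

```python
# CDR review intervals, in months, keyed on the three medical-improvement
# classifications described in the text. A lower bound of 0 for MIP reflects
# "at least once every 3 years" (i.e., no stated minimum interval).
REVIEW_INTERVAL_MONTHS = {
    "MIE": (6, 18),    # medical improvement expected: every 6 to 18 months
    "MIP": (0, 36),    # medical improvement possible: at least once every 3 years
    "MINE": (60, 84),  # medical improvement not expected: once every 5 to 7 years
}

def next_review_window(classification):
    """Return the (earliest, latest) months until the next scheduled CDR."""
    return REVIEW_INTERVAL_MONTHS[classification]

print(next_review_window("MINE"))  # (60, 84)
```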
SSA officials acknowledged that CDRs are required for all of the DI beneficiaries it has excluded, but stated that, because of the backlog, the agency is focusing its attention on the portions of the CDR population that it estimates are more cost-effective to review. In general, DI worker beneficiaries and adult SSI beneficiaries in the backlog have similar characteristics, and SSA estimates a low likelihood of benefit termination as a result of medical improvement. On average, DI worker beneficiaries and adult SSI beneficiaries have been receiving benefits for over 9 years, and their predominant disability is mental disorders. While both groups are middle-aged, the average adult SSI beneficiary is about 9 years younger than the average DI worker beneficiary. In addition, the average estimated likelihood of benefit termination for DI and SSI MIE and MIP beneficiaries under age 60 is less than 5 percent. More data on DI and SSI characteristics are provided in tables IV.1 through IV.12. SSA uses two types of CDRs, a full medical CDR and a mailer CDR, to review beneficiaries’ status. The full medical CDR process is labor-intensive and generally involves (1) one of 1,300 SSA field offices to determine whether the beneficiary is engaged in any substantial gainful activity (SGA) and (2) one of 54 state DDS agencies to determine whether the beneficiary continues to be disabled, a step that frequently involves examination of the beneficiary by at least one medical doctor. Beginning in 1993, questionnaires—called mailer CDRs—replaced full medical CDRs for some beneficiaries to increase the cost-effectiveness of the CDR process. SSA also developed statistical formulas for estimating the likelihood of medical improvement and subsequent benefit termination based on computerized beneficiary information such as age, impairment, length of time on the disability rolls, and date of last CDR. 
For beneficiaries for whom application of the formulas indicates a relatively low likelihood of benefit termination, SSA uses a mailer CDR; when the formula application indicates a relatively high likelihood of benefit termination, SSA uses a full medical CDR. For those who receive mailer CDRs, SSA takes an additional step to determine whether responses to a mailer CDR, when combined with data used in the formulas, indicate that medical improvement may have occurred; in this small number of cases, the beneficiary is also given a full medical CDR. Individuals who have responded to a mailer CDR and are found to be still disabled are not referred for full medical CDRs, and SSA sets a future CDR date. Currently, SSA estimates that the average cost of a full medical CDR is about $1,000, while the average cost of a mailer CDR is between about $25 and $50. (See app. II for more details on the steps in the CDR process.) SSA does not include all DI and SSI beneficiaries in its selection process. SSA limits its selection process to those beneficiary categories it considers cost-effective to review on the basis of their potential for medical improvement. Approximately one-half of the DI and SSI beneficiaries currently due for CDRs are included in SSA’s process for estimating the likelihood of benefit termination through the use of statistical formulas; these estimates are the basis of selection for CDRs. Adult beneficiaries that SSA includes in its selection process are DI worker and SSI beneficiaries under age 59 who have been classified as MIEs or MIPs. SSA currently excludes MINE beneficiaries, beneficiaries aged 59 and older, and disabled adult children and disabled widows and widowers of DI worker beneficiaries from its estimation process because it considers these categories not cost-effective to review. 
While SSA considers some SSI child beneficiaries cost-effective to review, children are currently selected for CDRs without the use of formulas to estimate the likelihood of benefit termination. (See fig. 1 and table III.4.) The development and use of formulas reflect SSA’s effort to make the CDR process more cost-effective by using the estimates to identify beneficiaries who should receive a mailer CDR and those who should receive a full medical CDR. However, SSA acknowledges that the formulas are not useful for estimating the likelihood of benefit termination for most beneficiaries in this process. The formulas are primarily useful for identifying beneficiaries who SSA estimates are most or least likely to have their benefits terminated from a CDR. For individuals who fall in the middle category—which constitutes the majority of beneficiaries included in the estimation process—the formulas provide less accurate estimates, according to SSA. At this time, SSA does not select for CDRs any beneficiaries from this middle group because it is unable to determine whether a mailer or a full medical CDR is most appropriate for these beneficiaries. According to SSA, if it conducted mailer CDRs on the middle group, this would likely result in more beneficiaries being subsequently referred for full medical CDRs than the agency can accommodate in its budget. Similarly, if it conducted full medical CDRs on the middle group, it would be using a higher-cost process than SSA believes is necessary for some in this group. (See fig. 2 and table III.5.) Consequently, SSA selects a portion of the beneficiaries with the highest and lowest estimated likelihood of benefit termination for full medical and mailer CDRs, respectively. SSA has not developed statistical formulas to use in selecting SSI child and 18-year-old beneficiaries for CDRs. 
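The triage logic described above, in which formula estimates route beneficiaries to a mailer CDR, a full medical CDR, or neither, can be sketched as follows. This is a simplified illustration only: the thresholds and function name are hypothetical, and SSA's actual statistical formulas are not reproduced here; only the approximate per-review costs come from this report.

```python
# Approximate average costs per review, as stated in the text.
FULL_MEDICAL_COST = 1000   # labor-intensive review involving field office and DDS
MAILER_COST = 40           # questionnaire by mail; roughly $25 to $50

def select_cdr_type(termination_likelihood, high=0.20, low=0.05):
    """Route a beneficiary based on the formula-estimated likelihood
    of benefit termination. Thresholds here are hypothetical."""
    if termination_likelihood >= high:
        return "full medical CDR"   # high likelihood: worth the costlier review
    if termination_likelihood <= low:
        return "mailer CDR"         # low likelihood: inexpensive questionnaire
    return "deferred"               # middle group: not currently selected

print(select_cdr_type(0.30))  # full medical CDR
print(select_cdr_type(0.02))  # mailer CDR
print(select_cdr_type(0.10))  # deferred
```

The cost asymmetry explains the dilemma the report describes for the middle group: mailers there would trigger more full medical referrals than the budget can absorb, while full medical CDRs would be unnecessarily expensive for some.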
According to SSA, it selected low-birth-weight babies for CDRs of children for fiscal year 1996 because historically about 40 percent of this category have benefits terminated as a result of a CDR. Selecting low-birth-weight babies for CDRs is also consistent with CDR requirements that take effect in fiscal year 1997. For 18-year-old SSI beneficiaries in fiscal year 1996, SSA selected a judgmental sample classified as either MIE or MIP who had characteristics associated with a high likelihood of benefit termination. For fiscal year 1996, all reviews of child and 18-year-old SSI beneficiaries are to be full medical CDRs. Recognizing the need to improve the current process, SSA plans to expand and enhance its procedures for selecting beneficiaries for CDRs and conducting the reviews. Furthermore, SSA told us that these planned process improvements will limit the extent to which SSA can conduct the planned number of CDRs and reduce the CDR backlog. SSA plans to include more beneficiary categories in its selection process by expanding the use of the statistical formulas for certain MINE-classified beneficiaries and children and enhancing the formulas. Beginning in fiscal year 1997, according to SSA, formulas will be used for those beneficiaries who are classified as MINEs because they are older rather than because of their impairment. SSA also plans to develop formulas to use for children receiving SSI beginning in about fiscal year 1998. According to SSA, postponing the development of formulas for SSI child beneficiaries will allow the agency to integrate this process improvement with the knowledge it will gain about impairments that afflict children as a result of the new requirement to conduct CDRs for children in the SSI program beginning in fiscal year 1997. SSA also plans to pursue two approaches for the collection of medical treatment information about beneficiaries. 
First, SSA plans to obtain Medicare and Medicaid data and integrate the data into the statistical formulas to increase the validity of the estimated likelihood of benefit termination. SSA expects that the additional information will allow it to better determine the appropriateness of either a mailer or a full medical CDR for beneficiaries with estimates of benefit termination in the middle range. Second, SSA plans to develop a new type of CDR that would be conducted by mail to obtain current information about a beneficiary’s disability and treatment. Unlike the current mailer CDR, the new type of CDR would collect information directly from beneficiaries’ physicians and other medical treating sources. This information will be combined with computerized beneficiary data to help identify the beneficiaries in the middle range who are most likely to be no longer disabled and therefore warrant full medical CDRs. In the past year, new legislation has increased authorized funding for CDRs to about $3 billion by 2002, but has also required CDRs for some SSI beneficiaries for whom the reviews were previously elective. Because SSA has not finished incorporating the new CDR requirements into its plans, it is too early to determine whether the authorized funding will be adequate for all required CDRs. However, the beneficiary categories SSA excluded from its fiscal year 1996 backlog estimates, the process improvements SSA must complete before it can conduct a greater number of CDRs, and other challenges all make it uncertain whether SSA will be current with required CDRs within 7 years. Funding for CDRs from all sources could exceed $4 billion by 2002. The bulk of the funding for CDRs is authorized by the Contract With America Advancement Act of 1996, which authorized about $2.7 billion between 1996 and 2002. While the funding is primarily for DI CDRs, a portion can be used for SSI CDRs. 
Most recently, the 1996 amendments to the Social Security Act authorized a total of about $250 million for SSI CDRs and medical eligibility redeterminations in fiscal years 1997 and 1998. For the first time in 1996, SSA designated $200 million of its administrative budget to be used solely to conduct CDRs. By comparison, SSA spent almost $69 million to conduct CDRs in fiscal year 1995. SSA expects to continue to earmark moneys in future budgets at the same level as fiscal year 1996. (See table III.2 for SSA’s CDR spending in past years.) SSA’s plan to conduct CDRs on 8,182,300 beneficiaries between 1996 and 2002 is ambitious. The plan, as of August 1, 1996, called for SSA to conduct nearly twice as many CDRs as it has conducted over the past 20 years combined. If the plan is fully implemented, SSA will conduct the CDRs for DI worker beneficiaries under age 59, the beneficiary category the plan defines as constituting the DI backlog. In addition, it will conduct about 350,000 SSI CDRs required under the Social Security Independence and Program Improvements Act of 1994 and about 2 million additional elective SSI CDRs. (See table III.6 for the number of full medical and mailer CDRs SSA plans to conduct.) SSA’s plan reflects increased authorizations from the Contract With America Advancement Act but does not yet account for the increased authorizations or increased CDRs and related work required by the 1996 amendments to the Social Security Act. SSA’s estimate of the size of the DI CDR backlog in fiscal year 1996 excludes about 848,000 beneficiaries, composed of disabled widows and widowers, disabled adult children, and workers aged 59 and older. SSA officials acknowledge that CDRs are required for these beneficiaries, but SSA has excluded them from the plan because it focuses on those categories SSA considers more cost-effective to review. 
In addition, an SSA official said that a large number of beneficiaries in the excluded categories are expected to leave the program because either they will die or convert to retirement benefits before SSA can conduct their CDRs. However, SSA has not estimated the proportion of excluded categories who may leave the program, nor does it include in its plan beneficiaries in these categories who will come due for CDRs in fiscal years 1997 through 2002. Process improvements are critical to SSA’s ability to implement the portion of the plan that relies on the mailer CDR, a component whose use is planned to triple in fiscal year 1998. SSA’s success with the mailer CDR will rely on yet-to-be-tried improvements. Although plans to expand the formulas to more beneficiary categories and collect medical treatment information appear promising, some improvements are in the earliest stages of development with only about 1 year available for completion. Thus, SSA will need to develop these initiatives more quickly than it did previous improvements. The integration of Medicare and Medicaid data into the formulas used to estimate the likelihood of benefit termination, and the development of a new type of CDR that collects information from physicians and other medical treating sources, are expected to allow SSA to begin conducting CDRs on beneficiaries with an estimated benefit termination in the middle range. SSA said that it currently is unable to determine whether the beneficiaries with estimates in the middle range should have a full medical CDR or a mailer CDR. Without that ability, SSA cannot determine the most cost-effective type of CDR to use, and its planned expansion of the use of the mailer CDR will be in jeopardy. 
SSA faces a variety of other challenges to the implementation of its plan and the elimination of the backlog of required CDRs: First, SSA must incorporate into its workload SSI CDRs and disability eligibility redeterminations required by the 1996 amendments to the Social Security Act. These requirements include performing CDRs once every 3 years for children under 18 years old who are likely to medically improve and for all low-birth-weight babies by their first birthday. This law also requires SSA to conduct disability eligibility redeterminations on all child beneficiaries who turn 18 years old, within 1 year of their birthday, and for between 300,000 and 400,000 children who qualified for SSI under individualized functional assessments (IFA). These reviews of children would take precedence over required CDRs and may shift resources away from other CDRs. The law also changes SSI eligibility for legal aliens who have not resided in this country for 5 years before receiving benefits, necessitating CDRs of the beneficiaries to determine continuing eligibility. Second, other recent legislation poses a competing priority. The law eliminates drug and alcohol abuse as a basis for receiving disability benefits; as a result, benefits will terminate for many of an estimated 196,000 DI and SSI beneficiaries whose primary impairments are drug abuse and/or alcoholism. SSA expects many of those terminated to reapply on the basis of other impairments, thus increasing SSA’s workload of initial claims for benefits. Previous increases in initial claims adversely affected the number of CDRs conducted as resources were shifted away from that activity to process initial applications. Third, SSA’s plan includes doing CDRs for many of the estimated 3.7 million SSI beneficiaries whose CDRs may be conducted at SSA’s discretion. 
While conducting these discretionary SSI reviews may be warranted largely because relatively few SSI CDRs have been conducted in the past, it shifts resources away from conducting required DI reviews. Fourth, the daunting effort to gear up for the unprecedented CDR workload will include negotiations between SSA and 50 state DDS agencies to increase CDR workloads and DDS efforts to hire, train, and supervise additional staff. In the Contract With America Advancement Act, the Congress emphasized maximizing the combined savings from CDRs under the DI and SSI programs. SSA has been working to improve its ability to identify beneficiaries for whom conducting CDRs would be most cost-effective. Other alternatives exist, however, that would likely make CDRs more cost-effective and improve program integrity. The current system of periodic CDRs for all beneficiaries, including those with virtually no potential for medical improvement, is a costly approach for identifying the approximately 5 percent of beneficiaries who medically improve to the point of being found ineligible for benefits. Furthermore, the frequency of CDRs is currently based on medical improvement classifications that do not clearly differentiate between those most and least likely to have their benefits terminated as a result of a CDR. Our analysis found that the estimated likelihood of benefit termination, as determined by SSA’s formulas, was very similar for beneficiaries classified as MIEs and MIPs. Although millions of dollars are spent annually to conduct periodic CDRs, some beneficiaries, especially those in the DI program, have received benefits for years without having any contact with SSA regarding their disability or their ability to return to work despite continuing disability. An alternate approach could build on SSA’s efforts to identify those beneficiaries whose CDRs are likely to be cost-effective and also increase contact with beneficiaries who remain in the program. 
Such an approach involves requiring (1) CDRs of beneficiaries with the greatest potential for medical improvement, (2) CDRs of a random sample from all other beneficiaries, and (3) regular contact with the remainder of the beneficiaries to increase program integrity. Less rigid requirements regarding the frequency of CDRs are necessary if reviews are to be conducted primarily on those beneficiaries whose cases are cost-effective to review—that is, those beneficiaries with the greatest potential for medical improvement—and for SSA to still be in compliance with laws governing CDRs. According to SSA, one of the best indicators of whether beneficiaries will remain on disability rolls is whether they have previously undergone a CDR. If an initial CDR finds that the beneficiary continues to be medically eligible for disability benefits, subsequent CDRs may not be cost-effective or appropriate. Because few CDRs actually result in benefit terminations, periodic reviews, even at the maximum 3- and 7-year intervals currently used, may not be appropriate for certain beneficiaries if further reviews are not warranted after the initial CDR and at least several years on the disability rolls. Conducting CDRs on a random sample of beneficiaries from among those whose cases are believed by SSA to be less cost-effective to review is consistent with a more cost-effective and flexible approach to scheduling CDRs. It also addresses a weakness in SSA’s current process by ensuring overall program integrity. SSA’s current process excludes some categories of beneficiaries from portions of the selection process. As a result, about one-half of all beneficiaries due for a CDR will go without oversight unless SSA changes its selection process. If periodic CDRs are not conducted for all beneficiaries, it is increasingly important to develop a strategy to ensure overall program integrity. 
Contact with beneficiaries, beyond the contact that occurs in the CDR process, can improve program integrity by reminding beneficiaries that their medical conditions are being monitored and serving as a deterrent to abuse by those no longer medically eligible for benefits. It could also support SSA’s process improvement efforts, particularly within the next year. We believe that a new type of brief mailed contact would, at a minimum in the year it is implemented, allow SSA to contact a majority of beneficiaries with overdue CDRs to remind them of their responsibility to report medical improvements and to inquire about their interest in returning to work. By collecting CDR-related information as part of this new contact, SSA could also speed the development of its planned improvements to the CDR process. For example, SSA could gather information on physicians and other treating sources seen by beneficiaries since their last CDR. Such information is needed to implement SSA’s new medical treating source CDR. SSA has not evaluated this three-pronged proposal for improving the CDR process, but in our discussions with agency officials, some provided comments on one aspect of it. In discussing more frequent contact with beneficiaries beyond that which occurs during a CDR, several officials raised the issue of the cost of such an initiative. Although some administrative funds would be used for this contact, it should result in significant savings because a considerable number of beneficiaries, on the basis of SSA’s experience, can be expected to refuse repeatedly to provide requested information and, as a result, will have their benefits terminated after a prescribed due-process procedure is followed. According to SSA, those who fail to cooperate generally do so because they believe that they are no longer eligible for benefits. 
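The savings mechanism just described reduces to simple arithmetic: a small share of contacted beneficiaries fail to cooperate and, after due process, leave the rolls. The sketch below shows the calculation; every numeric input in the example (contacts, noncooperation rate, per-beneficiary savings, mailer cost) is an illustrative placeholder, not an SSA figure.

```python
# Sketch of the savings arithmetic for a mailed-contact initiative.
# All numeric inputs in the example run are illustrative placeholders,
# not SSA estimates.

def one_time_net_savings(contacted: int,
                         noncooperation_rate: float,
                         net_savings_per_termination: float,
                         mailer_cost: float) -> float:
    """Net savings = expected terminations x per-beneficiary net savings,
    minus the cost of mailing to everyone contacted."""
    expected_terminations = contacted * noncooperation_rate
    return (expected_terminations * net_savings_per_termination
            - contacted * mailer_cost)

# Illustrative run: 1 million contacts, a 0.7-percent noncooperation
# rate, $100,000 net savings per termination, and $25 per mailer,
# which yields about $675 million in net savings.
example = one_time_net_savings(1_000_000, 0.007, 100_000.0, 25.0)
```

The structure mirrors the worksheet in appendix I: gross program savings per termination, less offsetting costs, applied to the expected number of terminations, less the mailing cost for the whole contacted population.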
On the basis of SSA’s experience with CDRs and financial eligibility redeterminations, we assumed that 0.71 percent of the DI beneficiaries and 1 percent of the SSI beneficiaries who were contacted would have their benefits terminated for noncooperation after all due-process procedures were followed. These termination rates represent an estimated one-time net federal savings of over $1.4 billion from contacting beneficiaries in the CDR backlog, with DI savings accounting for about $1.2 billion and SSI savings accounting for about $230 million. If extended to all beneficiaries not receiving CDRs or financial eligibility redeterminations, the costs and subsequent savings from such a contact would likely be higher. See appendix I for a further discussion of our estimated savings. Time-limiting disability benefits has been proposed as a way to reduce beneficiaries’ dependence on cash benefits by removing them from the rolls after set periods of time. Time limits are intended to encourage beneficiaries to obtain treatment and pursue rehabilitation to overcome their disabling conditions and obtain productive employment. Proposals for time-limited benefits generally establish criteria for deciding which categories of beneficiaries would be subject to time limits and no longer subject to required CDRs. Some believe that such broad application of time limits could significantly reduce the number of people who would continue on the rolls indefinitely and eliminate the CDR backlog. However, others believe that it could create a large backlog of disability claims when those who are terminated because of the time limit reapply for benefits. Time limits are also thought to increase the number of people on the rolls because SSA and DDS staff may, in certain cases, be more likely to award benefits because of the limited payment period. 
Instead of subjecting all beneficiaries with nonpermanent impairments to time limits, some believe that time limits should be applied to certain subsets or categories of beneficiaries—those with impairments that are likely to improve with treatment or surgery. Such impairments include affective disorders, tuberculosis, certain fractures, and orthopedic impairments for which surgery can restore or improve function. However, our analysis of the characteristics of those in the CDR backlog suggests that implementing time-limited benefits on the basis of either medical improvement classifications or specific impairments is not currently feasible. As explained earlier, on the basis of our analysis of available CDR population characteristics, there is little correlation between the MIE and MIP classifications and the estimated likelihood of benefit termination. Moreover, our analysis did not associate any specific impairment or other characteristic with a greater likelihood of benefit termination. Furthermore, SSA and the NASI disability policy panel concluded that the MIE, MIP, and MINE classifications do not accurately reflect the likelihood of medical improvement and subsequent benefit termination. The CDR process has the potential to be used to further SSA’s return-to-work initiatives, strengthening that effort and offering greater opportunity for beneficiaries to become self-sufficient despite their continuing disabilities. While the Social Security Act states that as many individuals as possible applying for benefits under the DI program should be rehabilitated into productive activity, only about 8 percent of DI and SSI beneficiaries are referred for vocational rehabilitation (VR) services. SSA generally does little during the CDR process to determine beneficiaries’ VR needs and provide assistance to help beneficiaries become self-sufficient. 
Although in conducting full medical CDRs SSA obtains information from the beneficiary on VR services received since the initial application or last CDR, SSA and DDS staff are neither required nor instructed to assess beneficiaries’ work potential, make beneficiaries aware of rehabilitation opportunities, or encourage them to seek VR services. When conducting mailer CDRs, SSA provides beneficiaries the opportunity to indicate an interest in VR services. In our April 1996 report, we noted that medical advances and new technologies are creating more opportunities than ever for disabled people to work, and some beneficiaries who do not medically improve may nonetheless be able to engage in substantial gainful activity. Yet, weaknesses in the design and implementation of DI and SSI program components have limited SSA’s capacity to identify and assist in expanding beneficiaries’ productive capacities. Beneficiaries receive little encouragement to use rehabilitation services. We recommended in that report that the Commissioner of Social Security take immediate action to place greater priority on return to work, including designing a more effective means to identify and expand beneficiaries’ work capacities and better implementing existing return-to-work mechanisms. Our analysis of the characteristics of beneficiaries awaiting DI and SSI CDRs supports SSA’s conclusion that there is little likelihood a large proportion of beneficiaries will show sufficient medical improvement to no longer be disabled. Therefore, if SSA is to decrease long-term reliance on these programs as the primary source of income for the severely impaired, it will need to shift its emphasis. It must rely less on assessing medical improvement and more on return-to-work programs to better gauge the potential for self-sufficiency despite the lack of medical improvement. 
SSA’s plan to conduct repeated CDRs at regularly scheduled intervals may not be warranted for some beneficiaries, given the large number of beneficiaries with little likelihood of benefit termination and the emphasis on cost-effectiveness in the Contract With America Advancement Act. A more cost-effective approach might incorporate (1) a focus on conducting CDRs for beneficiaries with the greatest likelihood of benefit termination due to medical improvement, (2) conducting CDRs on a random sample of all other beneficiaries to correct a weakness in SSA’s process, and (3) contact with beneficiaries not selected for a CDR or a financial eligibility redetermination to strengthen program integrity. However, for this cost-effective approach to work, SSA needs to be able to accurately estimate the likelihood of benefit termination for all beneficiaries. Currently, our analysis shows that about one-half of all beneficiaries due or overdue for a CDR have been excluded from SSA’s process that utilizes formulas to estimate the likelihood of benefit termination. Furthermore, for many beneficiaries, the formulas result in less accurate estimates. If SSA is to be current with CDRs by 2002, it will need to meet many challenges, including expanding the use of its mailer CDR. Because such an expansion is dependent upon SSA’s ability to implement at least two of its planned process improvements, this raises further questions about SSA’s ability to implement its plan. 
We recommend that, to the extent SSA is authorized to act, the Commissioner of SSA replace the routine scheduling for CDRs of all who receive DI and SSI program benefits with a more cost-effective process that would (1) select for review beneficiaries with the greatest potential for medical improvement and subsequent benefit termination, (2) correct a weakness in SSA’s CDR process by conducting CDRs on a random sample from all other beneficiaries, and (3) help ensure program integrity by instituting contact with beneficiaries not selected for CDRs. As part of this effort, the Commissioner should develop a legislative package to obtain the authority the agency needs to enact the new process for those portions of the DI and SSI populations that are subject to required CDRs. To enable as many disabled individuals as possible to become self-sufficient, SSA should test the use of CDR contacts with beneficiaries to determine individuals’ rehabilitation service needs and help them obtain the services and employment assistance they need to enter or reenter the workforce. In commenting on a draft of this report, SSA agreed to test the use of CDR contacts with beneficiaries to determine individuals’ rehabilitation service needs and help them obtain the services and employment assistance they need to enter or reenter the workforce. SSA also agreed to begin to consider changing the current statutory requirements for CDRs as part of its effort to continually seek ways to maintain stewardship of the disability program in the most cost-effective manner. However, it disagreed with our recommendation on specific changes it should make to the CDR process. In particular, it disagreed with conducting CDRs on random samples of beneficiaries who are less cost-effective to review and with making more frequent contact with all beneficiaries. We continue to believe that ensuring program integrity requires that all beneficiaries have an opportunity to be selected for a CDR. 
In addition, we believe that efforts to monitor disability status will serve as a deterrent to abuse by those no longer medically eligible for benefits, and that maintaining periodic contacts with all beneficiaries is a sound management practice. SSA also made technical comments on our report, which we incorporated as appropriate. The full text of SSA’s comments and our responses are contained in appendix V. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of the report until 7 days after the date of this letter. At that time, we will send copies to the Commissioner of Social Security. We will make copies available to others on request. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix VI. This appendix provides additional details concerning our methodology. Information is included about databases used in estimating for the DI and SSI programs the number of beneficiaries due or overdue for a CDR in fiscal year 1996 and analyzing their characteristics. We also include information on our calculations of the potential one-time savings from our proposed mailed contact to collect CDR-related information from beneficiaries. We analyzed the electronic databases as provided to us by SSA officials but did not evaluate the validity of the databases or the SSA formulas used to estimate the likelihood of benefit termination. We did our review from September 1995 to August 1996 in accordance with generally accepted government auditing standards. To determine the number of DI worker beneficiaries currently due or overdue for a CDR, we used SSA’s Office of Disability’s (OD) CDR database and the Master Beneficiary Record (MBR). OD’s database contains information on all beneficiaries SSA has determined were due or overdue for a CDR in fiscal year 1996. 
We eliminated records for DI beneficiaries who were included in OD’s database but whose MBR could not be found or who did not meet the definition of being due or overdue for a CDR in fiscal year 1996. The eliminated records primarily involved cases that were not due for a CDR until the next century and were incorrectly included in the backlog population. Table I.1 contains the initial population sizes and the final sizes after adjustments. OD provided the number of disabled widows and widowers and disabled adult children in the backlog but did not supply other information about them. To determine the number of SSI beneficiaries currently due or overdue for a CDR, we used OD’s database that contains information on all SSI beneficiaries SSA has determined were due or overdue for a CDR in fiscal year 1996. We drew a random sample of 15 percent of these beneficiaries, stratified by (1) whether the beneficiary was an adult or a child and (2) whether the state disability determination services (DDS) had classified the likelihood of medical improvement as expected (MIE), possible (MIP), or not expected (MINE). We eliminated from our sample beneficiaries whose CDR due dates were after fiscal year 1996 or who were over age 65. On the basis of our sample data, we estimated the size of the population with these exclusions. Table I.2 contains initial population and sample sizes and final sizes after adjustments. For the population of DI workers, we obtained information on characteristics from the MBR and OD’s CDR database. From the MBR, we obtained information on age, gender, race, impairment, time receiving benefits, and time overdue for a CDR. Because information obtained from OD did not differentiate between MIE and MIP beneficiaries, we used MBR data to classify beneficiaries in the two categories. 
From OD’s CDR database, we obtained information on (1) records for all those classified as MINE and (2) estimates of the likelihood of benefit termination for MIE and MIP beneficiaries, the only categories for which likelihood data were available. We did not analyze the characteristics of DI beneficiaries who are disabled widows and widowers and disabled adult children because we did not have sufficient information to identify them in the MBR. For the sample of SSI beneficiaries, we obtained information on characteristics from SSA’s Supplemental Security Income Record Description (SSIRD) and OD’s CDR database. From the SSIRD, we obtained information on age, gender, race, impairment, time receiving benefits, and time overdue for a CDR. We also used SSIRD data to classify adults into MIE and MIP categories. From OD’s CDR database, we obtained information on (1) medical improvement classifications for all children and MINE adults; (2) records for all adults classified as MINE; and (3) estimates of the likelihood of benefit termination for adult MIE and MIP beneficiaries, the only categories for whom likelihood data were available. Because we used a sample to estimate characteristics of the universe of SSI beneficiaries due or overdue for CDRs in fiscal year 1996, the reported estimates in tables IV.7 through IV.12 have sampling errors associated with them. Sampling error is variation that occurs by chance because a sample was used rather than the entire population. The size of the sampling error reflects the precision of the estimate—the smaller the sampling error, the more precise the estimate. The tables in appendix IV contain sampling errors for reported estimates calculated at the 95-percent confidence level. This means that the chances are about 95 out of 100 that the range defined by the estimate, plus or minus the sample error, contains the true percentage. With few exceptions, the sampling errors were less than 1 percentage point. 
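The sampling error for an estimated percentage can be approximated with the standard formula for a proportion, with a finite-population correction for a sample drawn without replacement. The sketch below shows the simple-random-sample case with illustrative sizes; SSA's stratified design would modify the variance formula somewhat.

```python
import math

# Half-width of a 95-percent confidence interval for an estimated
# proportion, with a finite-population correction. The sample and
# population sizes in the example are illustrative, not the actual
# counts from the review.

def sampling_error_95(p: float, n: int, population: int) -> float:
    """Return the 95% sampling error (in proportion units) for an
    estimated proportion p from a simple random sample of size n
    drawn from a finite population."""
    fpc = math.sqrt((population - n) / (population - 1))
    standard_error = math.sqrt(p * (1 - p) / n)
    return 1.96 * standard_error * fpc

# Example: a 15-percent sample (n = 75,000 of 500,000) and an
# estimated percentage of 50 percent, the worst case for precision.
error = sampling_error_95(0.5, 75_000, 500_000)
```

With samples of this size, even the worst-case error is well under 1 percentage point, which is consistent with the precision reported for the estimates in appendix IV.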
This means that for most percentages, there is a 95-percent chance that the actual percentage falls within plus or minus 1 percentage point of the estimated percentage. Our estimate of a one-time savings associated with our recommendation to begin a process for more frequent contact with beneficiaries who are not selected for either a CDR or a financial eligibility redetermination during the year is based on the following SSA costs and savings estimates and assumptions. The number of DI beneficiaries who would be contacted by this initiative was estimated by subtracting the number of DI CDRs planned for fiscal year 1996 from the DI population due or overdue for CDRs as of fiscal year 1996. For the SSI program, the number of beneficiaries who would be contacted by this initiative was estimated by subtracting the estimated number of SSI beneficiaries who would receive either a financial eligibility redetermination or a CDR from the SSI population currently due or overdue for CDRs as of fiscal year 1996. We assumed that the percentage of beneficiaries who would fail to cooperate with this initiative would be the same as the most recent SSA estimates for DI CDRs and SSI financial eligibility redeterminations. We used savings estimates resulting from DI benefit terminations as provided by the Office of the Actuary. To estimate federal savings from SSI benefit terminations, we used estimates provided by SSA’s Office of the Actuary and the Department of Health and Human Services’ Health Care Financing Administration for adult beneficiaries, and offsetting cost estimates to account for the resultant increase in food stamps. Because these SSI beneficiaries would be contacted for financial eligibility redeterminations within the next 5 years, the SSI estimates we used reflect only 5 years of savings and offsetting food stamps. 
Because many DI beneficiaries who have been receiving benefits for years may never have been contacted for CDRs, the DI estimates we used reflect a lifetime of savings. As a proxy for the cost of the mailer, we used an SSA estimate of the cost of the current nonscannable mailer. Because this figure overestimates the cost of a scannable mail contact, it provides a conservative estimate, including some administrative and developmental costs.

Calculation of number of beneficiaries expected to be dropped from the programs:
  Beneficiaries due or overdue for CDRs in fiscal year 1996
  Less: planned financial eligibility redeterminations for those who are not receiving a CDR
  Beneficiaries not contacted during the year
  Multiplied by: percentage of beneficiaries who fail to cooperate
  Total beneficiaries expected to be dropped from the program

Per-beneficiary savings and offsetting costs:
  Gross savings to DI trust fund/SSI program
  Gross savings to Medicare/federal portion of Medicaid
  Less: offsetting costs of additional food stamps
  Net savings per beneficiary dropped from the program

Total estimated savings to the federal government:
  Net program savings (number of beneficiaries dropped multiplied by net savings per beneficiary)
  Less: cost of sending scannable mailer (number of beneficiaries contacted at $25)
  Total estimated net savings from proposed initiative (combined total = $1,477,236,040)

This appendix provides details on SSA’s procedures for conducting CDRs. More specifically, we (1) outline the process for conducting full medical CDRs and (2) discuss SSA’s use of mailer CDRs. Generally, a full medical CDR is used to determine with certainty whether a beneficiary has medically improved to the point that the person is no longer disabled and should be removed from the disability rolls. 
The full medical CDR process is labor-intensive and generally involves (1) one of 1,300 SSA field offices, which determines whether the beneficiary is engaged in any substantial gainful activity (SGA), and (2) one of 54 state DDS agencies, which determines whether the beneficiary continues to be disabled, a step that frequently involves examination of the beneficiary by at least one medical doctor. A full medical CDR generally follows an eight-step evaluation process (see fig. II.1).

Figure II.1: Eight-Step Evaluation Process for a Full Medical CDR
- Step 1: Is the beneficiary engaged in substantial gainful activity?
- Step 2: Does the impairment meet or equal the severity defined in the medical listings?
- Step 3: Has medical improvement (MI) occurred?
- Step 4: Is MI related to the ability to work?
- Step 5: Does an exception to MI apply?
- Step 6: Is the impairment severe?
- Step 7: Is the beneficiary able to perform work done in the past?
- Step 8: Based on SSA guidelines, is the beneficiary able to perform other work?
Depending on the outcomes at these steps, either the beneficiary remains disabled and benefits continue, or the beneficiary is no longer disabled and benefits terminate. If an exception to MI applies in which the initial determination was fraudulently obtained or the beneficiary does not cooperate with SSA, benefits are terminated.

At step one, the SSA field office determines whether the beneficiary is engaged in SGA. Field office staff contact the beneficiary, often through a face-to-face meeting, and obtain information on the person’s condition, medical treating sources, and the effect of the impairment on the beneficiary’s ability to perform SGA. This information describes any changes that have occurred since the initial application or most recent CDR and includes types of treatment received, medicines received, specialized tests or examinations, vocational rehabilitation services received, and any schools or training classes attended since the last medical determination.
The SSA field office also obtains information on any work activities since the person became disabled, whether the condition continues to interfere with the ability to work, and whether the beneficiary has been released for work by the treating physician. Benefits are terminated for beneficiaries engaged in SGA, regardless of medical condition. A beneficiary found to be not working or working but earning less than SGA has his or her case forwarded to the state DDS office. At step two, the DDS compares the beneficiary’s condition with the Listing of Impairments developed by SSA. The listings contain over 150 categories of medical conditions that, according to SSA, are severe enough ordinarily to prevent a person from engaging in SGA. The DDS obtains medical evidence from the sources who treated the beneficiary during the 12 months prior to the CDR. If the medical evidence provided is insufficient for a disability decision, the DDS will arrange for a consultative examination by an independent doctor. A beneficiary whose impairment is cited in the listings or whose impairment is at least as severe as those impairments in the listings, and who is not engaged in SGA, is found to be still disabled. At step three, a beneficiary whose impairment is not cited in the listings or whose impairment is less severe than those cited in the listings is evaluated further to determine whether there has been medical improvement (MI). MI is defined as any decrease in medical severity of the impairment(s) present at the time of the most recent medical determination. In deciding whether MI has occurred, the DDS considers changes in symptoms, signs, and/or laboratory findings and determines whether these changes reflect decreased medical severity of the impairment(s). If MI has not occurred, the DDS skips step four and proceeds to step five to consider whether any exceptions to MI apply. 
At step four, for beneficiaries for whom MI has occurred, the DDS determines whether MI is related to the ability to work. MI relates to the ability to work when there is an increase in a person’s residual functional capacity (RFC) to do basic work activities compared with the person’s RFC at the last medical determination. When MI does not relate to the ability to work, the DDS proceeds to step five. If MI relates to the ability to work, the DDS goes to step six. At step five, the DDS determines whether exceptions to MI apply. Exceptions provide a way for SSA to find a beneficiary no longer disabled in certain limited situations even though there is no decrease in the severity of the impairment. There are two exceptions to MI. The first exception applies to certain situations in which the person can engage in SGA—for example, when substantial evidence shows that advances in medical or vocational therapy or technology have favorably affected the severity of a beneficiary’s impairment or RFC to do basic work activities. The second exception can apply without regard to the person’s ability to engage in SGA—for example, in situations in which the prior determination was fraudulently obtained or in which the beneficiary fails to cooperate with SSA in providing information or in having an examination. At any point in the eight-step evaluation process, if the second exception applies, benefits are terminated. If no exceptions apply, disability benefits are continued. At step six, when either the first exception applies or MI is determined to be related to the ability to work, the DDS determines whether the beneficiary’s current impairment is severe. According to SSA standards, a severe impairment is one that significantly limits a person’s ability to do basic work activities, such as standing, walking, speaking, understanding and carrying out simple instructions, using judgment, responding appropriately to supervision, and dealing with change.
If the DDS determines that the impairment is not severe, benefits are terminated. At step seven, for beneficiaries with severe impairments, the DDS determines whether the beneficiary can still perform work he or she has done in the past. This determination is based on an assessment of the beneficiary’s current RFC. If the person is found to be able to do past work, benefits are terminated. At step eight, for beneficiaries found unable to perform work done in the past, the DDS determines whether the beneficiary can do other work that exists in the national economy. Using SSA guidelines, the DDS considers the person’s age, education, vocational skills, and RFC to determine what other work, if any, the beneficiary can perform. Unless the DDS concludes that the person can perform work that exists in the national economy, benefits are continued. Mailer CDRs enable SSA to conduct more CDRs without performing labor-intensive full medical reviews. The mailer CDR is a questionnaire through which a beneficiary provides information about health, medical care, work history, and training (see fig. II.2 for the questionnaire currently used). Currently, SSA sends mailer CDRs to a portion of beneficiaries with the lowest estimated likelihood of benefit termination. In conjunction with data on the beneficiaries’ impairment, age, and other characteristics, SSA uses responses to mailer CDRs to help identify those beneficiaries most likely to have medically improved who thus should receive full medical reviews. For example, if the beneficiary indicates that his or her health is better, SSA will generally conduct a full medical CDR. In mental impairment cases, SSA may decide that a full medical CDR is unwarranted even if the beneficiary reports MI. 
If, however, the beneficiary indicates that his or her health is the same or worse, SSA then reviews the beneficiary’s response to the next question on whether, within the last 2 years, a doctor has indicated that the person can return to work. On the basis of the beneficiary’s responses to the CDR mailer and the beneficiary’s characteristics, SSA assesses the potential effects of any hospitalizations or surgeries on the beneficiary’s health status and the importance of ongoing medical treatment or its absence to the beneficiary’s health condition. If necessary, SSA will contact the beneficiary for additional information or clarification. If SSA’s analysis indicates possible MI, the beneficiary is referred for a full medical CDR. Otherwise, the beneficiary is rescheduled for a future CDR.

[Appendix tables omitted: for each beneficiary group, the tables presented average age, impairment category (such as endocrine, nutritional, and metabolic diseases; disorders of blood and blood-forming organs; mental disorders, excluding mental retardation; and skin and subcutaneous tissue disorders), estimated likelihood of benefit termination, and number of years receiving benefits. SSA does not estimate the likelihood of benefit termination for MIE and MIP workers aged 60 and over or for MINE workers; therefore, the totals with an estimated likelihood of benefit termination are less than the column totals.]
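The mailer screening described above amounts to a small triage rule. The sketch below illustrates that logic; the field names and return values are hypothetical, and SSA's actual screening also weighs profile data and analyst judgment.

```python
def triage_mailer(health_trend, doctor_released_within_2_years,
                  possible_mi_from_profile, mental_impairment=False):
    """Decide how to route a mailer CDR response (illustrative sketch).

    health_trend is the beneficiary's self-report: "better", "same",
    or "worse". Field names are hypothetical, not SSA's actual rules.
    """
    if health_trend == "better":
        # Reported improvement generally triggers a full medical CDR,
        # but SSA may decide otherwise in mental impairment cases.
        if mental_impairment:
            return "analyst_review"
        return "full_medical_cdr"
    # Health same or worse: check whether a doctor has said the person
    # can return to work, or whether the profile suggests possible MI.
    if doctor_released_within_2_years or possible_mi_from_profile:
        return "full_medical_cdr"
    return "reschedule_future_cdr"
```

For instance, a beneficiary reporting unchanged health, with no doctor release and no indication of possible MI, would simply be rescheduled for a future CDR.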
[Additional appendix tables presenting the same beneficiary characteristics are omitted. SSA does not estimate the likelihood of benefit termination for MIEs and MIPs aged 60 and over, for children, or for MINEs; therefore, the totals with an estimated likelihood of benefit termination are less than the column totals.]
The following are GAO’s comments on the Social Security Administration’s letter dated September 23, 1996.

1. When SSA considers legislative changes that would make the CDR process more cost-effective, we believe that it must reassess the requirements of the existing schedule for conducting CDRs. According to SSA officials, if an initial CDR finds that a beneficiary is still disabled, subsequent CDRs are likely to reach the same conclusion. We question whether additional CDRs for that beneficiary are appropriate or cost-effective. Similarly, predictive formulas for DI worker beneficiaries allow SSA to identify those workers most likely to medically improve; applying such formulas to groups not now included in the selection process may identify additional groups that are cost-effective to review.

2. While we recognize that the use of the formulas established the cases that fall into the “middle group,” SSA officials told us that SSA does not know which type of CDR—full medical or mailer—is more appropriate for those beneficiaries. SSA has at least two efforts under way to improve its ability to determine which type of CDR would be the more cost-effective.

3. We agree that SSA is currently testing the feasibility of expanding the use of formulas to the MINEs, and the report states that such an effort is under way.

4.
While cost-effectiveness is an important aspect of the CDR process, we also believe that to ensure program integrity, all beneficiaries should have some likelihood of selection for a CDR. Such a program weakness is particularly troubling given that SSA has been unable to conduct all required CDRs for almost a decade and it estimates that the backlog will not be eliminated for another 7 years.

5. Our recommendation provides a comprehensive approach to program management that focuses on cost-effectiveness, program integrity, and increased contact with beneficiaries. Increased beneficiary contact is valuable to remind beneficiaries that their disability status is being monitored and that they are responsible for reporting medical improvement. We believe that such contact also offers an additional opportunity for SSA to further its program improvement efforts. For example, it could be used to identify medical treating sources that should receive the medical treating source mailer currently under development.

6. We believe that ongoing periodic contact with beneficiaries is essential to a well-managed program and should be done even if such an activity is considered a program operating cost. However, in estimating the costs of increased contact with beneficiaries, we considered a number of factors, including administrative and other costs. Because SSA could not provide us with estimates for these costs, we used the cost of the CDR mailer process to approximate them. The cost of the mailer reflects a more expensive manual process; thus, we believe that it overstates the true cost of a scannable mail contact. In addition, because of the significant cost savings likely to result from the termination of benefits for individuals who do not respond—a net federal savings of over $1.4 billion—we believe that there is sufficient latitude to cover the cost of such an initiative.

7.
Given the challenges that SSA faces, we continue to believe that its ability to eliminate the backlog of all required CDRs is uncertain. It may be possible for SSA to conduct the number of CDRs in its plan. However, the plan excludes about 848,000 required CDRs that are currently due or overdue. In addition, it does not include new CDRs and disability eligibility redeterminations required by the 1996 amendments to the Social Security Act, which take precedence over other required CDRs. Additional challenges are cited in our report. 8. We are pleased that SSA agrees with our recommendation to integrate return-to-work initiatives and the CDR process and that SSA has efforts under way to elicit the assistance of federal and private sector partners in the development of a return-to-work strategy. In our report, we acknowledge that field office employees play a limited role in providing information on VR opportunities to beneficiaries when they apply, but we also note that these staff take VR-related actions during a full medical CDR, and that state VR agencies have a role in limiting candidates for rehabilitation. In addition to those named above, the following persons made important contributions to this report: Susan E. Arnold, Senior Evaluator; Christopher C. Crissman, Assistant Director; Julian M. Fogle, Senior Evaluator; Elizabeth A. Olivarez, Evaluator; Susan K. Riggio, Evaluator; Vanessa R. Taylor, Senior Evaluator (Computer Science); and Ann T. Walker, Evaluator (Database Manager). Supplemental Security Income: Some Recipients Transfer Valuable Resources to Qualify for Benefits (GAO/HEHS-96-79, Apr. 30, 1996). SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996). PASS Program: SSA Work Incentives for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996). SSA’s Rehabilitation Programs (GAO/HEHS-95-253R, Sept. 7, 1995). 
Supplemental Security Income: Disability Program Vulnerable to Applicant Fraud When Middlemen Are Used (GAO/HEHS-95-116, Aug. 31, 1995). Social Security Disability: Management Action and Program Redesign Needed to Address Long-Standing Problems (GAO/T-HEHS-95-233, Aug. 3, 1995). Supplemental Security Income: Growth and Changes in Recipient Population Call for Reexamining Program (GAO/HEHS-95-137, July 7, 1995). Disability Insurance: Broader Management Focus Needed to Better Control Caseload (GAO/T-HEHS-95-164, May 23, 1995). Supplemental Security Income: Recipient Population Has Changed as Caseloads Have Burgeoned (GAO/T-HEHS-95-120, Mar. 27, 1995). Social Security: Federal Disability Programs Face Major Issues (GAO/T-HEHS-95-97, Mar. 2, 1995). Supplemental Security Income: Recent Growth in the Rolls Raises Fundamental Program Concerns (GAO/T-HEHS-95-67, Jan. 27, 1995). Social Security: Rapid Rise in Children on SSI Disability Rolls Follows New Regulations (GAO/HEHS-94-225, Sept. 9, 1994). Social Security: New Continuing Disability Review Process Could Be Enhanced (GAO/HEHS-94-118, June 27, 1994). Disability Benefits for Addicts (GAO/HEHS-94-178R, June 8, 1994). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Disability Insurance (DI) and Supplemental Security Income (SSI) programs, focusing on: (1) the backlog of DI and SSI cases due for continuing disability reviews (CDR); (2) the number and characteristics of individuals who are due for CDR; (3) whether adequate resources are available for conducting CDR; and (4) ways to improve the CDR process. GAO found that: (1) almost half of all DI and SSI beneficiaries are due or overdue for CDR in fiscal year 1996; (2) the typical beneficiary awaiting CDR is under age 59, has been receiving benefits for an average of 8 years, is unlikely to medically improve, and has been overdue for CDR for 3 years; (3) SSA uses either a full medical examination or a mail-in form to conduct CDR, depending on the likelihood that it will find reason to terminate a recipient's benefits; (4) it is too soon to tell if authorized funding will be adequate to conduct all required CDR through 2002; (5) the SSA plan to conduct over 8 million CDR over 7 years is ambitious; and (6) SSA could improve the CDR process by reviewing beneficiaries most likely to medically improve, conducting CDR on a random sample from all other beneficiaries, and using CDR contact to determine beneficiaries' rehabilitation needs.
According to the Centers for Disease Control and Prevention, about 1 in 68 children were identified as having ASD in 2012 (about 1.5 percent of 8-year-olds). ASD is a complex developmental disorder with characteristics that can range from mild to more pronounced (see fig. 1). Each autism characteristic may vary in type and degree from person to person and can fluctuate over time. The combination of characteristics results in a highly individualized condition, as illustrated in figure 2. Certain medical or mental health conditions—called comorbid conditions—often occur with autism. For example, data from the 2011 Survey of Pathways to Diagnosis and Services showed that over half of autistic youth aged 15-17 had also been diagnosed with an attention deficit disorder (53 percent) or anxiety (51 percent), nearly one quarter had depression, and 60 percent had at least two comorbid conditions. Other common comorbid conditions include sleep disorders, intellectual disability, seizure disorders, and gastrointestinal ailments. To support the educational needs of children with disabilities, Congress originally passed IDEA in 1975. IDEA requires states and local educational agencies to identify and evaluate children with disabilities and provide special education and related services to those who are eligible. Such services and supports are formulated in an Individualized Education Program (IEP) and may include speech or occupational therapy and behavioral supports, among others. The 2004 reauthorization of IDEA required that, beginning no later than age 16, a student’s IEP include measurable postsecondary goals, based on age-appropriate transition assessments, related to training, education, employment, and, where appropriate, independent living skills. The IEP must specify the transition services needed to assist the student in reaching those goals.
Not all youth with autism received timely transition planning services, according to an analysis of data from the Department of Education’s (Education’s) National Longitudinal Transition Study-2 (NLTS2). That analysis showed that in 2009, 58 percent of young adults with ASD reported that they had been given a transition plan by the federally required age. This percentage was lower for youth from lower income households, African-American youth, and youth with the highest conversation skills. Upon exiting high school, youth with autism may obtain services by applying as adults and establishing eligibility for a number of programs. Specifically, we reported in 2012 that four federal agencies—Education, the Department of Health and Human Services (HHS), the Department of Labor, and the Social Security Administration—administer the key federal programs that provide services to youth with disabilities as they transition from high school. In addition, these and other federal agencies fund a number of other programs through grants to states, localities, and nongovernmental organizations, which often have flexibility on how to administer services. For example, Education’s Rehabilitation Services Administration funds state vocational rehabilitation agencies through formula grants, which have a state matching requirement, to help people with disabilities prepare for and engage in gainful employment, and some states use Medicaid funds to provide home and community-based services for individuals with certain types of disabilities who might otherwise be cared for in an institutional setting. In our 2012 report, we found that youth and their families faced challenges in identifying, navigating, and establishing eligibility for services for adults with disabilities, including autism. 
In addition, we found that the adult service system did not routinely provide a coordinated plan of services or objectives for youth making the transition to adulthood and—unlike the special education system for younger children—did less to ensure that needed services would be provided. Youth with autism who are entering the adult service system may have to apply to multiple agencies for services and establish their eligibility for each agency’s services. The difficulty obtaining adult services has been called “falling off a cliff” by the autism community. The panel told us that youth with ASD may need services addressing individual autism characteristics—or a combination of autism characteristics and other health conditions—that affect their ability to attain their goals for adulthood. The services needed to address any specific characteristic may differ depending on the goal. The panel told us that some characteristics of autism can be strengths that may help individuals with ASD achieve their goals—for example, intense focus on a specific interest can be very productive in the workplace. The panel discussed the importance of valuing the characteristics that may facilitate goals and warned against assuming that autism characteristics need to be “fixed.” The panel discussion, however, focused primarily on services and supports needed to address autism characteristics that can hinder progress toward goal attainment. The panel told us that, like their peers without autism, many youth with ASD pursue postsecondary education or training, although not all of them finish their academic programs. According to NLTS2, in 2009, 36 percent of young adults with ASD had attended some type of postsecondary education institution. Of those youth, 32 percent had attended a 4-year college, 70 percent had attended a 2-year college, and 33 percent had attended a vocational, business, or technical school. 
According to the panel, in addition to more traditional classroom settings, some youth with ASD also attend online programs and an increasing number of specialized colleges designed to meet the specific needs of students with disabilities. According to the panel, youth with autism may need some of the same supports as other postsecondary students to help them succeed in the higher education environment, but may require increased intensity or providers with training in autism. For example, many students need help with organizational skills, such as managing time or prioritizing tasks, to keep up with the pace of academic demands. The panel cited mental health conditions, especially anxiety, as some of the biggest impediments to success in postsecondary education and said these conditions can have a greater effect than youths’ autism characteristics. Youth may also need supports to help them navigate social demands of college life, such as relationships with roommates, dating, and issues with alcohol and drug use. The panel noted that self-care is especially important because poor personal hygiene can lead to isolation. The panel also discussed the need to have stronger supports at the beginning of college that taper down as students experience success and learn to navigate the environment. Table 1 shows some of the services and supports that may be needed in a postsecondary education environment. Young adults with ASD have lower rates of employment than some other people with disabilities. For example, in 2009, 58 percent of youth with autism in their early 20s had ever worked for pay outside the home, compared to 91 percent of youth with emotional disturbances and 74 percent of youth with intellectual disabilities. The diverse nature of autism means no single workplace setting or set of supports is appropriate for all autistic individuals. 
Examples of various workplace settings include the following:
- Full-time or part-time employment with market wages and responsibilities, with or without long-term supports—such as a job coach to help with communication and social navigation;
- Self-employment, which may offer the flexibility to tailor the job to the individual’s strengths and the work environment to their needs; and
- A workplace that primarily or exclusively employs individuals with disabilities—with wages that can fall below the federal minimum wage. Some workplaces provide services, supervision, or training in life skills or vocational skills.

Customized employment may provide an opportunity for some individuals with autism who might otherwise not be able to find employment. Employers create a job specifically designed for an individual’s strengths, abilities, and support needs. The panel told us that the type and level of supports that youth may need to succeed in the workplace may vary over time. For example, youth may need training in job-seeking, interviewing, or organizational skills at the beginning of their employment, while they may need job coaching throughout their employment. The panel noted that while some programs may teach youth with autism how to interview and some of the logistics of work, youth also need to learn about navigating the social aspect of employment and may require ongoing support in this area. The panel noted that social difficulties in the workplace could lead to isolation, marginalization, or job termination. Additionally, according to the panel, some vocational supports could help youth find jobs that build on some of their autism strengths, such as the ability to focus intensely on a problem or activity or skills in following routines and performing tasks consistently. Some of the services and supports the panel discussed in conjunction with successful employment are highlighted in table 2. One of the goals of IDEA is to prepare students for independent living.
The level of independence that autistic youth may achieve varies widely—some are able to live in their own home (with or without supports), while others require 24-hour care. According to an analysis of NLTS2, young adults with autism are less likely to live independently than youth with other disabilities, including intellectual disabilities and emotional disturbances. Specifically, in 2009, 19 percent of autistic adults in their early 20s had lived independently at some point, either with or without supports. Fourteen percent had lived in a supervised setting, such as a group home or medical facility, which may have provided services such as life skills education or vocational supports. The panel discussed two key aspects of independence—performing daily living activities and making and carrying out decisions. Daily living activities may include tasks such as cleaning, shopping, paying bills, maintaining personal hygiene, and preparing meals. Again, while some youth with ASD do not need help with these activities, some may need to learn these skills step by step and then practice them in real life to gain proficiency. For example, some youth may need a list of each step of washing hair—shampoo in hand, lather into hair, rinse, etc. The panel said that some youth may be able to perform tasks in isolation, but may have trouble combining them with employment or social demands. Youths' abilities to make their own decisions may also vary, depending on their skills in communication and self-advocacy—identifying and expressing their needs to others. The panel said that it is critically important that all youth, regardless of their level of disability, be given opportunities to state their own preferences to the extent of their capabilities. Table 3 shows some of the services and supports that the panel said may help youth reach their maximum independence. Health and safety are essential to maintaining quality of life for all people, including youth with ASD.
According to the panel, health and safety are the primary goals for some youth with high support needs. The panel said that in order for youth with ASD to achieve health and safety, there need to be enough medical and mental health caregivers who are trained in the unique needs of patients with ASD and prepared to accommodate them. The panel noted that physical or mental comorbid conditions are sometimes more of a concern than autism itself. For example, unaddressed mental health issues like depression or anxiety can lead to lower social and life skills and increased self-injurious behavior, family stress, and suicidal thoughts and behaviors. The panel also discussed sleep issues—such as difficulty going to sleep or remaining asleep—and noted that youth with ASD can have sleep disruptions lasting several days. Extended sleep deprivation could have severe effects on a person's ability to function and manage their autism characteristics. Finally, the panel said that reproductive and sexual health care were often overlooked due to mistaken assumptions that autistic youth would not engage in sexual relationships. In addition, the panel discussed several reasons that youth with ASD may need supports to help them maintain their safety. For example, the panel told us that youth with ASD are more likely to be bullied or abused than other youth; some youth with ASD experience aggressive or self-injurious behavior, especially when they are frustrated or in pain; and some autistic youth wander away from their caregivers and are hit by cars or drown. The panel also discussed how some of the social and communication difficulties of ASD can make it hard for youth to tell when a situation or person is dangerous, and gave examples of youth who were coerced by gang members into committing crimes, unintentionally provoked police officers, or did not know when they were being sexually abused or bullied.
Some of the services and supports the panel discussed in conjunction with health and safety are highlighted in table 4. Community integration includes both individual social interactions and broader community participation. In 2009, according to NLTS2 data, about one-third of young adults with autism did not participate in any community activities, and one-quarter had not had any contact with friends for at least a year. The panel noted that community integration is closely tied to higher education and employment because these are where adults tend to establish their social circles. The panel said that having housing, jobs, services, and social opportunities that are located in the community can facilitate interaction with neighbors who do not have ASD and help increase societal acceptance of autism. The panel also described transportation as one of the most important factors in achieving this goal, as it enables physical access to the community, and said that many youth with ASD can use public transportation independently if they receive adequate training. Table 5 shows some of the services and supports that the panel said may be needed for community integration. The panel said that, like other transitioning youth with disabilities, autistic youth need a personalized mix of services that address their unique support needs. Given the individualized and complex nature of ASD, no single combination of supports and services will prepare all youth with ASD for success as adults, according to the panel. For example, for a successful transition to a higher education environment, one student may need organizational coaching in order to develop study skills, while another may need mental health support for anxiety and peer mentoring in order to attend class every day. 
According to the panel, having the right combination of supports to meet each person's individual needs may contribute to success in higher education, employment, community integration, and overall quality of life. The panel also told us that the creation of an individualized support system begins with a comprehensive assessment of a person's skills and abilities.

On Individualized Supports
"Education, employment, health care, access to communication and self-determination, and support for overall community integration are all critical components for our community. A comprehensive approach which provides individualized, consumer-directed supports in all of these areas can enable all autistic people to participate in their communities, self-advocate, and live meaningful and productive lives with a high quality of life, regardless of level of disability."

The panel said that individuals with ASD need flexible services and supports that can adapt to changes in their needs. Autism is a lifelong disability with characteristics that can shift over time, especially as personal circumstances and comorbid health conditions change. For example, the panel described people whose verbal skills fluctuated from nonverbal to highly fluent during their lifetimes. During their nonverbal periods, they would need communication aids that were not needed during their fluent periods. Additionally, the panel said that the ability to fall and remain asleep could fluctuate over time, and youth may need behavioral therapy or medication during sleepless periods. The panel also cautioned against removing supports that may be needed over the long term. For example, a job coach may be used at the beginning of an autistic person's employment to help them acclimate to their employer and vice versa. However, the individual may also benefit from continued support—either ongoing or occasional—from a job coach to help address new issues that arise over the course of their employment.
Flexible job coaching that was available when needed could help the individual retain the job. The panel said that youth with autism need timely access to services and supports, beginning with having enough service providers to meet the demand. For example, according to the panel, youth receiving care from service providers who primarily work with children, such as developmental pediatricians and speech and language therapists, may need access to adult providers after their transition, because their needs for the service do not necessarily end when they reach a certain age. The panel also said that waiting lists for services have gotten longer due to a “tsunami” of increased demand. Furthermore, due to the unpredictable nature of ASD, there may be times when autistic youth need immediate access to services. For example, if autistic youth become aggressive or self-injurious, they and their families may need such services as emergency rooms, hospitals with specialized units, residential behavioral treatment programs, or direct support providers. Our panel described situations where families lacked immediate access to these services, and hence tried to care for aggressive youth at home, which can risk the safety and well-being of both the person with ASD and others present in the home. The panel told us that in order for autistic youth to have adequate access to supports, providers of adult services—including basic services important to all adults—would ideally have the expertise necessary to serve clients with autism. For example, dentists may need to be prepared for behaviors often associated with autism, such as the inability to sit in the dental chair, agitation, or self-injury (e.g., head banging). Additionally, according to the panel, youth with autism may have difficulty developing new relationships with service providers and are negatively affected by high turnover rates. 
The panel noted that support providers who work directly with autistic individuals have a physically and emotionally demanding job and that turnover among these providers is likely related to prevailing wages for this work.

On Provider Expertise
"One of the things we need to do is teach the adult physicians how to deal with adults who have developmental disabilities. Beyond checking their blood sugar and their cholesterol levels and blood pressure, you need to be checking their life. How are they doing psychologically? How is their self-esteem? How are they doing occupationally? And how is their living environment with their families or away from their families?"

The panel cited research noting that any gaps or loss of services during the transition from a school-based support system to adulthood could have long-lasting detrimental effects on youths' health, employment, educational attainment, and family stability. To avoid gaps, according to the panel, the various programs for adults should have compatible eligibility requirements so that youth do not have to re-establish eligibility. Specifically, the panel said there should be a single point of entry into adult services, much as the school provides a single point of entry for many services for children. The panel also suggested that transition planning start earlier—as early as middle school—and end when youth turn 25. Finally, the panel noted that autistic youth and their parents need information to help them navigate the adult service system and find the supports they need. The panel said that transitioning youth with ASD should have access to the services they need regardless of their income, geographical location, race, or gender. For instance, the panel told us that low- and middle-income youth with ASD whose families cannot afford to pay for their services out-of-pocket would ideally be able to rely on public programs to help them pay for their adult services—which is not always the case currently.
The panel also stated that access to services should not vary depending on the states and communities where youth live. For example, in rural areas, support services like transportation or telemedicine—wherein providers serve patients remotely, such as via video conference—may help improve access to services that are more common in large cities. The panel also discussed the need to maintain a continuum of care as youth move across state lines, noting that even though eligibility rules and service provision may differ from state to state, youths' needs for services and funding do not necessarily change when they move. The panel suggested a portable and flexible funding mechanism that would allow youth to pay for services in any location. Finally, in order to have equitable opportunities to pursue their adult goals, the panel said that some demographic groups, such as females or minorities, may have specific needs. For example, according to the panel, workplaces may have higher social expectations for women than men, and mentoring from other women on the spectrum could help them learn how to react to these expectations. The panel also noted that girls and minority students are diagnosed with ASD at a later age than other youth, on average—sometimes after they have left high school. As a result, the panel said, they may have received fewer services and may need more help as they transition to adulthood. Additionally, the panel said that minority students—especially those living in low-income, urban areas—may face issues compounded by having less preparation for the transition during their school years, including less parental education on available resources as well as less access to medical care.
To address their needs, the panel discussed services such as additional transition planning, education and training for parents on services youth are entitled to, and medical case managers who can help these youth locate, access, and communicate with medical care providers.

On Access to Services
"Without transportation, it can be difficult or impossible for autistic youth to hold jobs, pursue higher education, go to community events, meet with peers, access healthcare, or develop relationships, hobbies, or interests outside of the home."

According to the panel, services for transitioning youth with ASD should be well-coordinated, with service providers, youth, and their families agreeing on common goals and communicating regularly. The panel told us that, because autism can affect so many aspects of an individual's health and behavior, coordinated care that supports the whole person is particularly important for youth with ASD. For example, according to the panel, an individual needing care from both a mental health practitioner and a developmental disability doctor may not be adequately supported by either provider, because each may assume the other has primary responsibility and thus neither may take a holistic view of the individual's care. Additionally, the panel said that some medical personnel find that other service providers are among the best sources of information about their patients. The panel advocated for a comprehensive approach in which service providers work together to simultaneously address issues of employment, housing, health care, and behavioral health. The panel specifically mentioned two examples of holistic supports: (1) mental health services should be coordinated with other services, such as employment, housing, or education; and (2) medication should be managed in coordination with a person's behavioral health goals and current status.

On Holistic Services
"I think the need is for holism. You really need a team approach with health care, behavioral health, and supported employment. It kind of becomes a necessity that that's a package deal at this point."

The panel emphasized the importance of providing services in the community, both by providing some educational services outside the classroom and by locating services in the neighborhoods where youth live. First, the panel noted that many life skills, such as grocery shopping or taking a bus, are best learned experientially—by practicing them in the community—especially for youth with ASD who may have difficulty transferring skills learned in one location, such as the classroom, to another environment, such as the community. Additionally, the panel told us that some youth with ASD may have difficulty making decisions that depend on changing input. For example, when deciding whether to cross the street, it is not enough to know whether a car is coming. The decision also depends on judging how far away the car is, a factor that continually changes. It is difficult to gain experience making these types of decisions without practicing them experientially.

On Experiential Learning
"Curricula are just words on paper without the supportive environments needed to practice these."

Second, the panel also discussed several benefits of providing services close to youths' homes:

• Local services may be easier to reach for youth with limited access to transportation.

• Local venues may be more familiar, which may help mitigate difficulties navigating new situations.

• Local services may help youth better integrate into their community. Rather than youth with ASD becoming known as those who are bused away, receiving services near home may help their neighbors get to know them as members of the community.

On Community-Level Solutions
"I think we need to be fostering community-level solutions. If all of our focus is entirely on clinical interventions for one person at a time, we're going to be missing the more important questions about how do we create the communities, neighborhoods, and social institutions that are welcoming and inclusive of people with disabilities."

Consistent with our recent work on autism research, the panel emphasized that limited research exists on the service needs of transitioning youth or adults with ASD. In a June 2015 report, we found that research into the needs of adults consistently received less federal funding than other areas of autism research, such as the biology of autism. Also consistent with our work, the panel said that services should be grounded in evidence-based research and programs should be held accountable for delivering results, but they underscored that more research was needed for this to be feasible. The panel recommended studying the developmental disability system as a whole to determine what large-scale changes may be needed. Some of the specific areas of research they recommended included:

• the needs of transitioning youth and adults with ASD;
• effective adult services;
• comorbid conditions and health needs;
• improving collaboration among service providers; and
• efficient uses of funding.

On Need for Research
"I would rather see investments in performance measurement, quality improvement, and accountability that promise to help all people by incentivizing learning systems that aim to continuously experiment and improve…for everyone. The really shocking thing in all this is that we spend roughly $130 billion per year on services for autism across the lifespan but we have so little insight into who gets what and with what effect."

To improve the ability of youth with ASD to fully participate in society, the panel cited the need for a paradigm shift, including a new approach to supports.
In this new paradigm, society and autistic youth alike would share the responsibility for inclusion, and the public and service providers would have a better understanding of autistic youths' potential. The panel likened the magnitude of this new paradigm to that of the changes made to physical infrastructure, such as curb cuts, ramps, and elevators, that improved mobility for people with physical disabilities. The panel told us that one element of the new paradigm would be the understanding that the responsibility for the inclusion of autistic youth rests on both the individual with autism and society. The panel said society expects individuals with autism to continually adapt to situations that are difficult for them—even though adaptation itself is difficult for many people with ASD. The panel acknowledged that people with autism are responsible for learning about social expectations for school, work, and social settings, but said that institutions and communities should also learn how best to support individuals with ASD, given the characteristics of autism. For example, the panel said that explaining the "unwritten rules" of workplace social behavior to youth with autism may facilitate their inclusion. On the other hand, the panel emphasized that society should acknowledge that some unwritten rules may lead to the exclusion of autistic youth. For example, the panel noted that it may not be reasonable to expect people with autism to smile or make small talk even though smiling and chatting are often expected in certain situations, such as in the workplace.

On a New Approach to Supporting Youth with Autism
"And if you think about social navigation and autism, and you think about physically navigating a space if you have a mobility difference, what are universal design standards? I don't mean just the structure of a building but what kinds of things could be universally implemented to make those places more accessible for us with our specific disability…so that the onus isn't always on the individual to have to mimic social behaviors that are expected in order to have a standard of living of some kind."

The panel said that some of the changes they envision might already be supported by federal laws that govern services to people with disabilities, including youth with autism. For example, the panel told us that although existing law requires the provision of experiential learning opportunities for youth with disabilities, including autism, some teachers provide instruction only in a classroom setting despite the benefits of experiential learning. The panel described how greater public understanding of the characteristics, potential, and challenges of autism would be key to the new paradigm. According to the panel, lack of knowledge about autism can result in misinterpretations of the meanings behind some behaviors. For example, an autistic student with anxiety and depression may miss classes due to those comorbid conditions and may not seek help due to communication difficulties caused by autism. Without an understanding of autism and the behavioral effects of its interaction with comorbid conditions, professors or university personnel may attribute the student's absences to a lack of interest or motivation, underestimate the severity of the situation, and miss an opportunity to help that student. The panel also said a public awareness campaign could deepen the public's understanding of autism and help people think about how they can better support people with ASD. The panel suggested educational sessions that describe the diversity among people with autism, promote tolerance, and help create a safe environment to support youth on the spectrum.
In addition to the wider public, the panel said that professionals who interact with people with autism need education and training about autism. The panel cited a wide range of fields—such as health care, justice, education, transportation, mental health, public safety, and public services—whose members need to understand autism characteristics and be prepared to support individuals with autism. Specifically, the panel noted that, while youth with autism have clear needs for supports, people should not make assumptions about an individual's competence based on a single characteristic of autism. The panel told us that some people may base their assumptions about the capabilities of an autistic youth on the youth's ability to communicate verbally. For example, people may erroneously equate a lack of verbal communication skills with an inability to make decisions or express preferences, while assuming that those autistic youth who can speak have no communication issues and no need for supports. Consistent with this, the panel noted that those people closest to youth with autism should foster high expectations when preparing them for adulthood. For example, one panelist said that parents and teachers should expect youth to learn to be more independent, noting that people often adopt a risk-averse approach to teaching autistic youth, coming to the rescue when someone does not succeed at a task. This can create "learned helplessness" and missed opportunities to teach the valuable life skill of perseverance. Rather, the panel said that youth should begin building self-advocacy skills—identifying and asking for what they need—in middle and high school. For example, one panelist said that youth should learn to identify their sensory, social, and communication differences and strengths. The new paradigm would include fostering a better understanding of the potential of youth with ASD, according to the panel.
Specifically, the panel told us that recognizing the strengths of individuals with ASD can greatly improve their opportunities, while focusing on the difficulties associated with autism may limit them.

On Contributing to Society
"These aren't just people who are service users and need things all the time. These are people who can contribute to society, their campus communities. And how do we create opportunities for them to be leaders, to be mentors, as part of the service delivery process?"

The panel said some characteristics of autism, such as a tendency toward repetitive actions and a preference for routine, could be an advantage in the workplace. For example, the panel told us about a youth who enjoyed stacking boxes—an interest that translated into a job skill needed in the community's home improvement store—and another young man whose intense focus and interest resulted in encyclopedic knowledge of heating, ventilation, and air conditioning. The second youth became the most reliable worker in his internship and reported to work early every day. The panel discussed a national financial services company that initiated an agricultural program for employing youth with disabilities, including autism, because the company saw their comfort with routine and reliability as strengths. The panel also discussed a national retailer that hired workers with disabilities, including autism, and found that those workers had lower job turnover rates and were just as productive as workers without disabilities. The panel noted that better inclusion of people with ASD could help reduce the stigma associated with autism and reduce bullying, which the panel suggested occurred frequently among youth with ASD. The panel told us that some youth miss out on supports, including helpful medical devices, that may be available to them because they fear being stigmatized if they disclose their autism.
A greater understanding of autism in general could decrease the sense of stigma, similar to the way that physical disabilities are becoming less stigmatized over time. In addition, people may be less likely to make flawed judgments or act on stereotypes if they have an increased understanding and acceptance of the different interaction styles characteristic of individuals with autism. The panel noted that one way to reduce bullying was to implement school-based anti-bullying training that focuses on respect and acceptance from pre-kindergarten through 12th grade. Greater public awareness of autism could also improve the safety of autistic youth when they are in the community. For example, the panel discussed the prevalence of abuse, including sexual abuse, among youth with autism and noted that increased awareness of the risks to autistic youth in the community would increase their safety.

On Benefits of Community Awareness and Integration
"If someone abused our son, we wouldn't know. He cannot tell us. …The best safety and security system is more eyes and ears. And so the more people that are coming and going, the more eyes and ears, the more that we are part of community and integrated into community, the better off we are and the safer we will be."

Additionally, the panel described the benefits of police officers knowing how to appropriately aid an autistic person who is acting erratically, as well as teaching youth with autism how to interact with the police. The panel said that because some youth with autism may not appropriately respond to verbal commands, increased awareness of autism among police officers was critical given the potential consequences of miscommunication—such as incarceration or violence. We provided a draft of this report to Education and HHS for comment. They provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and to the Department of Education. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This is the first in a series of reports on youth with autism transitioning to adulthood. In this report, we focus on describing the needs of transitioning youth with Autism Spectrum Disorder (ASD). A second engagement will examine the services provided to transitioning youth with ASD and any challenges they may face obtaining them. To describe the needs of transitioning youth with ASD, we addressed the following questions:

1. What services and supports do transitioning youth with autism need to attain their goals for adulthood?
2. What are the characteristics of needed services and supports?
3. How can youth with autism be fully integrated into society?

To answer our research questions, we convened a roundtable discussion on March 3 and 4, 2016. We selected a total of 24 panelists, including:

• service providers, including teachers, developmental pediatricians, transition specialists, behavioral therapists, and leaders of programs providing employment supports, college supports, legal representation, social supports, and residential supports, among others;

• employers who provide supports for workers with autism; and

• parents of youth and adults with autism with a range of ages and support needs.

Most panelists fell into more than one of these categories.
We chose panelists based on the following:

• their expertise about autism, as measured by their personal or professional experience with autism or participation on relevant Boards of Directors or the Interagency Autism Coordinating Committee;

• recommendations from others with subject matter knowledge, including researchers and organizations we interviewed during our background research as well as other panelists; and

• backgrounds and perspectives reflecting the diversity of the autism community, including variations in service areas, income, race, gender, geographic location, and urbanicity.

We considered panelists' personal backgrounds as well as the autistic populations they worked with. The full list of panelists is included in the agenda in appendix II. We asked the panel about the services that transitioning youth across the autism spectrum need to help them achieve five goals for adulthood:

1. Postsecondary education.
2. Employment.
3. Maximizing independent living.
4. Health and safety.
5. Maximizing community integration.

We chose these goals because they were either listed in the transition planning requirements for high school students receiving special education services (goals 1, 2, and 3) or described by potential panelists in pre-selection interviews as particularly important (goals 4 and 5). To guide our discussion, we asked the panel the following questions about each of the five goals:

• What autism characteristics impede or facilitate achieving the goal?
• What services/supports/institutional changes help youth manage or leverage those characteristics?
• What happens when youth receive each service/support? What happens when they do not receive it?
• Have we discussed youth with the greatest, moderate, and least needs for each service/support?
• Are there special considerations for young women, minorities, low-income youth, rural youth, or other groups?
We asked the panel to describe services ideally needed for a successful transition to adulthood, regardless of what services may currently be available or feasible. We did not verify the accuracy of the information they provided. The panel also pointed out that many of the services needed to support youth with ASD may also support youth with other disabilities. While we focus on how these services would particularly address characteristics associated with autism, the panel also cautioned against singling out autistic youth unnecessarily. Appendix II includes the agenda for the panel including a list of invited panelists. Of the 24 panelists, 2 were not present at the actual event, including an autistic individual who does not use spoken language. We interviewed them afterward to obtain their thoughts on the same topics covered during the panel and included this information in our analysis. We asked panelists to provide comments on a draft of this report, which we incorporated as appropriate. We conducted a content analysis of the transcript of the 2-day event as well as documents the panelists submitted in writing to clarify and support information discussed during the panel meeting. Specifically, we used the NVivo qualitative analysis software to classify each sentence into one or more categories, depending on whether the speaker was discussing: One or more of the five goals for adulthood; One or more services; or One or more of several cross-cutting needs—issues that applied across services or goals. We identified these categories by creating lists of all the services and cross-cutting issues the panel mentioned at least once during the discussion or in the documents. For brevity and organizational purposes, we then grouped them into broader categories based on their relatedness and our understanding of autism services garnered from literature and interviews. 
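The sentence-coding approach described above, in which each sentence is tagged with one or more categories and two coders are checked against each other, can be pictured as a simple percent-agreement calculation. GAO performed this analysis in NVivo, not in code; the category labels and coded sample below are hypothetical and purely illustrative.

```python
# Hedged sketch of multi-category sentence coding with an inter-coder
# percent-agreement check. The categories and sample data are invented.

def percent_agreement(coder_a, coder_b):
    """Share of sentences for which both coders chose the same category set."""
    assert len(coder_a) == len(coder_b), "both coders must code the same sample"
    matches = sum(1 for a, b in zip(coder_a, coder_b) if set(a) == set(b))
    return matches / len(coder_a)

# Each sentence may receive one or more categories (goals, services, or
# cross-cutting needs), mirroring the coding scheme described in the text.
coder_a = [{"employment"}, {"education", "funding"}, {"independent living"},
           {"health and safety"}]
coder_b = [{"employment"}, {"education", "funding"}, {"independent living"},
           {"community integration"}]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # prints "Agreement: 75%"
```

A result above a chosen threshold (GAO reports over 95 percent on its actual sample) would indicate the coding scheme was applied consistently.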
This process resulted in 14 broad categories of services, which are listed and described in appendix III, and 13 categories of cross-cutting needs: public awareness of autism; service provider awareness of autism; sufficient access to supports; adequate and flexible funding; coordination among service providers; staff quality and training; equity issues for specific groups; and other cross-cutting issues. Two analysts categorized the sentences in NVivo. To ensure consistency, both analysts classified the same sample of the transcript and a reliability analysis determined that they chose the same categories over 95 percent of the time. Each analyst also spot-checked the other’s work. After categorizing the sentences, we analyzed the text in each category to identify the emerging themes. We attribute these themes, as well as specific examples given for illustrative purposes, to “the panel” throughout this report. For convenience, we use the term “youth” in this report to describe individuals aged 14 to 24, the age range when they may be eligible for transition services, depending on their state. This age range is in keeping with our previous work as well as other federal programs and organizations. During the transition years, youth may receive services for children, adults, or both. The panel identified 14 broad categories of services and supports that may help youth with Autism Spectrum Disorder (ASD) attain the goals of education, employment, health/safety, independent living, and community integration as they transition to adulthood. Table 6 shows which services may support each goal, according to the panel. Figures 3 through 16 illustrate the same information, as we describe each service. In addition to the contact named above, Nagla’a El-Hodiri (Assistant Director), Brittni Milam (Analyst in Charge), Sandra Baxter, Farrah Graham, and Walter Vance made key contributions to this report.
Also contributing to this report were: James Bennett, Holly Dye, David Forgosh, Flavio Martinez, Daniel Meyer, Sheila McCoy, Arthur Merriam, Vernette Shaw, and Adam Wendel.
About half a million youth with ASD will enter adulthood over the next decade. As they exit high school, they must obtain services as adults. Previous GAO work has shown that students with disabilities who are transitioning to adulthood face challenges identifying and obtaining adult services. GAO was asked to study the services and supports youth with ASD need during the transition to adulthood. This is the first in a series of reports. GAO studied (1) the services and supports transitioning youth with ASD need to attain their goals for adulthood, (2) the characteristics of these services and supports, and (3) how youth with ASD can be fully integrated into society. To address these objectives, GAO convened a roundtable discussion on March 3 and 4, 2016. GAO selected 24 panelists, including adults with ASD, service providers, researchers, and parents of youth with ASD. GAO interviewed prospective panelists in advance of the discussion and selected a panel with a broad base of expertise reflecting the diversity of the autism community. The panel described the services and supports that youth with ASD may need to help them achieve five goals for adulthood: postsecondary education; employment; maximizing independent living; health and safety; and maximizing community integration. GAO analyzed the transcripts of the panel as well as documents provided by panelists. GAO is not making recommendations in this report. Youth with Autism Spectrum Disorder (ASD) transitioning to adulthood may need a wide range of services and supports to help them achieve their goals, according to a panel GAO convened in March 2016. ASD is a highly individualized condition with characteristics that vary in degree and type from person to person. Autism characteristics may hinder or help youth achieve their goals—such as postsecondary education and community integration.
For each goal, the panel described services and supports that youth (ages 14-24) with ASD transitioning to adulthood may need to address autism characteristics and other health conditions that affect their ability to attain the goal. GAO grouped these services into 14 broad categories. To support a successful transition into adulthood, the panel said youth need to be able to access services that are individualized, timely, equitable, and community- and evidence-based, among other things. The panel discussed the need for timely, individualized services that address the variation in autism characteristics and any changes over a person's lifetime. For example, a person's verbal abilities may change over time, and their needs for communication services would also change. The panel said transitioning youth with ASD need equitable access to services regardless of their race, gender, family income, or location. For instance, the panel said that female and minority youth may be diagnosed at a later age and thus receive fewer services during school and may need additional transition planning services. The panel also emphasized the need for services within youths' local communities in order to foster access and community involvement. In addition, the panel said that while services should be evidence-based, more research into program efficacy is needed. To improve the ability of autistic youth to fully integrate into society, the panel cited the need for a new approach to providing supports and better public understanding of autism. Such an approach would place a shared responsibility for inclusion on both society and youth with ASD. For example, according to the panel, youth with ASD should learn workplace social expectations and meet them to the extent they can, but employers should also recognize that some social rules, such as expecting individuals to smile, can be difficult for some individuals with autism. 
The panel also said that widespread knowledge of autism could lead to better understanding of autistic youths' potential and enhance their chances of attaining it.
In their efforts to modernize their health information systems and share medical information, VA and DOD begin from different positions. As shown in table 1, VA has one integrated medical information system, VistA (Veterans Health Information Systems and Technology Architecture), which uses all electronic records. All 128 VA medical sites thus have access to all VistA information. (Table 1 also shows, for completeness, VA’s planned modernized system and its associated data repository.) In contrast, DOD has multiple medical information systems (see table 2). DOD’s various systems are not integrated, and its 138 sites do not necessarily communicate with each other. In addition, not all of DOD’s medical information is electronic: some records are paper-based. For almost a decade, VA and DOD have been pursuing ways to share data in their health information systems and create comprehensive electronic records. However, the departments have faced considerable challenges, leading to repeated changes in the focus of their initiatives and target dates for accomplishment. As shown in figure 1, the departments’ efforts have involved a number of distinct initiatives, both long-term initiatives to develop future modernized solutions, and short-term initiatives to respond to more immediate needs to share information in existing systems. As the figure shows, these initiatives often proceeded in parallel. The departments’ first initiative, known as the Government Computer-Based Patient Record (GCPR) project, aimed to develop an electronic interface that would let physicians and other authorized users at VA and DOD health facilities access data from each other’s health information systems. The interface was expected to compile requested patient information in a virtual record (that is, electronic as opposed to paper) that could be displayed on a user’s computer screen.
In 2001 and 2002, we reviewed the GCPR project and noted disappointing progress, exacerbated in large part by inadequate accountability and poor planning and oversight, which raised doubts about the departments’ ability to achieve a virtual medical record. We determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. We made recommendations in both years that the departments enhance the project’s overall management and accountability. In particular, we recommended that the departments designate a lead entity and a clear line of authority for the project; create comprehensive and coordinated plans that include an agreed-upon mission and clear goals, objectives, and performance measures; revise the project’s original goals and objectives to align with the current strategy; commit the executive support necessary to adequately manage the project; and ensure that it followed sound project management principles. In response, the two departments revised their strategy in July 2002, refocusing the project and dividing it into two initiatives. A short-term initiative (the Federal Health Information Exchange or FHIE) was to enable DOD, when service members left the military, to electronically transfer their health information to VA. VA was designated as the lead entity for implementing FHIE, which was successfully completed in 2004. A longer-term initiative was to develop a common health information architecture that would allow the two-way exchange of health information. The common architecture is to include standardized, computable data, communications, security, and high-performance health information systems (these systems, DOD’s CHCS II and VA’s HealtheVet VistA, were already in development, as shown in the figure).
The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). The two repositories are to exchange information through an interface named CHDR. In March 2004, the departments began to develop the CHDR interface, and they planned to begin implementation by October 2005. However, implementation of the first release of the interface (at one site) occurred in September 2006, almost a year later. In a review in June 2004, we identified a number of management weaknesses that could have contributed to this delay and made a number of recommendations, including creation of a comprehensive and coordinated project management plan. In response, the departments agreed to our recommendations and improved the management of the CHDR program by designating a lead entity with final decision-making authority and establishing a project management structure. As we noted in later testimony, however, the program did not develop a project management plan that would give a detailed description of the technical and managerial processes necessary to satisfy project requirements (including a work breakdown structure and schedule for all development, testing, and implementation tasks), as we had recommended. In October 2004, the two departments established two more short-term initiatives in response to a congressional mandate. These were two demonstration projects: the Laboratory Data Sharing Interface, aimed at allowing VA and DOD facilities to share laboratory resources, and the Bidirectional Health Information Exchange (BHIE), aimed at allowing both departments’ clinicians access to records on shared patients (that is, those who receive care from both departments). As demonstration projects, both initiatives were limited in scope, with the intention of providing interim solutions to the departments’ need for more immediate health information sharing.
However, because BHIE provided access to up-to-date information, the departments’ clinicians expressed strong interest in increasing its use. As a result, the departments began planning to broaden BHIE’s capabilities and expand its implementation considerably. Until the departments’ modernized systems are fully developed and implemented, extending BHIE connectivity could provide each department with access to most data in the other’s legacy systems. According to a VA/DOD annual report and program officials, the departments now consider BHIE an interim step in their overall strategy to create a two-way exchange of electronic medical records. Most recently, the departments have announced a further change to their information-sharing strategy. In January 2007, they announced their intention to jointly develop a new inpatient medical record system. According to the departments, adopting this joint solution will facilitate the seamless transition of active-duty service members to veteran status, as well as making inpatient health-care data on shared patients immediately accessible to both DOD and VA. In addition, the departments consider that a joint development effort could allow them to realize significant cost savings. We have not evaluated the departments’ plans or strategy in this area. Throughout the history of these initiatives, evaluations beyond ours have also found deficiencies in the departments’ efforts, especially with regard to the need for comprehensive planning. For example, in fiscal year 2006, the Congress did not provide all the funding requested for HealtheVet VistA because it did not consider that the funding had been adequately justified. In addition, a recent presidential task force identified the need for VA and DOD to improve their long-term planning. 
This task force, reporting on gaps in services provided to returning veterans, noted problems with regard to sharing information on wounded service members, including the inability of VA providers to access paper DOD inpatient health records. According to the report, although significant progress has been made on sharing electronic information, more needs to be done. The task force recommended that VA and DOD continue to identify long-term initiatives and define scope and elements of a joint inpatient electronic health record. VA and DOD have made progress in both their long-term and short-term initiatives to share health information. In the long-term project to develop modernized health information systems, the departments have begun to implement the first release of the interface between their modernized data repositories, among other things. The two departments have also made progress in their short-term projects to share information in existing systems, having completed two initiatives and making important progress on another. In addition, the two departments have undertaken ad hoc activities to accelerate the transmission of health information on severely wounded patients from DOD to VA’s four polytrauma centers. However, despite the progress made and the sharing achieved, the tasks remaining to achieve the goal of a shared electronic medical record remain substantial. In their long-term effort to share health information, VA and DOD have completed the development of their modernized data repositories, agreed on standards for various types of data, and begun to populate the repositories with these data. In addition, they have now implemented the first release of the CHDR interface, which links the two departments’ repositories, at seven sites. The first release has enabled the seven sites to share limited medical information: specifically, computable outpatient pharmacy and drug allergy information for shared patients.
According to DOD officials, in the third quarter of 2007 the department will send out instructions to its remaining sites so that they can all begin using CHDR. According to VA officials, the interface will be available across the department when necessary software updates are released, which is expected this July. Besides being a milestone in the development of the departments’ modernized systems, the interface implementation provides benefits to the departments’ current systems. Data transmitted by CHDR are permanently stored in the modernized data repositories, CDR and HDR. Once in the repositories, these computable data can be used by DOD and VA at all sites through their existing systems. CHDR also provides terminology mediation (translation of one agency’s terminology into the other’s). VA and DOD plans call for developing the capability to exchange computable laboratory results data through CHDR during fiscal year 2008. Although implementing this interface is an important accomplishment, the departments are still a long way from completion of the modernized health information systems and comprehensive longitudinal health records. While DOD and VA had originally projected completion dates for their modernized systems of 2011 and 2012, respectively, department officials told us that there is currently no scheduled completion date for either system. Further, both departments have still to identify the next types of data to be stored in the repositories. The two departments will then have to populate the repositories with the standardized data, which involves different tasks for each department. Specifically, although VA’s medical records are already electronic, it still has to convert these into the interoperable format appropriate for its repository. DOD, in addition to converting current records from its multiple systems, must also address medical records that are not automated. 
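The terminology mediation that CHDR performs, translating one agency's vocabulary into the other's as data are exchanged, can be pictured as a crosswalk lookup: a code in the source department's code system is mapped to the corresponding code in the partner's system before storage. The code systems, codes, and mappings below are invented for illustration and do not reflect the actual CHDR design.

```python
# Illustrative terminology-mediation lookup. The vocabularies and mappings
# here are hypothetical; they only show the translate-on-exchange idea.

CROSSWALK = {
    # (source_system, source_code) -> (target_system, target_code)
    ("DOD-MED", "D1234"): ("VA-MED", "V5678"),
    ("DOD-ALGY", "A0001"): ("VA-ALGY", "V-PCN"),
}

def mediate(system, code):
    """Translate a source code into the partner vocabulary where a mapping
    exists; pass unmapped codes through unchanged rather than dropping them."""
    return CROSSWALK.get((system, code), (system, code))

print(mediate("DOD-MED", "D1234"))  # prints ('VA-MED', 'V5678')
print(mediate("DOD-MED", "D9999"))  # unmapped: passed through unchanged
```

In a real mediation service the crosswalk would be a maintained standards mapping (for example, between drug vocabularies) rather than a hand-built table, and unmapped codes would likely be flagged for review instead of silently passed through.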
As pointed out by a recent Army Inspector General’s report, some DOD facilities are having problems with hard-copy records. In the same report, inaccurate and incomplete health data were identified as a problem to be addressed. Before the departments can achieve the long-term goal of seamless sharing of medical information, all these tasks and challenges will have to be addressed. Consequently, it is essential for the departments to develop a comprehensive project plan to guide these efforts to completion, as we have previously recommended. In addition to the long-term effort described above, the two departments have made some progress in meeting immediate needs to share information in their respective legacy systems by setting up short-term projects, as mentioned earlier, which are in various stages of completion. In addition, the departments have set up special processes to transfer data from DOD facilities to VA’s polytrauma centers, which treat traumatic brain injuries and other especially severe injuries. DOD has been using FHIE to transfer information to VA since 2002. According to department officials, over 184 million clinical messages on more than 3.8 million veterans had been transferred to the FHIE data repository as of March 2007. Data elements transferred are laboratory results, radiology results, outpatient pharmacy data, allergy information, consultation reports, elements of the standard ambulatory data record, and demographic data. Further, since July 2005, FHIE has been used to transfer pre- and post-deployment health assessment and reassessment data; as of March 2007, VA had access to data for more than 681,000 separated service members and demobilized Reserve and National Guard members who had been deployed. Transfers are done in batches once a month, or weekly for veterans who have been referred to VA treatment facilities. 
According to a joint DOD/VA report, FHIE has made a significant contribution to the delivery and continuity of care of separated service members as they transition to veteran status, as well as to the adjudication of disability claims. One of the departments’ demonstration projects, the Laboratory Data Sharing Interface (LDSI), is now fully operational and is deployed when local agencies have a business case for its use and sign an agreement. It requires customization for each locality and is currently deployed at nine locations. LDSI currently supports a variety of chemistry and hematology tests, and work is under way to include microbiology and anatomic pathology. Once LDSI is implemented at a facility, the only nonautomated action needed for a laboratory test is transporting the specimens. If a test is not performed at a VA or DOD doctor’s home facility, the doctor can order the test, the order is transmitted electronically to the appropriate lab (the other department’s facility or in some cases a local commercial lab), and the results are returned electronically. Among the benefits of LDSI, according to VA and DOD, are increased speed in receiving laboratory results and decreased errors from manual entry of orders. The LDSI project manager in San Antonio stated that another benefit of the project is the time saved by eliminating the need to rekey orders at processing labs to input the information into the laboratories’ systems. Additionally, the San Antonio VA facility no longer has to contract out some of its laboratory work to private companies, but instead uses the DOD laboratory. Developed under a second demonstration project, the BHIE interface is now available throughout VA and partially deployed at DOD. It is currently deployed at 25 DOD sites, providing access to 15 medical centers, 18 hospitals, and over 190 outpatient clinics associated with these sites. DOD planned to make current BHIE capabilities available departmentwide by June 2007. 
The interface permits a medical care provider to query patient data from all VA sites and any DOD site where it is installed and to view that data onscreen almost immediately. It not only allows DOD and VA to view each other’s information, it also allows DOD sites to see previously inaccessible data at other DOD sites. As initially developed, the BHIE interface provides access to information in VA’s VistA and DOD’s CHCS, but it is currently being expanded to query data in other DOD databases (in addition to CHCS). In particular, DOD has developed an interface to the Clinical Information System (CIS), an inpatient system used by many DOD facilities, which will provide bidirectional views of discharge summaries. The BHIE-CIS interface is currently deployed at five DOD sites and planned for eight others. Further, interfaces to two additional systems are planned for June and July 2007: An interface to DOD’s modernized data repository, CDR, will give access to outpatient data from combat theaters. An interface to another DOD database, the Theater Medical Data Store, will give access to inpatient information from combat theaters. The departments also plan to make more data elements available. Currently, BHIE enables text-only viewing of patient identification, outpatient pharmacy, microbiology, cytology, radiology, laboratory orders, and allergy data from its interface with DOD’s CHCS. Where it interfaces with CIS, it also allows viewing of discharge summaries from VA and the five DOD sites. DOD staff told us that in early fiscal year 2008, they plan to add provider notes, procedures, and problem lists. Later in fiscal year 2008, they plan to add vital signs, scanned images and documents, family history, social history, and other history questionnaires. In addition, at the VA/DOD site in El Paso, a trial is under way of a process for exchanging radiological images using the BHIE/FHIE infrastructure. Some images have successfully been exchanged. 
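The query pattern BHIE supports, asking every participating site for a patient's records and assembling the responses for on-screen viewing, can be sketched as a fan-out query across sites. The site names, record fields, and data below are hypothetical and only illustrate the pattern, not the BHIE implementation.

```python
# Hedged sketch of a BHIE-style fan-out query: ask each participating site
# for a patient's records and merge the responses for display. Site data
# and record fields are invented for illustration.

SITES = {
    "va-site-1":  [{"patient": "P1", "type": "pharmacy", "entry": "drug A"}],
    "va-site-2":  [{"patient": "P2", "type": "allergy",  "entry": "penicillin"}],
    "dod-site-1": [{"patient": "P1", "type": "lab",      "entry": "CBC result"}],
}

def query_all_sites(patient_id):
    """Return the patient's records from every site, tagged with their origin
    so a clinician can see which facility each entry came from."""
    results = []
    for site, records in SITES.items():
        for rec in records:
            if rec["patient"] == patient_id:
                results.append({"site": site, **rec})
    return results

for rec in query_all_sites("P1"):
    print(rec["site"], rec["type"], rec["entry"])
```

The design choice the pattern reflects is that records stay in each site's own system and are gathered on demand, which is why a clinician can see near-real-time data without the departments first building a shared repository.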
Through their efforts on these long- and short-term initiatives, VA and DOD are achieving exchanges of various types of health information (see attachment 1 for a summary of all the types of data currently being shared and those planned for the future, as well as cost data on the initiatives). However, these exchanges are as yet limited, and significant work remains to be done to expand the data shared and integrate the various initiatives. In addition to the information technology initiatives described, DOD and VA have set up special activities to transfer medical information to VA’s four polytrauma centers, which are treating active-duty service members severely wounded in combat. Polytrauma centers care for veterans and returning service members with injuries to more than one physical region or organ system, one of which may be life threatening, and which results in physical, cognitive, psychological, or psychosocial impairments and functional disability. Some examples of polytrauma include traumatic brain injury (TBI), amputations, and loss of hearing or vision. When service members are seriously injured in a combat theater overseas, they are first treated locally. They are then generally evacuated to Landstuhl Medical Center in Germany, after which they are transferred to a military treatment facility in the United States, usually Walter Reed Army Medical Center in Washington, D.C.; the National Naval Medical Center in Bethesda, Maryland; or Brooke Army Medical Center, at Fort Sam Houston, Texas. From these facilities, service members suffering from polytrauma may be transferred to one of VA’s four polytrauma centers for treatment. At each of these locations, the injured service members will accumulate medical records, in addition to medical records already in existence before they were injured. However, the DOD medical information is currently collected in many different systems and is not easily accessible to VA polytrauma centers. Specifically: 1. 
In the combat theater, electronic medical information may be collected for a variety of reasons, including routine outpatient care, as well as serious injuries. These data are stored in the Theater Medical Data Store, which can be accessed by unit commanders and others. (As mentioned earlier, the departments have plans to develop a BHIE interface to this system by July 2007. Until then, VA cannot access these data.) In addition, both inpatient and outpatient medical data for patients who are evacuated are entered into the Joint Patient Tracking Application. (A few VA polytrauma center staff have been given access to this application.) 2. At Landstuhl, inpatient medical records are paper-based (except for discharge summaries). The paper records are sent with a patient as the individual is transferred for treatment in the United States. 3. At the DOD treatment facility (Walter Reed, Bethesda, or Brooke), additional information will be recorded in CIS and CHCS/CDR. When service members are transferred to a VA polytrauma center, VA and DOD have several ad hoc processes in place to electronically transfer the patients’ medical information: ● DOD has set up secure links to enable a limited number of clinicians at the polytrauma centers to log directly into CIS at Walter Reed and Bethesda Naval Hospital to access patient data. ● Staff at Walter Reed collect paper records, print records from CIS, scan all these, and transmit the scanned data to three of the four polytrauma centers. DOD staff said that they are working on establishing this capability at the Brooke and Bethesda medical centers, as well as the fourth VA polytrauma center. According to VA staff, although the initiative began several months ago, it has only recently begun running smoothly as the contractor became more skilled at assembling the records. 
DOD staff also pointed out that this laborious process is feasible only because the number of polytrauma patients is small (about 350 in all to date); it would not be practical on a large scale. ● Staff at Walter Reed and Bethesda are transmitting radiology images electronically to three polytrauma centers. (A fourth has this capability, but at this time no radiology images have been transferred there.) Access to radiology images is a high priority for polytrauma center doctors, but like scanning paper records, transmitting these images requires manual intervention: when each image is received at VA, it must be individually uploaded to VistA’s imagery viewing capability. This process would not be practical for large volumes of images. ● VA has access to outpatient data (via BHIE) from 25 DOD sites, including Landstuhl. Although these various efforts to transfer medical information on seriously wounded patients are working, and the departments are to be commended on their efforts, the multiple processes and laborious manual tasks illustrate the effects of the lack of integrated health information systems and the difficulties of exchanging information in their absence. In summary, through the long- and short-term initiatives described, as well as efforts such as those at the polytrauma centers, VA and DOD are achieving exchanges of health information. However, these exchanges are as yet limited, and significant work remains to be done to fully achieve the goal of exchanging interoperable, computable data, including agreeing to standards for the remaining categories of medical information, populating the data repositories with all this information, completing the development of HealtheVet VistA and AHLTA (DOD’s modernized system, formerly known as CHCS II), and transitioning from the legacy systems. To complete these tasks, a detailed project management plan continues to be of vital importance to the ultimate success of the effort to develop a lifelong virtual medical record.
We have previously recommended that the departments develop a clearly defined project management plan that describes the technical and managerial processes necessary to satisfy project requirements, including a work breakdown structure and schedule for all development, testing, and implementation tasks. Without a plan of sufficient detail, VA and DOD increase the risk that the long-term project will not deliver the planned capabilities in the time and at the cost expected. Further, it is not clear how all the initiatives we have described today are to be incorporated into an overall strategy toward achieving the departments’ goal of comprehensive, seamless exchange of health information. This concludes my statement. I would be pleased to respond to any questions that you may have. If you have any questions concerning this testimony, please contact Valerie C. Melvin, Director, Human Capital and Management Information Systems Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions to this testimony are Barbara Oliver, Assistant Director; Barbara Collier; and Glenn Spiegel. Table 3 summarizes the types of health data currently shared through the long- and short-term initiatives we have described, as well as types of data that are currently planned for addition. While this gives some indication of the scale of the tasks involved in sharing medical information, it does not depict the full extent of information that is currently being captured in health information systems and that remains to be addressed. Table 4 shows costs expended on these information sharing initiatives since their inception.
Computer-Based Patient Records: Better Planning and Oversight by VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001.
Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002.
Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003.
Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004.
Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004.
Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005.
Information Technology: VA and DOD Face Challenges in Completing Key Efforts. GAO-06-905T. Washington, D.C.: June 22, 2006.
DOD and VA Exchange of Computable Pharmacy Data. GAO-07-554R. Washington, D.C.: April 30, 2007.
Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Are Far from Comprehensive Electronic Medical Records. GAO-07-852T. Washington, D.C.: May 8, 2007.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs (VA) and the Department of Defense (DOD) are engaged in ongoing efforts to share medical information, which is important in helping to ensure high-quality health care for active-duty military personnel and veterans. These efforts include a long-term program to develop modernized health information systems based on computable data: that is, data in a format that a computer application can act on--for example, to provide alerts to clinicians of drug allergies. In addition, the departments are engaged in short-term initiatives involving existing systems. GAO was asked to summarize its recent testimony on the history and current status of these long- and short-term efforts to share health information. To develop that testimony, GAO reviewed its previous work, analyzed documents, and interviewed VA and DOD officials about current status and future plans. For almost a decade, VA and DOD have been pursuing ways to share health information and create comprehensive electronic medical records. However, they have faced considerable challenges in these efforts, leading to repeated changes in the focus of their initiatives and target dates. In prior reviews of the departments' efforts, GAO noted management weaknesses, including the lack of a detailed project management plan to guide their efforts. Currently, the two departments are pursuing both long- and short-term initiatives to share health information. Under their long-term initiative, the modern health information systems being developed by each department are to share standardized computable data through an interface between data repositories associated with each system. The repositories have now been developed, and the departments have begun to populate them with limited types of health information. In addition, the interface between the repositories has been implemented at seven VA and DOD sites, allowing computable outpatient pharmacy and drug allergy data to be exchanged. 
Implementing this interface is a milestone toward the departments' long-term goal, but more remains to be done. Besides extending the current capability throughout VA and DOD, the departments must still agree to standards for the remaining categories of medical information, populate the data repositories with this information, complete the development of the two modernized health information systems, and transition from their existing systems. While pursuing their long-term effort to develop modernized systems, the two departments have also been working to share information in their existing systems. Among various short-term initiatives are a completed effort to allow the one-way transfer of health information from DOD to VA when service members leave the military, as well as ongoing demonstration projects to exchange limited data at selected sites. One of these projects, building on the one-way transfer capability, developed an interface between certain existing systems that allows a two-way view of current data on patients receiving care from both departments. VA and DOD are now working to link other systems via this interface and extend its capabilities. The departments have also established ad hoc processes to meet the immediate need to provide data on severely wounded service members to VA's polytrauma centers, which specialize in treating such patients. These processes include manual workarounds (such as scanning paper records) that are generally feasible only because the number of polytrauma patients is small. These multiple initiatives and ad hoc processes highlight the need for continued efforts to integrate information systems and automate information exchange. However, it is not clear how all the initiatives are to be incorporated into an overall strategy focused on achieving the departments' goal of comprehensive, seamless exchange of health information.
The U.S. Transportation Command (USTRANSCOM) was created in 1987 to unify defense transportation under a single manager during war, contingencies, and exercises. In 1992, the Secretary of Defense expanded USTRANSCOM’s mission to providing air, land, and sea transportation for DOD in both peace and war. USTRANSCOM executes its mission through three transportation component commands—Military Traffic Management Command (MTMC), Military Sealift Command (MSC), and Air Mobility Command (AMC). The responsibilities of each of the component commands are as follows:
• MTMC, the DOD manager for traffic management, land transportation, ocean terminals, and intermodal container management, manages freight movement, personal property shipment, and passenger traffic worldwide; operates water terminals throughout the world; and monitors traffic movements through all terminals.
• MSC, the DOD manager for sealift, is a ship operator and contracting agency for commercial shipping necessary to deliver cargo and petroleum worldwide and manages the Afloat Prepositioning Force that is used for forward deployment and early on-site availability of supplies and equipment.
• AMC, the DOD manager for airlift, is an airlift operator and the contracting agency for commercial augmentation airlift for wartime deployment of fighting forces and support of peacetime activities.
Figure 1.1 shows that the organizational structure of USTRANSCOM is divided into three separate transportation component commands: AMC, MTMC, and MSC. USTRANSCOM is also the DOD financial manager for all defense transportation in peace and war and has been responsible for managing the transportation portion of the Defense Business Operations Fund (DBOF) since October 1, 1992. The portion of DBOF attributable to transportation is called DBOF-Transportation (DBOF-T).
Through DBOF-T, the component commands establish the rates they charge customers for services rendered through the defense transportation system. These rates are intended to cover all the costs of providing services, including the cost of overhead, such as USTRANSCOM headquarters’ operating costs. Customers pay for transportation services and the associated overhead through their appropriated funds. DBOF guidance requires that USTRANSCOM recover its total costs from its customers, including total operating costs for all organizations. However, DBOF policy requires that the prices customers pay for transportation services reflect peacetime operating costs only. Mobilization costs are to be funded through direct appropriations. The former Chairman of the Subcommittee on Readiness, House Committee on Armed Services, asked us to determine whether the Department of Defense (DOD) can provide efficient and effective defense transportation in a changing national security environment. Specifically, our objectives were to determine (1) whether DOD is providing cost-effective and efficient transportation, (2) what factors drive transportation costs, and (3) whether any actions are necessary to ensure a successful reengineering of defense transportation that will improve efficiency and reduce costs. To determine whether DOD is providing cost-effective and efficient transportation and to identify what factors drive transportation costs, we analyzed data on the current USTRANSCOM and component command structure, including costs and number of DOD transportation personnel. We also analyzed staffing and operational data for organizations within both the USTRANSCOM headquarters and the component commands. In addition, we analyzed (1) current cost and billing data to identify trends and (2) DBOF-T data to determine how defense transportation costs are charged back to customers.
With regard to customer prices, we compared what the components were charging their customers with what the components were paying for the underlying transportation. In doing so, we analyzed a number of representative shipments made through MTMC and MSC. We traced the history of USTRANSCOM and the component command structure, including reviews of prior presidential commission, congressional committee, DOD, independent consultant, and USTRANSCOM studies and reports on defense transportation. To identify potential duplication of functions among the various organizational levels, we analyzed functional, process, and support tasks according to roles and mission statements and conducted extensive headquarters and field level data collection interviews and discussions to categorize functions, processes, and tasks as hands-on, transportation information process support, and/or administrative/management. We performed the same analyses to identify whether tasks being performed at various locations had to be performed at those specific locations. To determine whether any actions are necessary to ensure a successful reengineering of defense transportation that will improve efficiency and reduce costs, we reviewed data and reports on USTRANSCOM and Office of the Secretary of Defense reengineering efforts, plans, and status; reviewed and analyzed reports regarding successful reengineering techniques and guidelines for assessing reengineering efforts; and contacted and interviewed various defense transportation system commercial cargo carriers and customers, including representatives of the U.S. European Command, the U.S. Air Force-Europe, the U.S. Army-Europe, the U.S. Central Command USTRANSCOM Liaison Officer, and the Army-Air Force Exchange Service. We also reviewed data and documents from these representatives to obtain a broad-based perspective of carrier and customer comments regarding the current defense transportation system structure and results.
Work was conducted at:
• Headquarters, USTRANSCOM, Scott Air Force Base, Illinois;
• Headquarters, AMC, Scott Air Force Base, Illinois;
• AMC, 15th Air Force, 60th Aerial Port Squadron, Travis Air Force Base, California;
• Headquarters, MTMC, Falls Church, Virginia;
• MTMC-Eastern Area, Bayonne, New Jersey;
• MTMC-Western Area, Oakland Army Base, California;
• MTMC-1302nd Major Port Command, Oakland, California;
• MTMC-Europe, Rotterdam, Netherlands;
• MTMC-1318th Medium Port Command, BENELUX, Rotterdam, Netherlands;
• MTMC-Wheeler Army Air Field, Hawaii;
• MTMC-1316th Medium Port Command, Yokohama, Japan;
• Headquarters, MSC, Navy Yard, Washington, D.C.;
• MSC-Atlantic, Bayonne, New Jersey;
• MSC-Pacific, Oakland, California;
• MSC-Far East, Yokohama, Japan;
• MSC-Europe, London, United Kingdom;
• Headquarters, U.S. European Command, Stuttgart, Germany;
• USTRANSCOM Liaison Officer for the European Command, Stuttgart, Germany;
• Headquarters, U.S. Army-Europe, Heidelberg, Germany;
• Headquarters, U.S. Air Force-Europe, Ramstein Air Base, Germany;
• Office of the Deputy Chief of Staff for Logistics, Army Transportation;
• Office of the Assistant Commander for Navy Material Transportation, Navy Supply Systems Command, Arlington, Virginia;
• Office of the Deputy Chief of Staff for Installations & Logistics, U.S. Marine Corps, Arlington, Virginia;
• Office of the Assistant Executive Director for Transportation, Defense Logistics Agency, Alexandria, Virginia;
• Army-Air Force Exchange Service-Pacific Transportation Center, Oakland, California;
• Army-Air Force Exchange Service-Atlantic Transportation Center, Bayonne, New Jersey;
• Office of the Joint Chiefs of Staff, Washington, D.C.; and
• offices of commercial ocean carriers—American President Lines, Ltd., and Sea-Land Service, Inc., Oakland, California.
We also contacted and interviewed members of the private sector responsible for reengineering efforts within corporate entities, and collected and analyzed reengineering information they provided as it relates to defense transportation. For example, we spoke with officials from American President Lines and Aetna Casualty and Life. We also reviewed and analyzed pertinent logistics, reengineering, and transportation studies prepared by major consulting firms, and discussed observations and conclusions in those reports with representatives responsible for preparing them. DOD provided written comments on a draft of this report. These comments are presented and evaluated in chapters 2, 3, and 4, and are reprinted in appendix IV. DOD also provided other detailed comments for our consideration. We considered these comments and made changes as appropriate to our report. We conducted our work between August 1994 and September 1995 in accordance with generally accepted government auditing standards. Customers using defense transportation services pay substantially more than USTRANSCOM’s component commands do for basic commercial transportation. What USTRANSCOM’s component commands charge their customers to meet transportation requirements often far exceeds, sometimes by two or three times, what the commands pay to obtain those services. Our analysis of the key cost factors making up these charges is discussed in chapter 3. Defense transportation services for sustainment and, to a somewhat lesser extent, deployment are for the most part arranged through and provided by the commercial transportation sector. Most cargo is shipped on commercial U.S.-flag ships, moved pursuant to contracts with commercial ocean carriers, and is loaded on and off their ships by private sector or carrier labor. To the extent the cargo can be shipped in intermodal containers, it is shipped in containerized service.
The rates the component commands charge for services must cover the expenses USTRANSCOM incurs for the commercial services plus all other direct, indirect, and overhead expenses. For example, MSC is responsible for negotiating the rates and terms of carriage with the ocean carriers and paying their invoices. MTMC is responsible for booking service for individual shipments, preparing shipment documentation, clearing customs, and supporting MSC’s payment processes. Each must develop a budget and determine how much to charge its customers for each service to be provided. The rates are developed before the component commands know what the commercial carriers will charge DOD. Moreover, in any given year, the rates may include a factor to recover losses or return profits from prior years’ operations. Consequently, rates for a given shipping route may double or be cut in half from year to year even when the commercial carriers’ rates show little or no change. Our analysis of the component commands’ charges for arranging cargo movements on many of DOD’s high-volume, container cargo routes shows that the charges, in total, are substantially higher—from 24 to 201 percent higher—than the amounts carriers charge DOD. Although the charges of an individual component command may not always be higher than what the component command pays the carriers, the total (combined MSC/MTMC) charges for each shipment in our analysis were substantially higher than DOD’s carrier costs. We developed a number of case examples that illustrate the high costs customers pay. (See table 2.1.)
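The stabilized rate mechanism just described (rates developed before carrier charges are known, adjusted by a factor to recover prior-year losses or return prior-year gains) can be sketched with a simple calculation. The function name, the dollar figures, and the recoupment logic below are illustrative assumptions for exposition only, not the actual DBOF-T formula or actual DOD data.

```python
# Illustrative sketch (not the actual DBOF-T formula): a stabilized
# billing rate built from projected costs plus a recoupment factor
# for the prior year's accumulated operating result.

def stabilized_rate(projected_costs, projected_shipments, prior_year_result):
    """Rate per shipment: projected unit cost, raised to recoup a
    prior-year loss (negative result) or lowered to return a gain."""
    recoupment = -prior_year_result  # a loss raises rates; a gain lowers them
    return (projected_costs + recoupment) / projected_shipments

# With $50M in projected costs spread over 100,000 shipments, a $10M
# prior-year loss raises the per-shipment rate from $500 to $600, and a
# $10M prior-year gain lowers it to $400, even if the underlying
# commercial carrier charges are unchanged.
rate_after_loss = stabilized_rate(50_000_000, 100_000, -10_000_000)  # 600.0
rate_after_gain = stabilized_rate(50_000_000, 100_000, 10_000_000)   # 400.0
```

This hypothetical arithmetic shows why a route's billed rate can swing sharply from one year to the next while commercial carriers' own rates stay flat.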
They were based on charges for typical DOD shipments, each consisting of general (dry) cargo, 47 measurement tons each, transported in commercial carrier 40-foot containers, at rates for the low-cost carrier on each route, during the first quarter of fiscal year 1995. The examples reflect charges MSC and MTMC bill their customers for the costs they incur for negotiating rates with commercial carriers used to move DOD shipments; for contracting with the underlying carrier and paying its charges; and for the administrative expenses incurred to document the shipments and handle booking, manifesting, receiving, and customs clearance. The MSC/MTMC charges are compared with the costs of the underlying carrier—in each case, the low-rate carrier. It should be noted that we did not add to the carrier charges any costs that the customers might otherwise incur were they to do the work themselves or have some third party do it for them because customers may not have needed such services for every shipment or they may have been able to provide such services at little or no additional cost using existing traffic management staff. A more comprehensive set of examples is shown in appendix I. MSC and MTMC move or handle other types of shipments, such as in other sized containers, in noncontainerized service, or in import service. The charges for these moves will vary accordingly. However, most DOD cargo moves as general cargo, in containerized export service, in 40-foot containers, making the examples in the table representative of DOD shipments. DOD partially concurred with our findings. DOD acknowledged the difference between the defense transportation system and private industry charges. It attributed the difference to readiness/mobilization and overhead costs for the entire defense transportation system, which was designed to support both peacetime and mobilization/wartime transportation.
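The 24 to 201 percent figures cited above are straightforward markup calculations comparing the combined MSC/MTMC charges billed to the customer with the underlying low-rate carrier's charge. A minimal sketch follows; the dollar amounts are illustrative assumptions chosen to land on the low and high ends of the range, not actual values from table 2.1.

```python
# Markup comparison: combined component command charges (MSC + MTMC)
# versus the low-rate commercial carrier's charge for the same shipment.
# Dollar amounts are illustrative, not actual table 2.1 values.

def markup_percent(msc_charge, mtmc_charge, carrier_charge):
    """Percent by which the total billed to the customer exceeds
    what DOD actually pays the commercial carrier."""
    total_billed = msc_charge + mtmc_charge
    return 100 * (total_billed - carrier_charge) / carrier_charge

# Low end of the range found: customer billed $3,100 in total against
# a $2,500 carrier charge, a 24 percent markup.
low_end = markup_percent(2_200, 900, 2_500)     # 24.0
# High end: customer billed $6,020 in total against a $2,000 carrier
# charge, a 201 percent markup.
high_end = markup_percent(4_000, 2_020, 2_000)  # 201.0
```

Note that one component's individual charge can be below the carrier cost while the combined total is still well above it, which is why the comparison is made on the total billed.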
According to DOD, peacetime, industrial policy, and readiness/mobilization costs are not always severable; and, if all readiness/mobilization costs were excluded, the difference between defense transportation costs and private industry charges would be reduced. It further pointed out that billing rates are established 18 months prior to budget execution. Such stabilized rates are affected by accumulated operating result factors, which can result in lower, or higher, charges for some movements. We basically agree with DOD’s comments. With regard to the last comment, our report acknowledges that individual component command charges were on occasion lower than the underlying transportation costs. However, it is the total cost that the customer pays that is of concern. Regardless of what the underlying transportation cost is, the customer is always billed more to cover the costs for the excessive defense transportation infrastructure, as well as the costs associated with maintaining a mobilization/readiness capability. Major factors that drive USTRANSCOM’s defense transportation costs higher are (1) USTRANSCOM’s fragmented and inefficient organizational structure and management processes and (2) the need to maintain a mobilization capability. Separate processes are a product of separate commands. Much of DOD cargo today moves intermodally, by air, land, and sea transport. However, USTRANSCOM retains an outdated, inefficient, modally oriented organizational structure, with many collocated facilities. In fiscal year 1994, USTRANSCOM’s total expenses for defense transportation services were $5.6 billion. Neither USTRANSCOM nor the component commands, however, collect financial information in a manner that allows the actual and total cost of the organizational structure to be developed.
Mobilization and readiness requirements are also key cost drivers, but again, financial information is not collected and reported in a manner that clearly distinguishes the associated costs for these factors. Factors driving costs of defense transportation higher include (1) the costs associated with having fragmented transportation processes, (2) multiple organizational elements to implement these processes, and (3) a component command organizational structure that requires duplicative administrative support at multiple locations and the maintenance of personnel in locations where they may no longer be necessary to support intermodal transportation processes. Fragmentation refers to the fact that no one organization has responsibility for all aspects of traffic management or is able to meet customers’ needs regardless of transportation mode. This situation reflects the fact that management processes were developed independently of each other. Although USTRANSCOM was established to consolidate functions within one organization, because various component commands retain modal responsibilities, fragmentation remains, particularly in such areas as negotiating rates to move cargo, shipment routing, documenting shipments for control and payment, and customer billing. The rate negotiation process is inefficient and not designed to facilitate customer services. USTRANSCOM employs five separate systems and strategies for negotiating rates, and each system reflects a particular service’s approach to procurement. Accordingly, it can take as many as five separate USTRANSCOM units to negotiate the rates for a single shipment. Thus, a customer may need to contact five separate units to get all the rates needed to move a single shipment. As a result, the customer often experiences delays in getting needed services and may become interested in circumventing the system. Circumvention may be to the customer’s immediate advantage but not advantageous to DOD overall.
Moreover, separate negotiation units add more people and costs to the system. The following are various types of shipments and the units used for negotiating the rates.
• For domestic continental United States (CONUS) freight shipments, and the CONUS portion of international shipments not moving as part of a through-intermodal move, MTMC’s Office of the Deputy Chief of Staff for Operations, which has a staff of about 20 traffic management specialists, negotiates for land transportation, inland waterway transportation, and less-than-plane load air transportation with U.S. motor carriers, railroads, freight forwarders, barge carriers, and air cargo companies.
• For international freight shipments, MSC’s Central Technical Activity, Contracts and Business Directorate and its staff of 36, who are primarily contracting specialists, negotiate for ocean transportation with ocean carriers.
• For foreign transportation, MTMC’s overseas commands, such as MTMC-Europe, Directorate of Inland Theater Transportation, negotiate for land, inland waterway, and air rates, as required, in their areas of responsibilities.
• For stevedore and terminal services, MTMC’s Office of the Principal Assistant Responsible for Contracting, with a staff of seven contracting specialists, negotiates contracts with port interests. Other units negotiate for such services overseas.
• For personal property shipments, another part of MTMC’s Office of the Deputy Chief of Staff for Operations, which has a staff of about 10 traffic management specialists, negotiates household goods and unaccompanied baggage freight rates for CONUS land and international water and air transportation with through-bill-of-lading commercial moving van companies and freight forwarders. MTMC’s overseas commands also negotiate rates with overseas movers and forwarders for intratheater personal property movement.
Customers cannot go to one, single unit within USTRANSCOM to obtain information on carriers and routing for all modes or for the movement of a particular shipment through several modes. Instead, customers must deal with multiple organizations and offices. To the customers, this adds time, causes delays, and is inconvenient. Moreover, the fragmentation adds more people to the organizational structure than needed. First, for domestic CONUS shipments, MTMC is responsible for providing information to customers requiring routing advice. Generally, this information is provided to installation transportation offices and others, through the CONUS Freight Management system. This system is intended to be a comprehensive freight management information system to standardize and automate freight traffic management by providing the capability to perform cost evaluations, select the best value carrier, and perform prepayment audits of government bills of lading. Second, for international shipments, MTMC provides the routing in consultation with MSC that has negotiated the movement contract. The sealift cargo routing/booking and contract administration functions were performed by MSC, but in October 1981, following the Harbridge House, Inc., study, these functions were transferred to MTMC. MSC still retains oversight over contractual provisions requiring certain minimum allocation of cargo to other than the low-cost carriers on certain ocean liner trade routes and over the statutory regulations governing the use or nonuse of U.S.-flag ocean carriers. Third, for personal property shipments, MTMC has routing systems separate from its cargo routing systems to route shipments within CONUS and to and from overseas locations. AMC Aerial Port Operations also handles airlift shipment routing/carrier selection. In the first quarter of fiscal year 1995, there were 89 positions in these offices. 
The offices included four sections: the Cargo Management Section, the Passenger and Traffic Management Section, the Passenger Reservation Section, and the Air Transportation Traffic Negotiations Section. The Cargo Management Section develops and implements policies and procedures relative to the movement of cargo and mail to and from AMC bases and other DOD activities, overseas and domestic, by AMC organic and commercial contract aircraft. The Passenger and Traffic Management Section directs and controls the AMC worldwide traffic management program and passenger service system. The Passenger Reservation Section develops, disseminates, and implements policy and procedural guidance for establishment and operation of the worldwide AMC passenger reservation system. The Air Transportation Traffic Negotiations Section serves as the command point of contact with the commercial carriers to move DOD passengers and specified cargo. Cargo documentation is a process long noted for its fragmentation. Depending on the type of move and the component command managing it, different types of documents are used. Multiple documents may be necessary to move a single shipment, which is often confusing to customers and commercial carriers. Because DOD does not use one standard system, or the documentation standards of the private sector, customers’ costs are increased. Moreover, carriers often have to set up separate systems, different from those used for their commercial business, just to service DOD. For domestic CONUS shipments, the documentation system is managed by MTMC using government bills of lading. In fiscal year 1994, DOD moved more than 1.2 million shipments, at a cost of nearly $600 million, under the government bill of lading system. The documentation system used for international shipments is a combination of Military Standard Transportation and Movement Procedures and the government bill of lading system.
Most international cargo shipments move under the Military Standard Transportation and Movement Procedures system that uses DOD-unique documents, such as the Transportation Control and Movement Documents and ocean cargo manifests. Because this system is unique to DOD, commercial carriers must set up a system only for DOD if they want its business. A portion of the international cargo program, shipments moving to and from Hawaii, Guam, and Puerto Rico, and shipments by foreign-flag carriers, uses the government bill of lading system. In fiscal year 1994, DOD moved 7.5 million measurement tons of freight, at a cost of $735 million, using the Military Standard Transportation and Movement Procedures system, and about 0.9 million measurement tons of cargo, at a cost of $106 million, using the government bill of lading system. For personal property shipments, both domestic and international, DOD uses the government bill of lading system. The documentation system, however, is separate from the cargo documentation system. In fiscal year 1994, DOD spent $540 million for international personal property shipments. DOD has no single customer billing policy, procedure, or system for defense transportation. Customers receive a bill from each component command for each mode of transportation, rather than a single intermodal bill from only one component. Consequently, when a noncontainerized freight shipment moves from some interior point in the United States to an interior point overseas, a customer pays for the services USTRANSCOM has provided in five parts. For example, for one shipment, a customer may have one charge related to shipping cargo to the port of embarkation, a second charge for MTMC’s port handling, a third charge for MSC’s ocean service, a fourth charge for MTMC’s custom clearance and receipt of cargo overseas, and a fifth charge for line-haul transportation overseas to final point of destination. 
Separate billing systems are inefficient, adding people and cost, and confusing to customers, who pay for the inefficiencies. The billing process for different shipments is as follows.
• For domestic CONUS shipments and shipments destined overseas but not part of a through-container move, there is no reimbursement billing per se, because the Defense Finance and Accounting Service pays the carriers’ charges citing the customers’ own appropriations. MTMC’s administrative expenses related to these shipments are paid to MTMC in lump sum, not shipment-by-shipment.
• For international shipments, customers reimburse MTMC and MSC for the services. Customers have to pay MTMC twice, once for the booking and documentation service at origin and again for clearing customs overseas and managing the shipments through to final point of destination. Customers also pay MSC for the ocean and related drayage or inland line-haul services.
• For the overseas portion of an international shipment not part of a through-container move, there is no reimbursement billing per se, because the local theater finance office pays the carriers’ charges citing the customers’ own appropriations.
If the movement of cargo or passengers requires AMC organic or arranged commercial airlift capability, customers reimburse AMC for the services. Another major factor driving higher costs is the organizational structure. The February 1995 USTRANSCOM DBOF budget justifications submitted to the Congress show USTRANSCOM’s costs for fiscal year 1994 as $5.614 billion. Table 3.1 shows a breakdown by component command.
The figures shown represent all costs: contracted transportation and port handling/terminal services; expenses for salaries and wages, travel, supplies, and equipment; contracted services, such as data processing; payments to other federal agencies, such as the Defense Finance and Accounting Service; maintenance to facilities; depreciation for capital assets; expenses for the headquarters of USTRANSCOM; general and administrative expenses; and overhead. In fiscal year 1994, USTRANSCOM spent about $1.2 billion for salaries and wages of civilian and military personnel. Because there are three component commands, there are many instances of staff performing work in the same functional area. We focused our work on MTMC and MSC as examples because they basically have similar organizational structures. However, as noted earlier in this chapter, AMC also has staff performing transportation functions in areas similar to MTMC and MSC, such as shipment routing and billing. The organizational charts for MTMC and MSC, with numbers of staff authorized, are shown in appendix II. In summary, MTMC has
• 1 headquarters office;
• 1 field operating activity office;
• 3 subordinate command headquarters offices;
• 2 subordinate command, subcommand headquarters offices;
• 4 major port command offices;
• 14 medium port command offices;
• 6 port detachments;
• 1 river terminal;
• 1 outport;
• 4 ocean cargo clearance authority offices;
• 5 ocean cargo booking offices;
• 1 overseas inland theater transportation directorate;
• 2 privately owned vehicle processing centers;
• 2 regional storage management offices; and
• 2 Army garrisons.
It has an authorized staff of 3,511, including 329 military personnel and 3,182 civilians.
MSC, for its strategic sealift, or DBOF-T, mission, has • 1 headquarters office; • 1 central technical activity office; • 4 subordinate command headquarters offices; • 3 subordinate command, subarea offices; • 8 MSC port offices; • 3 MSC detachment offices; • 1 subordinate command representative office; and • 4 MSC unit or Fast Sealift Squadron offices. It has an authorized staff (DBOF-T only) of 362, including 69 military personnel and 293 civilians. Many MTMC and MSC offices are located at the same site or in close proximity to each other. Of the 25 MSC offices related to its DBOF-T mission around the world, 24 are collocated, or in close proximity to MTMC offices. Some of these offices are shown in table 3.2. Many administrative activities are duplicated between MTMC and MSC. Each command has its own headquarters command, subordinate commands, field operating agencies, and field offices with their own administrative functions. Within these units, personnel are responsible for the same or similar administrative functions. For example, each command has staff assigned to carry out public affairs, internal review, and equal employment opportunity matters. Each also has units responsible for legal matters, resource management/comptroller, information management/computer services, and plans. Resource management and comptroller personnel are responsible for developing and implementing policies, programs, and standards for using manpower and for controlling the allocation and prioritization of manpower resources. They also are responsible for (1) managing the budgetary operations of the command, including preparing and executing the budget and developing and defending the command’s DBOF and, where applicable, appropriated fund budgets and (2) developing and publishing ocean terminal port handling or ocean service billing rates, after obtaining USTRANSCOM and Office of the Secretary of Defense approval. 
MTMC has nearly 200 personnel in the resource management and comptroller areas. MSC has 75 positions authorized for its DBOF-T mission for carrying out resource management and/or comptroller functions. Information management and computer service personnel are responsible for communications, automation, audio-visual, publications, and records management, including the development, testing, and fielding of systems that automate transportation functionality for the movement of deploying units and freight. MTMC has over 400 personnel in the information management and computer areas. MSC has 15 positions authorized for its DBOF-T mission for carrying out information management and computer service functions. Plans personnel are responsible for the transportation planning necessary to support the component commands’ missions related to strategic mobility and contingency readiness. MTMC has about 100 personnel involved in this area. MSC has about 25 positions authorized for its DBOF-T mission for carrying out transportation planning functions. MTMC maintains an extensive worldwide port structure to service DOD cargo that moves almost entirely through commercial channels. It operates 26 port and terminal facilities around the world, with more than 1,200 staff, with a support cost, based on fiscal year 1994 data, exceeding $70 million (not including contract stevedore costs). About 3 decades ago, all transportation moved modally, meaning that transportation companies typically handled only a single mode of transportation. Trucking firms or railroads handled land transportation, steamship companies handled the ocean transportation, and air cargo companies handled air movements. Today, a single transportation company will pick up materials at the point of origin, truck them to a seaport, ship them across the ocean, and truck them to the point of destination, all as a single intermodal move. 
Modal transportation required large numbers of personnel at points where cargo was transferred from one mode of transportation to another. For surface transportation (land and sea), intermodal transportation became possible when standardized containers could be transferred between modes without unpacking at transfer points. When the transportation industry began moving cargo intermodally, it required fewer personnel to transfer cargo between modes. Today, the majority of cargo shipped by land and sea is moved intermodally in standardized containers. When cargo moves intermodally, containers are packed at the point of origin, moved by truck to the port of embarkation, loaded on a ship, unloaded at the port of debarkation, moved by truck to the point of destination, and unpacked. With intermodal movements on land and sea, fewer personnel are needed at the ports for warehousing, packaging, and loading than were required for modal movements. For example, according to transportation studies, a container port requires about 85 percent less labor than a noncontainerized (breakbulk) port. The loading and unloading of containers on ships, rail cars, and trucks are now achieved with large cranes that require much less manpower than what was required to pack and unpack crates for shipment. MTMC still maintains a heavily staffed worldwide port infrastructure. The work performed at the ports has changed from cargo handling activities to various traffic management activities. The principal missions of MTMC units at ports are to accomplish the expeditious movement and documentation of DOD-sponsored cargo and privately owned vehicles through the military and commercial terminals and piers in the command’s or unit’s area of responsibility and, as assigned, cargo booking functions. Generally, these units are organized substantially the same as they were more than a decade ago and reflect an era prior to containerization. 
Each has an office of the commander, an administration division, and a combination of divisions for cargo operations, cargo documentation, and traffic management. This is little different from December 1979 when MTMC began setting up its terminals in a standardized organization of no more than four divisions to provide a more streamlined, better understood structure while still preserving sufficient latitude to provide flexibility to meet local conditions. Staff are dispersed as follows: • 461 located in U.S. East Coast facilities, • 88 located in U.S. Gulf Coast facilities, • 176 located in U.S. West Coast facilities, • 42 located in Caribbean/Central America facilities, • 244 located in European facilities, and • 282 located in Far East facilities. The facilities include 4 major port commands, 14 medium port commands, 6 port detachments, 1 outport, and 1 river terminal. The 1995 Defense Base Closure and Realignment Commission justified a recommendation to close two MTMC terminal facilities—Military Ocean Terminal, Bayonne, New Jersey; and the Oakland Army Base, California—because the normal workload at these terminals did not justify continued military operation of the facilities and commercial ports could handle military cargo requirements. The Commission stated: “The Bayonne and Oakland terminals have been outmoded by transportation distribution technology and are increasingly underutilized. The advent of containerization has had a tremendous impact on DOD and commercial cargo transportation, with many commercial facilities converting to or adding container handling equipment. In 1970, MTMC elected to move DOD container cargo through commercial container facilities on the east and west coasts, rather than install duplicate facilities at the Bayonne and Oakland terminals. The commercial facilities can meet DOD contingency and support requirements.” Table 3.3 shows the current number of MTMC port and terminal staff by unit and location. 
MSC, as part of its DBOF-T mission, also maintains personnel at ports around the world. It has 14 port-related offices with 50 positions authorized for its DBOF-T missions. Costs for these offices are several million dollars annually. (See table 3.4 for location of the positions.) Most of these offices are maintained primarily for Navy fleet-related missions that are funded directly by the Navy. The DBOF-T missions are secondary and include exercising local operational control of MSC-controlled ships in port and maintaining liaison with service, local government, and commercial activities concerned with MSC activities. Another factor driving costs higher is the need to maintain a transportation mobilization capability. Although DOD policy mandates direct appropriation funding for maintaining capability to expeditiously respond to mobilization conditions and the services do use direct appropriations to fund certain AMC and MTMC mobilization costs, other mobilization costs are passed to customers. As discussed earlier, MTMC operates an extensive port structure, supported by more than 1,200 staff and costing over $70 million for salaries and wages alone in fiscal year 1994. While this structure may be needed to provide a mobilization capability, it may not be necessary to move cargo during peacetime. These ports are largely unused during peacetime because cargo moves by commercial carriers through commercial ports, although many of the personnel are actively engaged in documenting shipments and other management areas. Additionally, MSC, for some high-volume shipping routes, uses other than the low-cost carrier to maintain a mobilization capability. The costs of MTMC’s port structure and MSC’s use of other than low-cost carriers are paid by the customers. “4. United States Transportation Command (USTRANSCOM). 
Because a capability must be maintained by the USTRANSCOM DBOF Transportation business area to expeditiously respond to requirements to transport personnel, material, or other elements required to satisfy a mobilization condition, direct appropriation funding will be provided for: a. Air Mobility Command (AMC). Airlift flying hours and associated costs are based on the requirement to maintain the capability of the airlift system, including crew training (and concurrent mobilization) requirement. The airlift system training generated capacity is used by DOD to move air eligible cargo and passengers. In order to extend air eligibility and increase capacity utilization, rates are generally established to be competitive with commercial carriers. However, resulting contributed revenue does not cover the costs of operations due to the mobilization requirement. This requirement will be recorded/budgeted as follows: (1) . . . Military personnel within the Air Mobility Command will be direct funded by a Military Personnel appropriation. Although the cost shall be recorded as a DBOF cost, it shall be recorded so that it is not required to be recovered in customer rates. (2) The balance of the mobilization requirement costs will be funded through a direct appropriation to the Air Force and will be placed as an order with the DBOF. This will assure that revenue is reflected to offset the costs.” Accordingly, the Air Force uses appropriated funds to reimburse the DBOF-T account an amount that it estimates will cover the difference between a calculated competitive commercial rate total and the total costs AMC incurs in providing airlift. As a result, the amount reimbursed, which is considered an Air Force readiness cost, is not passed on to defense transportation system customers. In fiscal year 1994, the Air Force reimbursed the DBOF-T account about $1.5 billion. “b. Military Traffic Management Command (MTMC). 
The MTMC shall plan for and maintain a Reserve Industrial Capacity (RIC) to transport personnel resources, material and other elements required to satisfy a mobilization requirement. The costs of RIC will be funded by Army Operation and Maintenance.” Accordingly, the Army directly funded about $52 million for readiness in fiscal year 1994, through the Reserve Industrial Capacity budget line item. However, the Army did not clearly show what this funding was used for. No specific guidance exists for Navy support of MSC. Yet, MSC charges customers through its billing rates for what amounts to mobilization costs. MSC contractually agrees to book some cargo to other than the low-cost ocean carriers. It does this, in part, to maintain a sufficient number of ships in the maritime mobilization base to meet the continuing requirement to augment emergency sealift capacity. The additional costs for using other than the low-cost carriers are paid for by the customers. Three factors drive USTRANSCOM defense transportation costs higher: process fragmentation, organizational redundancy, and mobilization requirements. In each of these areas, there are opportunities to improve effectiveness and efficiency. As discussed in chapter 4, DOD and USTRANSCOM are reengineering fragmented transportation business processes, but they are delaying organizational structure change. Recommendations relative to improvements in areas discussed in this chapter are addressed in chapter 4 in the context of our overall recommendations regarding reengineering the entire defense transportation process. DOD partially concurred with our findings. DOD acknowledged the impact of defense transportation business processes and readiness/mobilization costs on the charges DOD customers pay. It also agreed that MSC often uses other than the low-cost carrier to meet its customers’ needs. 
DOD said that this practice serves to maintain a mobilization capacity and ensure retention of more carriers, thereby fostering competition among the carriers and resulting in lower costs to its customers. DOD stated that the fragmented business processes and infrastructure will be reviewed as part of its planned reengineering effort. DOD further stated that as the processes are reengineered and the infrastructure assessed, a joint, global, seamless, intermodal transportation system will emerge that emphasizes origin to destination movement and visibility, supports customer requirements, and is an integral part of the entire logistics process. DOD also stated that another objective of the reengineering effort is to separate the readiness/mobilization costs of providing peacetime transportation so that customers will pay for peacetime costs only. We agree with the stated goals of the reengineering effort and discuss it further at the end of chapter 4. Various studies, commissions, and task forces dating back as far as 1949 have recommended changes in the defense transportation system organizational structure. Both USTRANSCOM and DOD have also recognized the need for fundamental changes in defense transportation processes and structures. However, over time, recommendations to change the structure have not been implemented because several key players were reluctant to allow change. Even after its designation as the single manager of defense transportation, USTRANSCOM retained the same component command structure that existed prior to its establishment. As recently as May 1995, DOD initiated a task force to reengineer the defense transportation processes, but that task force’s plan does not involve a review of organizational structure until after DOD completes all other defense transportation reengineering efforts. 
By delaying structural change, DOD runs the risk of superimposing reengineered processes on a fragmented, inefficient, and costly component command organizational structure. Given the long-standing reluctance to change, it is unlikely that the component commands would adopt any new processes that would necessitate changes to that structure. It is essential that DOD consider organizational structure as an integral part of its reengineering efforts if it is to achieve the optimum results. Over the years, studies have recommended unifying traffic management in one organization to improve defense transportation and reduce costs. However, these recommendations were not implemented because of opposition from component commands, services, the Joint Chiefs of Staff, or the Congress. (App. III provides a history of attempts to realign defense transportation.) In 1992, the Commander-in-Chief, USTRANSCOM, stated before the House Committee on Appropriations, Subcommittee on Defense, that moving cargo in peacetime the same way it is moved during a contingency would simplify the process. It would require no change in procedures to “gear up” for a deployment, just an increase in the level of operations. He added that the single manager assignments of the component commands—MTMC, MSC, and AMC—would be integrated into USTRANSCOM, making USTRANSCOM the single manager for all defense transportation. Operations Desert Shield/Desert Storm deployment experience highlighted the need for centralized transportation management as the most effective and flexible way to manage and coordinate air, sea, and land movements, while retaining the ability to react quickly to changing priorities and efficiently schedule and employ transportation resources. Studies dating back to 1949 also concluded that an integrated transportation system was a critical element of an efficient and effective transportation system. 
In 1986, a Blue Ribbon Commission on Defense Management (the Packard Commission) recommended establishing a single unified command to integrate global air, land, and sea transportation. This recommendation was acted upon with passage of the Goldwater-Nichols DOD Reorganization Act of 1986, which ordered the Secretary of Defense to consider creation of a unified transportation command, to include MTMC, MSC, and AMC. In 1987, the Secretary of Defense established the unified transportation command—USTRANSCOM. However, USTRANSCOM retained the same component command structure that existed prior to its establishment. In 1994, a USTRANSCOM study, Reengineering the Defense Transportation System, The “Ought to Be” Defense Transportation System for the Year 2010, concluded that more can and must be done to better integrate traffic management and to provide more effective support, at lower cost, both in peace and war. The study found that the defense transportation system continued to be replete with redundant organizational structure and inefficient and costly processes. As a result, USTRANSCOM and the Office of Secretary of Defense are taking steps to reengineer the defense transportation system. These efforts have concluded that a fundamental restructuring of business practices and organizational structure is needed for the defense transportation system to keep pace in a volatile and resource-constrained operating environment. Both efforts include actions to improve and consolidate fragmented processes such as procurement and financial management. However, both efforts postpone any actions related to organizational structure issues until after process changes are completed. By delaying organizational structure change, DOD runs the risk of superimposing reengineered processes on a fragmented, inefficient, and costly component command organizational structure. 
In response to a 1988 DOD Inspector General report recommendation to eliminate transportation component command headquarters and to transfer all defense transportation functions to USTRANSCOM, the command cited three reasons for not implementing the recommendation. The reasons cited were (1) by law, the services have the authority to train, equip, and manage their assigned forces; (2) addition of the peacetime mission to USTRANSCOM would detract from its primarily wartime mission; and (3) removal of the services and their departments from the resource allocation process would significantly complicate programming and budgeting. These reasons for not reorganizing are not valid today. First, although the services have the statutory responsibility to, among other things, train, equip, and manage their assigned forces, the Secretary of Defense is authorized under 10 U.S.C. 125(a) to transfer, reassign, consolidate or abolish any function, duty or power not vested by law in an official of DOD in order to provide more effective, efficient, and economical operation of DOD. We are not aware of any provision of law that would preclude the Secretary from exercising this authority to abolish the transportation component command headquarters. In addition, realigning defense transportation activities under USTRANSCOM would be consistent with USTRANSCOM’s current mission. At the time of its activation, USTRANSCOM was the single manager for defense transportation during war. The service secretaries retained their single manager charters over peacetime transportation functions. However, Desert Storm highlighted the disadvantages of fragmentation between wartime and peacetime transportation activities. Therefore, in 1992, DOD made USTRANSCOM the single manager for defense transportation in both peace and war. 
Finally, since USTRANSCOM is the DOD financial manager for all defense transportation through the DBOF, realigning defense transportation under USTRANSCOM would create a more efficient resource allocation process. Currently, each component command develops its own DBOF-T budget submission. USTRANSCOM consolidates the separate budget submissions to create a single DBOF-T budget submission. If defense transportation activities were aligned under USTRANSCOM, there would be no need for each component to develop a separate DBOF-T budget submission. The ongoing DOD and USTRANSCOM efforts to reengineer fragmented transportation processes are a step in the right direction. However, these efforts continue to delay organizational structure changes. Even these current reengineering efforts run a significant risk of reengineering processes to operate a fragmented and costly defense transportation organization. In order for any defense transportation reengineering effort to achieve the maximum improvement in processes and reduction in costs possible, it must include as an integral part changes to organizational structure. We recommend that the Secretary ensure that the defense transportation reengineering efforts simultaneously address process and organizational structure improvements. Specifically, the reengineering efforts should confront, at a minimum, • the need for separate traffic management component command headquarters, • the consolidation of separate field subordinate command traffic management, and • the elimination of all remaining duplicative field-based subordinate command support staff. We also recommend that the Secretary clarify which USTRANSCOM mobilization costs should be passed along to its customers. The amounts and purpose of any such costs should be contained in transportation component annual financial statements and in the budget justification statements submitted annually to the Congress. DOD generally concurred with our findings and recommendations. 
It indicated that it has already begun addressing our concerns and pursuing the objectives of our recommendation related to business processes and organizational improvements through its Reengineering Transportation Action Plan, established at the direction of the Deputy Secretary of Defense by memorandum of May 3, 1995. Under the plan, prepared on June 30, 1995, DOD is establishing Integrated Product Process Teams, comprised of representatives from the Military Services, Joint Staff, Defense Logistics Agency, Under Secretary of Defense (Comptroller), Under Secretary of Defense (Acquisition and Technology), Defense Finance and Accounting Service, DOD Inspector General, and USTRANSCOM. These teams are charged with developing a transportation vision, reengineering transportation processes, reengineering transportation financial management processes, and assessing the infrastructure required to support the proposed reengineered processes. The first initiative, developing a transportation vision, was completed on October 25, 1995. DOD said that the organizational structure will be assessed in concert with reengineering the business processes and the handling of readiness/mobilization costs will be reviewed by the task force. If the Reengineering Transportation Action Plan is carried out as described and it results in a consolidated, global, seamless, intermodal transportation system that eliminates and reduces infrastructure, thereby lowering overall system costs and charges to DOD customers, it is responsive to our concerns. As we noted earlier, however, many other DOD efforts have had similar goals but the recommended changes to the defense transportation organization were never implemented because key defense transportation interests were reluctant to allow them to occur. In the near future, we will be reviewing the results of the current reengineering initiatives to see whether DOD is successful in implementing necessary changes this time.
Pursuant to a congressional request, GAO reviewed whether the Department of Defense (DOD) is providing cost-effective and efficient transportation operations, focusing on: (1) the factors that increase DOD transportation costs; and (2) DOD efforts to reengineer its transportation operations. GAO found that: (1) defense transportation costs are substantially higher than necessary; (2) DOD customers frequently pay prices for transportation services that are double or triple the cost of the basic transportation; (3) key factors driving these costs are the U.S. Transportation Command's (USTRANSCOM) fragmented and inefficient organizational structure and management processes, and the need to maintain a mobilization capability; (4) much of defense cargo today moves intermodally, by air, land, and sea transport, but USTRANSCOM retains an outdated and inefficient, modally oriented, organizational structure, with many collocated facilities; (5) each separate component command incurs operational and support costs, and customers receive bills from each component command for each mode of transportation, rather than a single intermodal bill from only one component; (6) separate billing systems are inefficient, adding people and costs, and confusing to customers who pay for the inefficiencies; (7) USTRANSCOM maintains an extensive water port structure, employing more than 1,200 people, at a cost in fiscal year 1994 of over $70 million; (8) the ports are largely unused during peacetime because most cargo moves commercially, but the port facilities do provide capacity that may be needed for a wartime surge; (9) DOD's guidance for handling the cost of maintaining a mobilization capability does not cover all situations in which USTRANSCOM components charge their customers for costs that appear to be for mobilization requirements; (10) while DOD has recently begun reengineering the defense transportation system to improve its processes and reduce costs, it is not concurrently looking 
at how the organizational structure should be redesigned; (11) DOD will address organizational structure only after the process changes have been completed; and (12) GAO's work shows that the inefficiency of the organizational structure has been a long-standing issue in addressing the effectiveness of defense transportation, and waiting to address this issue until process improvements are made will likely represent a significant barrier to achieving the full benefits of the reengineering efforts.
The data that we are reporting today provide a demographic snapshot of the career SES as well as the levels that serve as the SES developmental pool for October 2000 and September 2007. Table 1 shows the number of career SES as well as those in the developmental pool, including the percentages of women and minorities. For more information on demographic data governmentwide, see appendix I. Table 2 shows a further breakdown of the number of SES members, including the percentages of women and minorities, by Chief Financial Officers (CFO) Act agency. For more information on demographic data by CFO Act agency, see appendix I. As we reported in 2003, the gender, racial, and ethnic profiles of the career SES at the 24 CFO Act agencies varied significantly in October 2000. The representation of women ranged from 13.7 percent to 41.7 percent, with half of the agencies having 27 percent or fewer women. For minority representation, rates varied even more and ranged from 3.1 percent to 35.6 percent, with half of the agencies having less than 15 percent minorities in the SES. In 2007, the representation of women and minorities, both overall and for most individual agencies, was higher than it was in October 2000. The representation of women ranged from 19.9 percent to 45.5 percent, with more than half of the agencies having 30 percent or more women. For minority representation, rates ranged from 6.1 percent to 43.8 percent, with more than half of the agencies having over 16 percent minority representation, and more than 90 percent of the agencies having more than 13 percent minority representation in the SES. For this testimony, we did not analyze the factors that contributed to the changes from October 2000 through September 2007 in representation. OPM and the Equal Employment Opportunity Commission (EEOC), in their oversight roles, require federal agencies to analyze their workforces, and both agencies also report on governmentwide representation levels. 
Under OPM’s regulations implementing the Federal Equal Opportunity Recruitment Program (FEORP), agencies are required to determine where representation levels for covered groups are lower than the civilian labor force and take steps to address those differences. Agencies are also required to submit annual FEORP reports to OPM in the form prescribed by OPM. EEOC’s Management Directive 715 (MD-715) provides guidance and standards to federal agencies for establishing and maintaining effective equal employment opportunity programs, including a framework for executive branch agencies to help ensure effective management, accountability, and self-analysis to determine whether barriers to equal employment opportunity exist and to identify and develop strategies to mitigate or eliminate the barriers to participation. Specifically, EEOC’s MD-715 states that agency personnel programs and policies should be evaluated regularly to ascertain whether such programs have any barriers that tend to limit or restrict equitable opportunities for open competition in the workplace. The initial step is for agencies to analyze their workforce data with designated benchmarks, including the civilian labor force. If analysis of their workforce profiles identifies potential barriers, agencies are to examine all related policies, procedures, and practices to determine whether an actual barrier exists. EEOC requires agencies to report the results of their analyses annually. In our 2003 report, we (1) reviewed actual appointment trends from fiscal years 1995 to 2000 and actual separation experience from fiscal years 1996 to 2000; (2) estimated by race, ethnicity, and gender the number of career SES who would leave government service from October 2000 through October 2007; and (3) projected what the profile of the SES would be if appointment and separation trends did not change. 
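A projection of the kind described above, which holds appointment and separation trends constant and rolls the workforce forward one year at a time, can be sketched in a few lines of code. This is a minimal illustration and not GAO's actual model; all group labels, head counts, separation rates, and appointment shares below are invented for the example.

```python
# Minimal cohort projection sketch (hypothetical numbers throughout):
# each year, remove separations from every group, then distribute a
# fixed number of new appointments according to fixed group shares.

def project(counts, sep_rates, annual_appointments, appt_shares, years):
    """Return projected head counts after the given number of years."""
    counts = dict(counts)
    for _ in range(years):
        for group in counts:
            counts[group] -= counts[group] * sep_rates[group]          # separations leave
            counts[group] += annual_appointments * appt_shares[group]  # appointees join
    return counts

# Hypothetical starting profile, separation rates, and appointment shares.
start = {"white men": 3500.0, "white women": 1100.0, "minorities": 800.0}
sep_rates = {"white men": 0.12, "white women": 0.10, "minorities": 0.09}
appt_shares = {"white men": 0.55, "white women": 0.25, "minorities": 0.20}

end = project(start, sep_rates, annual_appointments=500,
              appt_shares=appt_shares, years=7)
total = sum(end.values())
shares = {g: round(100 * n / total, 1) for g, n in end.items()}
print(shares)
```

Because the hypothetical minority appointment share (20 percent) exceeds that group's starting share of the workforce (about 14.8 percent), the projected minority share rises over the seven years while the white-male share falls, mirroring the qualitative pattern of the projections discussed above.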
We estimated that more than half of the career SES members employed in October 2000 will have left service by October 2007. Assuming then-current career SES appointment trends, we projected that (1) the only significant changes in diversity would be an increase in the number of white women with an essentially equal decrease in white men and (2) the proportions of minority women and men would remain virtually unchanged in the SES corps, although we projected slight increases among most racial and ethnic minorities. Table 3 shows SES representation as of October 2000, our 2003 projections of what representation would be at the end of fiscal year 2007, and actual fiscal year 2007 data. We projected increases in representation among both minorities and women. Fiscal year 2007 data show that increases did take place among those groups and that those increases generally exceed the increases we projected. The only decrease among minorities occurred in African American men, whose representation declined from 5.5 percent in 2000 to 5.0 percent at the end of fiscal year 2007. For more information on our projections, see appendix II. Table 4 shows developmental pool representation as of October 2000, our 2003 projections of what representation would be at the end of fiscal year 2007, and actual fiscal year 2007 data. We projected increases in representation among both minorities and women. Fiscal year 2007 data show that increases did generally take place among those groups. For more information on our projections, see appendix II. As stated earlier, we have not analyzed the factors contributing to changes in representation; therefore care must be taken when comparing changes in demographic data since fiscal year 2000 to the projections we made in 2003, as we do in tables 3 and 4. 
For example, we have not determined whether estimated retirement trends materialized or appointment and separation trends used in our projections continued and the impact these factors may have had on the diversity of the SES and its developmental pool. Considering retirement eligibility and actual retirement rates of the SES is important because individuals normally do not enter the SES until well into their careers; thus SES retirement eligibility is much higher than for the workforce in general. As we have said before, as part of a strategic human capital planning approach, agencies need to develop long-term strategies for acquiring, developing, motivating, and retaining staff. An agency’s human capital plan should address the demographic trends that the agency faces with its workforce, especially retirements. In 2006, OPM reported that approximately 60 percent of the executive branch’s 1.6 million white-collar employees and 90 percent of about 6,000 federal executives will be eligible for retirement over the next 10 years. If a significant number of SES members were to retire, it could result in a loss of leadership continuity, institutional knowledge, and expertise among the SES corps, with the degree of loss varying among agencies and occupations. This has important implications for government management and emphasizes the need for good succession planning for this leadership group. Rather than simply recreating the existing organization, effective succession planning and management, linked to the strategic human capital plan, can help an organization become what it needs to be. Leading organizations go beyond a “replacement” approach that focuses on identifying particular individuals as possible successors for specific top-ranking positions. 
Rather, they typically engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future capacity, anticipating the need for leaders and other key employees with the necessary competencies to successfully meet the complex challenges of the 21st century. Succession planning also is tied to the federal government’s opportunity to affect the diversity of the executive corps through new appointments. In September 2003, we reported that agencies in other countries use succession planning and management to achieve a more diverse workforce, maintain their leadership capacity, and increase the retention of high-potential staff. Racial, ethnic, and gender diversity in the SES is an important component of the effective operation of the government. As we have testified before the House Subcommittee on Federal Workforce, Postal Service, and the District of Columbia, Committee on Oversight and Government Reform, the Postal Service expects nearly half of its executives to retire within the next 5 years, which has important implications and underscores the need for effective succession planning. This presents the Postal Service with substantial challenges for ensuring an able cadre of postal executives and also presents opportunities for the Postal Service to affect the composition of the PCES. Table 5 updates information we provided last year for the PCES and EAS levels 22 and above, from September 1999 to September 2007, showing increases in the representation of women and minorities. Since last year’s testimony, we have studied the pools of potential successors that the Postal Service can draw from in selecting PCES promotions. The Service’s policy encourages selecting employees from the CSP program when promoting employees to the PCES. 
The current CSP program—which first accepted participants in 2004—is intended to identify pools of potential successors for PCES positions and develop these employees so that they can promptly and successfully assume PCES positions as these positions become available. Nearly 87 percent of postal employees promoted to the PCES in fiscal years 2004 through 2007 were participating in the CSP program, and nearly 7 in 10 promotions were drawn from CSP program participants in EAS levels 25 and above. Table 6 shows increases in the representation of women and minorities in the CSP program from September 2004 to September 2007 among program participants at EAS level 25 and above. We also have not analyzed factors that contributed to changes in the representation levels in the PCES, EAS, or CSP program. The Postal Service, like executive branch agencies, has responsibility for analyzing its workforce to determine (1) where representation levels for covered groups are lower than in the civilian labor force, taking steps to address those differences, and (2) whether barriers to equal employment opportunity exist, identifying and developing strategies to mitigate or eliminate the barriers to participation. The Postal Accountability and Enhancement Act, enacted in 2006, expressed Congress’s interest in diversity in the Postal Service. It required the Postal Service Board of Governors to report on the representation of women and minorities in supervisory and management positions, which is a different focus from this statement on the PCES, EAS, and CSP program. This Board of Governors’ report provided trend data for supervisory and management positions for fiscal years 2004 through 2007, as well as for the career workforce as a whole. 
In this regard, the report highlighted data for all career employees in the Service’s workforce, noting that from fiscal years 2004 through 2007 the percentage of women increased from 38.3 percent to 39.7 percent, while the percentage of minorities increased from 36.8 percent to 38.3 percent over the same period. Executive branch agencies have processes for selecting members into the SES and developmental programs that are designed to create pools of candidates for senior positions. The Postal Service also has processes for selecting PCES members and participants in its CSP program from which potential successors to the PCES could come. OPM regulations require federal executive agencies to follow competitive merit staffing requirements for initial career appointments to the SES or for appointment to formal SES candidate development programs, which are competitive programs designed to create pools of candidates for SES positions. Each agency head is to appoint one or more Executive Resources Boards (ERB) to conduct the merit staffing process for initial SES career appointments. ERBs review the executive and technical qualifications of each eligible candidate and make written recommendations to the appointing official concerning the candidates. The appointing official selects from among those candidates identified by the ERB as best qualified and certifies the executive and technical qualifications of those candidates selected. Candidates who are selected must have their executive qualifications certified by an OPM-administered Qualifications Review Board (QRB) before being appointed to the SES. According to OPM, it convenes weekly QRBs to review the applications of candidates for initial career appointment to the SES. QRBs are independent boards of three senior executives that assess the executive qualifications of all new SES candidates. 
Two criteria exist for membership on a QRB: at least two of three members must be career appointees, and each member must be from a different agency. In addition, OPM guidance states that QRB members cannot review candidates from their own agencies. According to an OPM official, an OPM representative acts as administrator, attending each QRB to answer questions, moderate, and offer technical guidance, but the administrator does not vote or influence voting. OPM guidance states that the QRB does not rate, rank, or compare a candidate’s qualifications against those of other candidates. Instead, QRB members judge the overall scope, quality, and depth of a candidate’s executive qualifications within the context of five executive core qualifications—leading change, leading people, results driven, business acumen, and building coalitions—to certify that the candidate’s demonstrated experience meets the executive core qualifications. An OPM official said that, to staff QRBs, OPM sends a quarterly letter to the heads of agencies’ human capital offices seeking volunteers for specific QRBs and encourages agencies to identify women and minority participants. Agencies then inform OPM of scheduled QRB participants, without a stipulation as to the profession of the participants. OPM solicits agencies once a year for an assigned quarter and requests QRB members on a proportional basis. The OPM official said that OPM uses a rotating schedule, so that the same agencies are not contacted each quarter. Although QRBs generally meet on a weekly basis, an OPM official said that QRBs can meet more than once a week, depending on caseload. The official said that because of the recent caseload of recruitment for SES positions, OPM had been convening a second “ad hoc” QRB. According to another OPM official, after QRB certification, candidates are officially approved and can be placed. 
In addition to certification based on demonstrated executive experience and another form of certification based on special or unique qualities, OPM regulations permit the certification of the executive qualifications of graduates of candidate development programs by a QRB and selection for the SES without further competition. OPM regulations state that for agency candidate development programs, agencies must have a written policy describing how their programs will operate and must have OPM approval before conducting them. According to OPM, candidate development programs typically run from 18 to 24 months and are open to GS-15s and GS-14s or employees at equivalent levels from within or outside the federal government. Agencies are to use merit staffing procedures to select participants for their programs, and most program vacancies are announced governmentwide. OPM regulations provide that candidates who compete governmentwide for participation in a candidate development program, successfully complete the program, and obtain QRB certification are eligible for noncompetitive appointment to the SES. OPM guidance states that candidate development program graduates are not guaranteed placement in the SES. Agencies’ ERB chairs must certify that candidates have successfully completed all program activities, and OPM staff and an ad hoc QRB review candidates’ training and development experience to ensure that it provides the basis for certification of executive qualifications. OPM also periodically sponsors a centrally administered federal candidate development program. According to an OPM official, the OPM-sponsored federal candidate development program can be attractive to smaller agencies that may not have their own candidate development program, and OPM administers the federal program for them. According to OPM officials, 12 candidates graduated from the first OPM-sponsored federal candidate development program in September 2006. 
Of those, 8 individuals have been placed, 1 is about to be placed, and 3 are awaiting placement. In January 2008, OPM advertised the second OPM-sponsored federal candidate development program, and selections for the second program are pending. With respect to oversight of and selection into the SES, we note that the Chairmen of the two Subcommittees represented here today introduced legislation in October 2007, which would create a Senior Executive Service Resource Office within OPM to improve policy direction and oversight of, among other things, the structure, management, and diversity of the SES. In addition, this legislation would require agencies to establish SES Evaluation Panels of diverse composition to review the qualifications of candidates. Because the Postal Service has specific statutory authority to establish procedures for appointments and promotions, it does not fall under the jurisdiction of the OPM QRB and its certification activities. Instead, the Postal Service promotes EAS and other employees to the PCES when these employees are selected to fill PCES vacancies. Promotions generally involve EAS employees in levels 25 and above who are CSP program participants and who were identified as potential PCES successors through a nomination and evaluation process (either through self-nomination or nomination by a PCES “sponsor”). As previously noted, the CSP program is intended to identify and develop these employees so that they can promptly and successfully assume PCES positions as these positions become available. The selecting official for a PCES-I position (i.e., the relevant officer) is required to obtain approval for the selection decision from the relevant member of the Service’s Executive Committee. Postal Service policy notes that employees promoted to the PCES should be CSP participants except in rare cases. 
However, participation in the CSP program does not trigger any promotion decision, and any employee can be promoted to the PCES, regardless of whether that person is participating in CSP. Further, there are no requirements for PCES vacancies to be advertised, nor are selecting officials required to interview candidates for such vacancies. According to postal officials, selecting officials use a variety of methods to fill PCES-I vacancies, which may involve interviews and discussion among officers regarding candidates or potential candidates, or which may involve considering employees who have had developmental assignments. Such discussions may happen when the vacancy is in one area of the country and potential candidates are in other areas, or when potential candidates are in CSP program position pools outside the jurisdiction of the selecting official. The Postal Service has implemented a structured process to select nominees to participate in up to 5 of the approximately 400 CSP program position pools. First, the Service conducts a range of preparatory activities for the 2-year CSP program cycle, including a needs assessment for the program, such as determining what PCES positions have been created or eliminated and any CSP position pools where succession planning is shallow. The Service’s Employee Development and Diversity Office, which is responsible for the CSP program, coordinates activities with CSP program liaisons throughout the Service, who provide administrative support and information about the program. Second, the Postal Service receives nominations for each 2-year CSP program cycle, including self-nominations and other nominations from PCES sponsors. Nominees complete applications that include self-assessments against the eight competencies in the Service’s Executive Competency Model. 
PCES sponsors and the relevant PCES-I executives also evaluate each nominee and make recommendations to the CSP program committees to either support or not support each nominee. Third, each of the Service’s 43 officers convenes a CSP program committee of three or more executives to consider nominees for each position pool under each officer’s jurisdiction. Each CSP program committee reviews nominees for pools under its jurisdiction and makes recommendations regarding each nominee. Officers then select participants for their pools, subject to review and approval by the responsible member of the Executive Committee. The Postmaster General and Chief Human Resources Officer also review some selections for “critical” position pools that are so designated by each officer. Fourth, once selected, CSP participants develop an individual development plan (IDP) that outlines planned developmental activities and assignments for the 2-year CSP program cycle. IDPs are reviewed and approved by the CSP program committees and by the relevant executives. Chairman Davis, Chairman Akaka, and Members of the Subcommittees, this concludes our prepared statement. We would be pleased to respond to any questions that you may have. For further information regarding this statement, please contact Kate Siggerud, Director, Physical Infrastructure Issues, on (202) 512-2834 or at siggerudk@gao.gov; or George Stalcup, Director, Strategic Issues, on (202) 512-6806 or at stalcupg@gao.gov. Individuals making key contributions to this statement included Gerald P. Barnes and Belva Martin, Assistant Directors; Karin Fangman; Kenneth E. John; Kiki Theodoropoulos; and Greg Wilmoth.
A diverse Senior Executive Service (SES), which generally represents the most experienced segment of the federal workforce, can be an organizational strength by bringing a wider variety of perspectives and approaches to policy development and decision making. In January 2003, GAO provided data on the diversity of career SES members as of October 2000 (GAO-03-34). In March 2000, GAO reported similar data for the Postal Career Executive Service (PCES) as of September 1999 (GAO/GGD-00-76). In its 2003 report, GAO also projected what the profile of the SES would be in October 2007 if appointment and separation trends did not change. In response to a request for updated information on diversity in the SES and the senior ranks of the U.S. Postal Service, GAO is providing data on race, ethnicity, and gender obtained from the Office of Personnel Management's (OPM) Central Personnel Data File and the Postal Service for (1) career SES positions as of the end of fiscal year 2007 and the SES developmental pool (i.e., GS-15 and GS-14 positions) as well as a comparison of actual fiscal year 2007 data to projections for fiscal year 2007 that GAO made in its 2003 report, and (2) the PCES, the Executive Administrative Schedule (EAS), and EAS participants in the Corporate Succession Planning (CSP) program. GAO also describes the process that executive agencies and the Postal Service use to select members into their senior ranks. Data in the Central Personnel Data File and provided by the U.S. Postal Service show that as of the end of fiscal year 2007, the overall percentages of women and minorities have increased in the federal career SES and its developmental pool for potential successors since 2000 as well as in the PCES and EAS levels 22 and above, from which PCES potential successors could come, since 1999. 
Actual fiscal year 2007 SES data show that representation increased from October 2000 among minorities and women and that those increases generally exceed the increases we projected in our 2003 report. The only decrease among minorities occurred in African American men, whose fiscal year 2007 actual representation (5.0 percent) was less than the October 2000 baseline (5.5 percent). For the developmental pool (GS-15s and GS-14s), fiscal year 2007 data show that increases also occurred generally among minorities and women since October 2000. Both executive branch agencies and the Postal Service have processes for selecting members into their senior ranks. Executive agencies use Executive Resources Boards to review the executive and technical qualifications of eligible candidates for initial SES career appointments and make recommendations on the best qualified. An OPM-administered board reviews candidates' qualifications before appointment to the SES. The Postal Service does not fall under the jurisdiction of OPM's board for promoting employees to the PCES. Instead, it promotes EAS and other employees to the PCES when they are selected to fill PCES vacancies. Most employees promoted to the PCES have been CSP program participants, consistent with Postal Service policy encouraging this practice. The CSP program is intended to identify and develop employees so that they can promptly and successfully assume PCES positions as these positions become available.
The Immigration Act of 1990 established special immigrant and nonimmigrant categories for religious workers, religious professionals, and ministers. The act authorizes special immigrants to be admitted to the United States as religious workers if, for 2 years prior to admission, they have been members of a religious denomination having a bona fide, nonprofit, religious organization in the United States; they intend to enter the United States to work for the organization at the organization's request in a religious vocation or occupation; and they have been carrying on the religious work continuously for at least 2 years immediately preceding their application for admission. The act established a limit of 5,000 on the number of special immigrant religious workers and religious professionals that can be admitted in any one year. Although the special immigrant provisions for religious workers and religious professionals were to expire on October 1, 1994, they have been amended twice and extended to October 1, 2000. Applying for an immigrant religious worker visa is a two-step process. First, a petition must be filed with INS. A petition is the form the sponsoring individual or organization must file on behalf of an alien to demonstrate that the alien meets the requirements of a specific immigration category. The petition must include supporting documentation showing that the religious worker will be working for a religious organization and how the religious worker will be paid or remunerated. The documentation should also clearly indicate that the religious worker will not be solely dependent on supplemental employment or solicitation of funds for support. INS reviewers examine the petitions and supporting documentation to determine if the alien meets the program requirements. 
If INS approves the petition, the alien files an application for an adjustment of status with INS if he or she is already in the United States or an application for a visa with a State overseas post if he or she is abroad. If the alien does not meet the requirements, INS denies the petition. About 85 percent of those admitted for permanent residence as religious workers in fiscal years 1996 and 1997 were already in the United States. Nonimmigrant religious workers can be admitted under the same conditions as special immigrant religious workers, except that there is no requirement for prior religious work experience, and the maximum period of stay for nonimmigrant religious workers is 5 years. The authorization for admission of nonimmigrant religious workers did not contain sunset restrictions or any limit on the number that can be admitted. To obtain a nonimmigrant visa, the alien files an application, but no petition is required. Documentation required in support of the visa application must establish the arrangements made, if any, for remuneration, including the amount and source of any salary, a description of any other type of remuneration, and a statement indicating whether the remuneration will be in exchange for services rendered. The majority of nonimmigrant religious workers apply for and receive their visas abroad through State's overseas posts. (See app. I for more information on immigrant and nonimmigrant religious worker visa issuance.) Both INS and State have expressed concern about fraud in the religious worker visa program, but they do not have data or analysis to firmly establish the extent of the problem. Their knowledge of program fraud is based on information developed primarily from fraud investigations and through the visa screening process. INS has conducted several fraud investigations since 1994 involving hundreds of applicants. In addition, fraud has been identified through INS' and State's visa screening processes. 
The agencies' reviewers generally deny petitions and visas to unqualified applicants, but according to the agencies' officials, it is difficult to prove willful intent to commit fraud. The types of fraud the agencies have encountered often involved petitioners making false statements about the length of time that the applicant was a member of the religious organization and the nature of the qualifying work experience. Some of the investigations involved religious organizations petitioning for more workers than they can reasonably support. Evidence uncovered by INS suggests that some of these organizations exist solely as a means to carry out immigration fraud. INS and State have uncovered incidents of fraud in the religious worker visa program, but they do not routinely investigate questionable visa petitions and applications or report fraud information by type of visa. State's Bureau of Diplomatic Security, the office responsible for investigating the use of counterfeit U.S. passports and visas, has not conducted any investigations of religious worker visa fraud. State's antifraud units at overseas posts sometimes review suspicious applications to screen out ineligible applicants, but they do not routinely report the results to State's headquarters. Individual cases of suspected fraud are generally not investigated, unless the suspected fraud is part of a larger scheme to systematically circumvent immigration laws. Moreover, INS does not routinely follow up on recipients of employment-based visas, including religious visas, to determine whether they comply with the law. The agencies generally deny questionable visa petitions and applications they receive. Most are not denied for fraud, but for other reasons, such as failing to comply with statutory requirements and regulations, including failure to provide requested documents. 
They give fraud as the reason for the denial when they have sufficient evidence that the applicant or petitioner willfully misrepresented a material fact. An INS workload report on immigrant petitions received, approved, and denied showed that of the approximately 8,400 petitions for religious workers processed in fiscal year 1998, 3 percent were denied for suspected fraud. The reported 3-percent fraud denial rate for religious worker petitions was the third highest fraud denial rate among the 44 different immigrant petition categories listed. The fraud denial rate for most of the other categories was less than 1 percent. State Department statistics on visa denials do not identify denials by type of visa. However, a 1998 State survey of 83 overseas posts identified instances of fraud uncovered during visa processing. At our request, the Fraud Branch at INS' Office of Investigations in Washington, D.C., surveyed fraud units in INS' district and suboffices to identify the number of active and closed fraud investigations involving religious worker visas since 1994. The units identified 54 such investigations involving about 1,700 petitions during the 5-year period. The 54 INS investigations, of which about 40 are closed, ranged from cases involving individual fraud schemes to organized fraud rings. For example, the fraud unit in the Chicago District Office investigated 30 cases involving individuals who failed to meet the 2-year experience requirement. At least five investigations performed by INS since 1994 have involved individuals or organizations filing petitions for hundreds of religious workers. For example, in 1995, INS investigated a pastor who filed 450 immigrant religious worker petitions covering over 900 individuals, falsifying the number of years the aliens had been members of the church. The pastor died of natural causes before an indictment could be returned, and the petitions were denied or allowed to expire. 
INS recently completed an investigation it started in 1994 involving suspects who provided false supporting documents to INS to show that the aliens met the 2-year work experience requirement. This investigation, which involved over 400 petitions, ultimately led to the arrests of six individuals, guilty pleas to charges of conspiracy to commit visa fraud, and additional investigations of several similar schemes. In another recent case, reviewers at INS' Vermont Service Center became suspicious when one organization, which had filed about 100 petitions for immigrant visas the previous 2 years combined, filed over 200 petitions the third year. The reviewers doubted that the organization could support so many full-time workers and referred the case to an INS district office fraud unit where it is currently under investigation. Some investigations were initiated because of suspicious activity identified by State Department consular officials. For example, consular officers at the U.S. embassy in Suva, Fiji, became suspicious of a church that filed petitions on behalf of 30 individuals from Fiji who were in the United States on expired visitors' visas. The information was forwarded to INS for investigation. The investigation revealed that only 1 of the 30 petitions met the requirements for a religious worker visa. The post suspected that this scheme was related to a larger one involving petitions on behalf of Tonga residents to stay in the United States illegally. Also, the U.S. embassy in Bogota uncovered a fraud scheme in which the local church was providing applicants with false documents to demonstrate that the applicants had been members of the church for the required 2-year period. The embassy's antifraud unit discovered that in some cases the applicants had recently joined the church, and in other cases, they had no membership affiliation at all. 
INS and State reviewers stated that they are not confident that the agencies' screening process is identifying all unqualified applicants and sponsoring organizations. They attributed the problem to the lack of sufficient information to determine the eligibility of visa applicants and their sponsors. INS, with State's support, is considering a number of steps to address this problem. INS requires the petitioner to provide evidence that (1) the organization qualifies as a nonprofit organization, (2) the alien meets the qualifications for an immigrant religious worker visa, and (3) the alien will be paid or otherwise remunerated by the religious organization. INS and State reviewers have asserted that sometimes the required supporting evidence, although minimally acceptable, consists of little more than a letter from the sponsoring organization and does not adequately establish an applicant's eligibility as a religious worker or the sponsoring organization's ability to pay the worker. The reviewer can defer a decision on the application or petition pending the receipt of additional information, but such actions take more time. The INS reviewers stated that more specific information about the applicant's training and qualifications and the exact nature of the position to be filled, including the number of petitions previously filed, should be provided up front, similar to other employment-based visa categories. In addition, unlike most other employment-based visas, the applicant can file a petition on his or her own behalf and, although supporting documentation from the sponsoring organization is still required, all of it can be submitted by the applicant. For most other employment-based visa categories, the petition and supporting documentation must be submitted by the potential employer. The reviewers believe INS should require information from independently verifiable sources. The reviewers also stated that the documents should be current. 
They said that sometimes the sponsoring organizations submit copies of their original tax-exemption form, which may no longer be valid. A related issue raised by State's overseas posts concerns the definition of a "religious worker." They believe that the definition of religious worker is too broad, making the religious worker visa program an attractive vehicle for fraud and abuse. According to the survey, posts sometimes struggled with what they considered to be the "marginal" nature of some of the religious positions used by the applicants. A common sentiment was that almost anyone involved with a church could qualify as a religious worker, aside from those in occupations that were not intended to be covered by the 1990 religious worker visa legislation, such as maintenance and cleaning staff. INS is developing a number of initiatives to improve its visa screening process and to detect and deter fraud. Most of these initiatives are focused on requiring petitions to include more comprehensive information to allow reviewers to make better informed decisions. Some of the service centers are using the capabilities of commercial software to enhance their ability to identify patterns and trends that may indicate fraud. State officials said they would support INS' efforts to increase evidentiary standards. Further, State is consulting with the Internal Revenue Service and the Department of Labor to develop more comprehensive information on religious occupations and organizations to help the overseas posts better understand the definition of "religious worker" and "traditional" religious functions. INS is in the process of implementing a proposed regulatory change to expressly require that the prior work experience specified for immigrant religious worker visa applicants be full-time work. 
The proposed rule also states that the documentation supporting an applicant's petition must indicate that the religious worker will be working for the religious organization in the United States on a full-time basis. INS officials stated that INS is changing the regulation to address the problem of individuals doing part-time voluntary work for a religious organization while working full-time in a secular occupation. They said an applicant's ability to demonstrate 2 years of prior full-time, paid religious work experience is a good indication that the individual is a committed religious worker. They also believe such experience is a good indicator that the individual will be doing full-time religious work for which the organization will pay a salary. The proposed changes were initially published for comment in June 1995. According to INS, it plans to finalize the regulatory change in October 1999. INS is also considering revising its requirements for the documents that must be initially submitted by the petitioner for an immigrant visa. Such documents could include pay stubs to show that the worker was compensated for full-time work and bank statements to demonstrate that the organizations have sufficient financial resources to support their worker or workers. INS has the authority to ask for additional evidence to verify information in the petitions, and some INS reviewers will defer making a final decision until the organization furnishes this type of supplemental information. The suggested change involving additional documentation would increase the amount of information that all organizations must initially submit and that all adjudicators would use to review petitions. By initially requiring more specific documents and by clarifying the full-time religious work requirements, INS may also reduce the number of filings by unqualified applicants. INS has no timetable for implementing the changes to the requirements for supporting documents. 
However, INS officials stated that they might revise the documentary requirements after the agency redrafts or finalizes its proposed rule change on the full-time work experience requirement. According to a State official, State would participate in any changes to the requirements for immigrant visas and publish visa regulations jointly with INS. He said that, if appropriate, State would revise its documentary requirements for nonimmigrant visas to correspond with INS' suggested revisions for immigrant visas.

Until recently, reviewers could not quickly and efficiently determine how many filings had been made by a petitioning organization. As previously discussed, an organization's petitioning for numbers of workers that appear inconsistent with its membership size and financial resources to support the workers sometimes indicates fraud. However, until a pattern had been identified, the reviewers could not know whether the petitions were potentially fraudulent. For example, one organization currently under investigation had 37 petitions approved in fiscal year 1996 and 76 petitions approved in fiscal year 1997 before the pattern was detected. While all service centers now have the capability to identify multiple filers, the California and Vermont Service Centers have developed their own systems using commercial off-the-shelf software (Microsoft SQL for California and Oracle and Access for Vermont), which they believe provides more efficient inquiry and reporting capability than the system provided by INS headquarters. In addition, the two service centers are in the process of consolidating their databases so that they can share data.

We visited 12 religious organizations in California, New York, Maryland, and the District of Columbia to discuss their experience with the religious worker visa program. The religious organizations generally believed that the program met their needs.
For example, several of the organizations use the program to meet the needs of growing ethnic congregations. One church with 7,000 members uses the program to provide workers to minister to its separate Filipino, Korean, Hispanic, and French- and English-speaking African congregations. A religious organization with a worldwide membership uses the program to recruit native speakers familiar with the religion to serve as religious translators and broadcasters. Another religious organization with 3 million members in more than 120 countries uses nonimmigrant religious workers to participate in church-sponsored community service programs.

We asked representatives of the organizations for their opinions of INS' proposed changes to the program. Of the seven commenting on the full-time work experience requirement, four stated that the proposal would not negatively affect their organization, because the majority of the applicants they sponsor for immigrant religious worker visas have already been serving in full-time capacities. However, three expressed reservations. For example, the representative of one religious organization stated that the requirement might adversely affect applicants who work for congregations in which ministerial duties are shared. The representative of another organization stated that the full-time work experience requirement could be problematic for those engaged in religious vocations if proof of paid full-time work were required, because such individuals often are not paid a salary. He said the requirement could also cause problems for some individuals who perform their religious duties part-time while studying for the priesthood or ministry. Three of the four religious organizations commenting on the proposed change in documentary evidence requirements stated that the change would not pose a problem.
However, the one organization opposed to the proposed change pointed out that INS already has the discretion to ask for additional documents when required and said that religious employers and applicants should not routinely be required to assume additional documentary burdens. Some representatives also stated that INS should avoid the appearance of deciding for a religious organization what constitutes religious work.

In addition to asking for their opinions on the potential modifications proposed by INS, we also asked the religious organizations for their suggestions for improving the program. Three of the organizations suggested making the special immigrant religious worker visa category permanent. The representative of one of the religious organizations said this would eliminate the glut of petitions that are filed before the "sunset" date. One organization that was familiar with student visas suggested that a sponsoring organization could submit to INS an annual status report on each of its nonimmigrant religious workers, much like academic institutions that must annually certify the status of foreign students. Another organization suggested that INS provide some materials concerning the religious worker visa program in foreign languages to help ensure that organizations fully understand the regulations and requirements of the program.

Both INS and State are attempting to balance the need to screen out unqualified applicants with the religious worker visa program's original purpose of facilitating the entry of qualified religious workers. The program modifications that INS is undertaking or plans to undertake to verify the accuracy of petitions for immigrant religious worker visas are reasonable steps to improve program integrity. If implemented, the modifications should help to better screen visa applicants and religious organizations.

In oral comments on a draft of this report, INS and State concurred with the report's findings and conclusions.
INS noted that its planned regulatory change and other steps underway to improve its screening process should help reduce the incidence of fraud. INS and State also provided technical comments, which we have incorporated as appropriate. To determine whether INS and State have data on any fraud in the religious worker visa program and to determine the nature of any abuse, we met with INS and State headquarters officials and visited three of the four INS service centers responsible for processing and approving religious worker visas. We also analyzed information from about 700 religious worker visa petitions denied by the California Service Center between January 1, 1996, and August 18, 1997, and data from about 83 responses to a State Department survey of 100 of its overseas posts in February 1998. We met with officials at INS' New York District Office responsible for interviewing visa applicants, and other officials to discuss INS' efforts to identify patterns and trends in the use of the program that could indicate fraud. We also met with officials of State's Fraud Prevention Program and Office of Diplomatic Security to discuss State's efforts to identify and investigate religious worker visa fraud. In addition, we met with fraud investigators from INS' Los Angeles District Office to discuss specific fraud investigations and INS' processes for accepting, investigating, and resolving fraud cases. We interviewed INS and State officials to discuss their agencies' processes and procedures for determining if visa applicants and sponsoring organizations met program requirements. We reviewed the relevant law and related legislative history, the INS regulations, State's Foreign Affairs Manual, advisory cables to the overseas posts, and other guidance to determine what criteria the agencies use to judge petitions and applications. 
We observed the process for reviewing and approving visa petitions at three INS service centers in California, Texas, and Vermont and discussed with service center staff how petitions are evaluated and the limitations of the process. We also queried by telephone consular posts in India, Korea, Mexico, and the Philippines about their processes and procedures for reviewing and approving visa applications. We chose those posts because they process relatively large numbers of applications for nonimmigrant religious worker visas. To identify any steps INS and State had taken or planned to take to address identified problems, we met with INS and State officials. We discussed the potential effect of any proposed changes with representatives of the U.S. Catholic Conference, the General Conference of the Seventh-Day Adventist Church, the Christian Science Church, Agudath Israel of America, and the Lutheran Immigration and Refugee Service. We selected these organizations because they have testified in support of the special visa for religious workers or are otherwise considered knowledgeable about the program. We also discussed the proposed changes with churches and other religious institutions that use the program. We selected these organizations by extracting information from the California and Vermont Service Centers' databases of religious worker visa petitions approved in fiscal year 1997 to identify the churches using the program. We selected an illustrative sample of large, medium, and small users based on the number of each organization's approved petitions. We interviewed representatives of seven churches from this group. We conducted our review from February 1998 to November 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees. We are also sending copies to the Honorable Madeleine Albright, Secretary of State, and the Honorable Doris Meissner, Commissioner, INS. 
We will also make copies available to others upon request. Please contact me at (202) 512-4128 if you or any of your staff have any questions concerning this report. The major contributors to this report are listed in appendix II.

This appendix shows the number of religious worker visas issued from fiscal years 1992 to 1998 and the major countries of origin of visa recipients in fiscal year 1998. Figure I.1 shows that the number of immigrant religious worker visas issued since fiscal year 1992 has fluctuated, reaching the annual limit of 5,000 in fiscal years 1994 and 1997, when the program was scheduled to expire. Meanwhile, issuances of nonimmigrant visas have steadily increased since fiscal year 1992. Figure I.2 shows that South Korea and India were the top countries of origin of immigrant and nonimmigrant religious worker visas, respectively.

Richard Seldin
Pursuant to a congressional request, GAO provided information on: (1) the extent and nature of any fraud the Immigration and Naturalization Service (INS) and the Department of State have identified in the religious worker visa program; and (2) any steps INS and State have taken or plan to take to change the visa screening process. GAO noted that: (1) although INS and State have identified some program fraud through the visa screening process and investigations, they do not have data or analysis to firmly establish the extent of fraud in the religious worker visa program; (2) the nature of the fraud uncovered typically involved: (a) applicants making false statements about their qualifications as religious workers or their exact plans in the United States; or (b) conspiracy between an applicant and a sponsoring organization to misrepresent material facts about the applicant's qualifications or the nature of the position to be filled; (3) INS and State sometimes detect fraud schemes when a sponsoring organization petitions INS for hundreds of religious workers at a time; (4) in order to increase the availability of information necessary to allow reviewers to determine the eligibility of visa applicants and sponsors, INS, with State's support, is considering changes to the visa screening process; (5) these changes include: (a) having an applicant submit additional evidence of his or her qualifications; (b) having the sponsoring organization submit additional evidence regarding its ability to financially support the applicant; and (c) incorporating new software applications that alert reviewers to organizations filing petitions for numerous workers; (6) INS is also proposing a regulatory change to expressly require that the prior work experience specified for immigrant religious worker visa applicants be full-time and that the individuals work for the religious organization in the United States on a full-time basis; (7) the religious organizations GAO met with believe 
the program meets their needs; and (8) of the seven organizations commenting on the proposed regulatory change, three opposed it because some part-time religious workers who are currently eligible may no longer qualify.
Federal efforts to use collaboration, broadly, and collaborative resource management more specifically have their roots in natural resource and environmental law, litigation, and alternative efforts to resolve environmental conflicts. Beginning in the 1960s and 1970s as environmental concerns over species, wilderness preservation, and air and water pollution heightened and legislation to protect different resources followed, litigation over land and resource use became more common. In the 1980s and 1990s, a number of factors, including court decisions and regulatory and economic changes, resulted in decreased timber harvests and increased scrutiny of grazing on public lands. In the 1990s, concerns over pollution and resource problems that cross property lines—such as water quality or endangered species—increased, and sometimes resulted in litigation. Also during this time, development of private lands posed increased threats to habitat, water quality, rural lifestyles, and wildlife, including threatened and endangered species. Over the same time frame beginning in the 1970s, environmental conflict resolution began to evolve as an alternative way of dealing with environmental disputes outside of the courts. This approach uses facilitation, mediation, and other methods to negotiate solutions among disputing parties. It also involves collaborative efforts to solve problems and conflicts before they have a chance to fully develop. In the 1990s, as these alternatives to litigation became more established, two laws were enacted authorizing their use by federal agencies and the U.S. District Courts—the Administrative Dispute Resolution Act of 1996 and the Alternative Dispute Resolution Act of 1998. Also in 1998, legislation created the U.S. Institute for Environmental Conflict Resolution, a federal institute to assess, and assist in resolving, conflicts related to federal land, natural resource, or environmental management. 
Throughout the 1990s, some communities facing natural resource problems decided to use alternative approaches to solving associated conflicts, forming grassroots groups of diverse stakeholders to discuss the problems and develop solutions. The collaborative groups that formed often included federal land and resource management agency representatives as participants. Recognizing the value of these groups, the federal land and resource management agencies began developing programs in support of such efforts. The agencies have been working collaboratively with communities for a long while, but placed increased emphasis on collaboration in the 1990s. Specifically, in 1997, the Forest Service began a partnership program to gather guidance and information on how best to work with local communities. In 2003, Interior began an effort to focus on working cooperatively with local communities on conservation activities, both on public and private lands. In addition, the U.S. Fish and Wildlife Service has a program, called Partners for Fish and Wildlife, to work with private landowners to provide technical and financial assistance in protecting threatened and endangered species on their lands. More recently, the federal land and natural resource agencies have been authorized by specific legislation to collaborate with nonfederal parties on specific resource problems. For example, both BLM and the Forest Service received authority to use stewardship contracts—which allow them, for example, to use the value of products sold, such as timber, to offset the cost of contracted services such as removing small trees and brush from the forest—to achieve national forest land management goals that meet local and rural community needs. In 2004, the President signed Executive Order 13352 introducing the Cooperative Conservation initiative to increase the use of collaboration and other processes for managing land, natural resource, and environmental issues. 
The order directed the Secretaries of Agriculture, Commerce, Defense, and the Interior, and the Administrator of the Environmental Protection Agency to carry out natural resource and environmental laws in a manner that facilitates “cooperative conservation.” The order defined this as “actions that relate to the use, enhancement, and enjoyment of natural resources, protection of the environment, or both, and involve collaborative activity among Federal, State, local, and tribal governments, private for-profit and nonprofit institutions, other nongovernmental entities and individuals.” The Executive Order is being carried out by CEQ, in its role coordinating federal environmental efforts and working with agencies in the development of environmental policies and initiatives. Also involved is OMB, in its role overseeing the preparation of the federal budget and supervising executive branch agencies. OMB evaluates the effectiveness of agency programs, policies, and procedures and ensures that agency reports, rules, testimony, and proposed legislation are consistent with the President’s budget and with administration policies. In addition, OMB oversees and coordinates the administration’s procurement, financial management, information, and regulatory policies.

While collaboration refers broadly to the way different groups work together to achieve a common goal, collaborative resource management efforts involve multiple parties joining together voluntarily to identify environmental and natural resource problems and goals and to design activities and projects to resolve the problems and achieve their goals. The federal agencies work with collaborative resource management groups using partnership tools, which are cooperative or voluntary agreements among the federal and nonfederal groups to share resources and achieve the objectives of all parties. Each of the four major federal land and resource management agencies—BLM, U.S.
Fish and Wildlife Service, and National Park Service within Interior, and USDA’s Forest Service—has a complex mix of legislative authorities that allow it to create and fund partnerships. In the simplest form, a partnership can exist without any exchange of funds or items of value from the federal agency to a nonfederal group, and a memorandum of agreement or understanding is used to describe the details of the arrangements. In cases when federal funds or property are provided to nonfederal entities as part of a partnership, the agencies use different instruments, such as grants or cooperative agreements, to document the agreement and work to be done.

Collaborative resource management efforts can involve any mix of the nation’s 2.3 billion acres of federal, state, local, private, or tribal land. Historical settlement and development of the nation resulted in the intermingling of lands among these different entities. As shown in figure 2, about 60 percent of the nation’s land, or almost 1.4 billion acres, is privately owned and managed, while more than 27 percent, or about 628 million acres, is managed by the four federal land and resource management agencies. More than 43 million acres, representing almost 2 percent of the nation’s land, are owned and managed by other federal agencies for purposes such as military installations and water infrastructure. About 8 percent of the nation’s land, or 195 million acres, is owned and managed by state and local governments, and more than 2 percent, or about 56 million acres, is held in trust by the federal government for Native American tribes.

Collaborative efforts are governed by a framework of federal, state, and local laws, as well as federal Indian law and tribal law, that determine how management activities, including collaborative management activities, are carried out. These efforts often involve coordinated decision making for management activities that the collaborative groups undertake.
Each land and resource manager or landowner, including federal agencies, retains decision-making authority for the activities that occur on its respective lands and follows applicable requirements to implement them, although the federal agencies may work with other group members to develop and consider plans and gather information and community input. When collaborative activities occur on private lands, individual landowners make decisions about the activities that occur subject to applicable federal, state, and local laws, and decide whether and how to share information related to their lands with members of the group. Collaborative management activities on federal lands are governed by federal resource and environmental laws.

Overall, the four federal land management agencies manage their lands for a variety of purposes, although each agency has unique authorities that give it particular responsibilities. Specifically, both BLM and the Forest Service manage lands under their control for multiple uses and to provide a sustained yield of renewable resources such as timber, fish and wildlife, forage for livestock, and recreation. On the other hand, the National Park Service’s mission is to conserve the scenery, natural and historic objects, and wildlife of the national park system so that they will remain unimpaired for the enjoyment of current and future generations. The U.S. Fish and Wildlife Service, under its authorities, manages refuges for the conservation, management—and where appropriate—restoration of fish, wildlife, and plant resources and their habitats within the United States, for the benefit of present and future generations.

Other federal agencies—including the military services in the Department of Defense and the power marketing administrations in the Department of Energy—have land and resource management responsibilities that may cause them to become involved in collaborative efforts.
The military services—the Army, Navy, Marine Corps, and Air Force—use their lands primarily to train military forces and test weapon systems, but are required under the Sikes Act of 1960 to provide for the conservation and rehabilitation of natural resources on military lands. The power marketing administrations—which include the Western Area Power Administration, Bonneville Power Administration, Southwestern Power Administration, and Southeastern Power Administration—sell and deliver power within the United States on hundreds of miles of transmission lines across public and private land using rights-of-way. Under the Energy Policy Act of 2005, transmission owners, including the power administrations, must maintain the reliability of their transmission systems, which includes managing the vegetation on these rights-of-way so that power lines are not compromised. Lines may be at risk from trees falling on them, electrical arcing from a power line to a tree or other objects in the right-of-way, or forest fires. Other agencies, such as the Department of Transportation and state transportation agencies, conduct activities that affect land and resources, and collaborate with agencies such as the U.S. Fish and Wildlife Service to manage the effects on wildlife and habitat.

Management activities that occur on federal lands, including those developed by a collaborative group, are subject to the National Environmental Policy Act (NEPA) of 1969 and the Endangered Species Act of 1973. NEPA requires that federal agencies evaluate the likely environmental effects of proposed projects and plans using an environmental assessment or, if the action would be likely to significantly affect the environment, a more detailed environmental impact statement. The scope of actions being analyzed under NEPA may encompass a broad area, such as an entire national forest, or a specific project, such as treatment of invasive species on several acres of land.
The federal agencies are mandated to include the public in the NEPA process through efforts such as providing public notice of meetings, making related environmental documents available to the public, and considering public comments. Under the Endangered Species Act, federal agencies are required to consult with the U.S. Fish and Wildlife Service to ensure that any activities they carry out do not jeopardize the continued existence of a threatened or endangered species or destroy or harm any habitat that is critical for the conservation of the species. Collaborative activities that occur on state, local, and private lands are subject to state and local laws that provide authority for numerous agencies to manage state and local lands and programs to protect and conserve natural resources, as well as generate revenue from these resources. Many states have trust lands that were granted to them at statehood by the federal government. These lands, which constitute 46 million acres of the continental United States, are typically managed to produce revenue for beneficiaries such as schools and other public institutions. As a result, the primary uses of these state lands are activities that may generate revenue such as livestock grazing, oil and gas leasing, hard rock mining, and timber. In addition, states regulate land and natural resource use through a variety of programs, such as wildlife management or forestry programs. Each state manages fish and wildlife through various programs, and these state wildlife programs typically manage certain species of wildlife as game for recreation purposes. These programs may also own and manage land with habitat particularly suited for game species, and sometimes provide protection for particular species of concern. State forestry agencies, which are also in every state, can manage their state forests for uses such as timber or recreation. 
Private landowners determine how, or whether, to implement collaborative activities on their lands, consistent with applicable federal, state, and local laws and zoning restrictions that regulate the types of activities that can occur on particular areas of land including open space, agricultural, residential, commercial, and industrial lands. For example, a nonprofit organization, such as The Nature Conservancy, can own land solely for conservation purposes, while a timber company uses its lands to harvest timber for profit. Private activities must also be consistent with applicable federal environmental laws such as the Endangered Species Act. Under the act, private landowners are not required to consult with the U.S. Fish and Wildlife Service on activities they conduct on their land, but the act prohibits them from “taking” a threatened or endangered species. In certain cases, private landowners may obtain permits for taking species if the taking is incidental to a lawful activity. To obtain such a permit, a landowner must submit a habitat conservation plan to the U.S. Fish and Wildlife Service that specifies the likely effect of the landowner’s activities on a listed species and mitigation measures that the landowner will implement. Landowners may also enter into voluntary safe harbor agreements with the U.S. Fish and Wildlife Service in which landowners manage habitat for endangered species in return for assurances that no additional restrictions will be imposed as a result of their conservation actions. Land use activities, such as harvesting trees for timber, applying fertilizer and pesticides for agriculture, and diverting water for irrigation or other use, can degrade air and water quality and habitat for wildlife. However, undeveloped lands used for forestry, livestock grazing, and agriculture—in addition to producing the nation’s food and fiber—are vital to the protection of the nation’s environment and natural resources. 
To encourage conservation on private lands used for agricultural and natural resource production, USDA operates approximately 20 voluntary conservation programs that are designed to address a range of environmental concerns—soil erosion, surface and ground water quantity and quality, air quality, loss of wildlife habitat and native species, and others—by compensating landowners for taking certain lands out of production or using certain conservation practices on lands in production. Among these programs, USDA’s Natural Resources Conservation Service manages the Environmental Quality Incentives Program, which promotes agricultural production and environmental quality as compatible national goals and provides technical and financial assistance to farmers and ranchers to address soil, water, air, and related natural resource concerns and to comply with environmental laws, and the Wetlands Reserve Program, which authorizes technical and financial assistance to eligible landowners to restore, enhance, and protect wetlands. Since its beginning as the Soil Conservation Service more than 70 years ago, the service has delivered its assistance to farmers and ranchers through partnerships with locally led conservation districts. Resource and land use decisions on Indian lands are governed by federal Indian law and tribal law. Federal Indian law includes relevant provisions of the Constitution, treaties with Indian tribes, federal statutes and regulations, executive orders, and judicial opinions that collectively regulate the relationships among Indian nations, the United States, and individual state governments. Tribal law includes the constitutions, statutes, regulations, judicial opinions, and tradition and customs of individual tribes. 
Experts whose literature we reviewed consider collaborative resource management to be effective in managing natural resources because it can reduce or avert conflict and litigation, while at the same time improving natural resource conditions and strengthening community relationships. The experts note that successful groups that are able to achieve these benefits use various collaborative practices. In addition, many experts cite limitations to collaboration and others question collaborative resource management efforts involving federally managed land, arguing that collaborative efforts can favor local interests over national interests, be dominated by particular interests over others, result in a “least common denominator” decision that inadequately protects natural resources, or inappropriately transfer federal authority to local groups. Experts view collaborative resource management as an effective approach for addressing natural resource problems compared with more traditional approaches, such as independent and uncoordinated decision making or litigation. They note, based on their research of many collaborative efforts, that collaborative resource management offers several benefits, including (1) reduced conflict and litigation; (2) better natural resource results; (3) shared ownership and authority; (4) increased trust, communication, and understanding among members of a group; and (5) increased community capacity, such as fostering the ability for community members to engage in respectful dialogue. In addition, experts say that effective collaboration can have different structures and processes, but use similar practices. According to the experts, collaboration can reduce conflict and litigation because it provides a way for people to become directly involved in resolving issues through face-to-face discussions and move beyond the impasse associated with more adversarial approaches. 
Experts say that the lawsuits, administrative appeals, and lobbying campaigns that have been associated with natural resource management in the past can be expensive and divisive and lead to delays in getting land management activities and projects accomplished. Such was the case in the Applegate watershed in northern California and southwestern Oregon in the early 1990s when years of adversarial conflict between environmentalists, the timber industry, and government agencies over forest management issues and litigation related to these issues had resulted in policy gridlock, with neither side able to effectively achieve its goals. In this case and in many others cited by the experts, stakeholders were driven to try collaboration because they were frustrated with a lack of progress through other means. Through face-to-face discussions, parties may be able to define solutions that meet their mutual interests and avert potentially costly litigation that requires winners and losers and, in some cases, results in delays. For example, according to one of the participants of the Blackfoot Challenge, one of the collaborative efforts we studied, the group was able to prevent litigation by an environmental group over water flows in the Blackfoot River in Montana by implementing conservation programs during drought that increased water levels in the river for fish. The experts noted that, in addition to reducing conflict, collaboration can lead to better natural resource results than traditional approaches. A collaborative process, with a range of stakeholders—from local citizens to agency technical specialists, and from environmentalists to industry representatives—incorporates a broad array of knowledge, which may include specialized local knowledge or technical expertise that would not be available to particular stakeholders or agencies if they were working alone. 
With input from a wide variety of stakeholders, collaborative efforts are often able to identify creative solutions to natural resource problems and make better, more-informed decisions about natural resource management. Because these decisions are made collaboratively and have concurrence from multiple affected stakeholders, solutions are frequently easier to implement with less opposition. A second collaborative effort we studied, the Cooperative Sagebrush Initiative, started in 2006 to involve multiple stakeholders in developing and implementing solutions to conserve sagebrush habitat. Another benefit noted by experts is that collaborative resource management creates shared ownership of natural resource problems among the stakeholders. The experts recognize that many of the nation’s natural resource problems that cross ownership boundaries are not amenable to traditional centralized government solutions through regulation and cannot be solved by single organizations. For example, problems such as the spread of invasive species, the decline of threatened and endangered species, the loss of open space from development and urban sprawl across agricultural landscapes, and non-point-source water pollution—pollution from diffuse sources—are just a few of the numerous challenges resulting from the independent actions of countless individuals. Collaborative efforts bring many of these individuals together, making progress toward resolving the problems possible. In addition, through collaboration, federal and state programs can be made locally relevant and decision making and progress are able to transcend political boundaries. Consequently, local stakeholders feel consulted and may view federal agencies as partners, and programs encourage joint stewardship of public lands. Experts also noted that collaborative resource management can increase communication, trust, and understanding among different stakeholders. 
The collaborative process can bring together stakeholders with divergent interests who may have no prior direct experience working together or have an adversarial relationship. As they work together to address a particular common natural resource problem, these stakeholders often begin to develop trust and increase communication. Furthermore, through such communication, stakeholders can become more informed about each other and the natural resource problem and develop an enhanced understanding of its complexities. For example, environmental and industry groups with divergent opinions about natural resource use may be represented in a particular collaborative effort. Through working together in collaborative groups and opening lines of communication, these stakeholders may learn to appreciate each other’s perspective by focusing on interests that they have in common. Experts have noted examples in which environmentalists learned to appreciate ranchers’ needs to earn a living through grazing livestock, timber companies acknowledged the value of healthy ecosystems, and federal agency technical experts recognized the importance of using traditional knowledge in land management practices. One of the collaborative efforts we studied, the Eastern Upper Peninsula Partners in Ecosystem Management, has shared information to improve forested habitat, including on private timber lands. In addition to improving relationships within a collaborative group, experts identify collaboration more broadly as a means to increase the social capacity of a community. Increased community capacity can include developing networks between the public and private sectors and enhancing the public’s engagement in issues affecting the community. The experts note that through increasing community capacity, collaborative groups may enable the community to deal better with future problems that arise. 
Collaborative groups that are able to achieve these benefits can be organized differently and have different decision-making and organizational processes, but use similar practices that distinguish them from more traditional groups and make their efforts more effective and potentially more successful. A collaborative group can be organized formally—such as a legislatively mandated advisory group or an incorporated nonprofit organization—or less formally, with loosely organized members and simple written agreements. Collaborative groups may also employ a variety of processes to manage their meetings and organizations and may strive to achieve different desired outcomes, such as sharing information on what each member is doing, partnering on particular management activities, or seeking agreement on how to manage natural resource problems. The eastern Upper Peninsula includes forests that have historically been managed for timber. The group focuses on about 4 million acres within the Hiawatha National Forest, the Seney National Wildlife Refuge, Pictured Rocks National Lakeshore, state lands, and privately-owned lands. The partners include the Forest Service, U.S. Fish and Wildlife Service, National Park Service, Michigan Department of Natural Resources, The Nature Conservancy, and companies owning private forest land. Seek inclusive participation. Most of the experts who wrote about collaborative practices noted that all stakeholders—individuals and organizations whose interests are affected by the process or its outcome—should be included in the process by participating or being represented. One expert suggested that such stakeholders may include those affected by any sort of agreement that could be reached, those needed to successfully implement an agreement, and those who could undermine an agreement if not included in the process. Some experts added that participation should be voluntary. Develop a collaborative process. Many experts noted that a collaborative process should be designed by the participants to fit the needs and circumstances of their situation. 
Some experts recommended that groups employ the assistance of a neutral facilitator with experience in building collaborative processes. According to some experts, the process should include decision and process rules to govern how the group operates. For example, collaborative groups may use consensus to make decisions, described by several experts as a process in which discussion proceeds until all viewpoints are heard and the stakeholders, or most of the stakeholders, are willing to agree to a conclusion or course of action. When using consensus, some experts note that a group should agree on what consensus means and what the responsibilities are for parties who disagree, such as providing an alternative. In addition to establishing decision rules, one expert noted that participants need to identify the roles and responsibilities for implementing an agreement and obtain commitment from the participants that an agreement will be implemented. Pursue flexibility, openness, and respect. According to many experts, flexibility, transparency, and respect should be built into the collaborative process. Flexibility is important in the process in order to accommodate changing timetables, issues, data needs, interests, and knowledge. Transparency and open communication are essential for maintaining trust and can be achieved through maintaining a written record of proceedings and decisions and ensuring that all parties have equal access to relevant information. Having a respectful process is also necessary to attain civil discourse in which participants listen to one another, take each participant’s perspectives seriously, and attempt to address the concerns of each participant. Building respect and openness involves accepting the diverse values, interests, and knowledge—including local knowledge—of the parties involved. Find leadership. Several experts identified the need for collaborative groups to find a credible leader who is capable of articulating a strong vision. 
According to the experts, a leader should have good communication skills, be able to work on all sides of an issue, and ensure that the collaborative process established by the group is followed. Experts noted that neutral facilitators can also function as leaders for a group. In addition, experts said that it is important to build leadership skills within the organizations participating in a group so that these leaders can effectively represent the interests of their organizations. Identify or develop a common goal. Most of the experts who wrote about collaborative practices noted the importance of groups having clear goals. In a collaborative process, the participants may not have the same overall interests—in fact they may have conflicting interests. However, by establishing a goal based on what the group shares in common—a sense of place or community, mutual goals, or mutual fears—rather than on where there is disagreement among missions or philosophies, a collaborative group can shape its own vision and define its own purpose. When articulated and understood by the members of a group, this shared purpose provides people with a reason to participate in the process. Develop a process for obtaining information. Some experts noted that effective collaborative processes incorporate high-quality information, including both scientific information and local knowledge, accessible to and understandable by all participants. As one expert noted, conflict over issues of fact is capable of incapacitating a collaborative process. Therefore, it is important to develop a common factual base, which can be accomplished by all participants jointly gathering and developing a common understanding of relevant data. This process allows the stakeholders to accept the facts themselves, rather than having the facts disseminated to them through experts. Leverage available resources. 
Many of the experts emphasized that collaboration can take time and resources in order to accomplish such activities as building trust among the participants, setting up the ground rules for the process, attending meetings, conducting project work, and monitoring and evaluating the results of work performed. Consequently, it is important for groups to ensure that they identify and leverage sufficient funding to get the group started and to accomplish the objectives. One expert noted that many collaborative groups are successful in attracting sufficient funding for restoration projects but have difficulty in securing funding for administration of the group. Provide incentives. Some experts note that economic incentives can help collaborative efforts achieve their goals. For example, by purchasing conservation easements, a group can give landowners incentives to help achieve the goal of preserving open space. A conservation easement is a restriction placed on a parcel of land that limits certain types of uses or prevents development from taking place in order to protect the resources associated with the land. By purchasing easements and thus creating an incentive for a landowner to keep the land in its current land use, the groups are able to keep the land from being developed, preserving open space and providing other ecological benefits. Monitor results for accountability. According to many experts, to be effective, the participants in groups need to be accountable to their constituencies and to the process that they have established. In addition, organizations supporting the process expect accountability for the time, effort, money, or patience they invested in the group. As a result, experts note the importance of designing protocols to monitor and evaluate progress toward a collaborative group’s goals, from both an environmental and a social perspective. 
Some experts recommend that collaborative groups use monitoring as a part of an adaptive management approach that involves modifying management strategies or project implementation based on the results of initial activities. While experts noted that these practices are commonly shared by successful collaborative groups, one expert said that the use of the collaborative practices does not guarantee a group’s success. To measure whether groups are successful, experts noted that two criteria can be used: (1) whether the groups were able to increase participation and cooperation and (2) whether they improved natural resource conditions. The first criterion measures success based on organizational factors and social outcomes, such as improved relations and trust among stakeholders. In many instances, the groups studied by one expert identified factors such as improved communication and understanding as their greatest success. Factors used by some experts to evaluate success in this respect include the perceived effects of the collaborative effort in building relationships, the extent of agreement reached, and education of and outreach to members of the community. The second criterion for success is based on whether groups have been able to improve natural resource conditions as measured by specific indicators, such as water quality, ecosystem health, or species recovery. Some experts note that to evaluate progress toward improving resource conditions, monitoring needs to be performed over a period long enough for change to occur and to focus on indicators that are associated with a group’s natural resource goals. Although collaborative resource management is generally viewed by the experts as an effective approach for addressing natural resource problems, many experts discussed two limitations to its use. 
First, the process of collaboration, which involves bringing people together to work on a problem and moving the group forward to reach a decision, can be difficult and time-consuming, particularly in the initial stages when the group is getting started, and thus require large amounts of resources, including staff and money. Even after a group has been working together for a period of time, there may be inefficiencies with the process as new group members need to be brought up to speed. Second, collaboration does not always work in providing the solution to all natural resource problems. In some instances, for example when there are irreconcilable differences among group members, agreement may not be possible. In other instances, one particular stakeholder may derail the process by refusing to cooperate. As a result, collaborative resource management is not applicable everywhere, and collaborative efforts may not be replicable. For example, collaboration may not work in a community deeply divided over a particular natural resource issue that has generated a long history of controversy and litigation even though a collaborative effort dealing with the same issue was successful in another community. Furthermore, some experts question whether collaborative resource management groups are equitable; have balanced power; produce solutions that are protective of the environment; and are accountable to the public, particularly in circumstances where federally managed lands are involved. A number of experts raised concern over the equity of collaboration, noting that it can remove discussions from the public arena and empower those who are involved in the group at the expense of those who cannot, or choose not to, participate even though they have a legitimate interest. By their nature, collaborative groups tend to be primarily made up of local stakeholders. 
Yet, others who may not live in the community but have an interest in the lands because they recreate there, use water originating there, or value endangered species living there are sometimes left out of the process because they are unaware it is occurring or do not have the means or the resources to participate. For example, national environmental organizations cannot always participate in local efforts because they may not have people at these locations or be able to bear the expense of traveling there. Some experts also question collaboration on the grounds that public processes may be co-opted by parties with particular interests who manage to control the agenda of the group. Many experts raising this question were concerned about local economic interests taking over a process and, because of their influence, overriding other interests. Yet, one expert noted concerns that the process could also be co-opted by environmental interests. Furthermore, some experts critical of collaborative resource management raised concerns about the efforts focusing on reaching a consensus decision. By trying to reach consensus, they argued, compromises are made that can result in a “least common denominator” solution, which some may view as less protective of the natural resources. Finally, a few experts criticize collaborative efforts designed to make decisions about management activities on federal lands because they believe collaboration reduces federal agencies’ accountability to the broader public. Specifically, some of these experts say that collaboration effectively transfers the authority to make land management decisions from the federal land management agencies to local citizens. Consequently, these experts argue that when collaborative groups make decisions related to federal land, the land and resource management agencies do not carry out their legal responsibilities to manage the public land and are not accountable to the public. 
In response to such questions raised about collaboration, other experts note that a well-designed and implemented collaborative process can avoid some of the outcomes with which the critics of collaboration are concerned. For example, a process that is inclusive will incorporate both local and national interests, and a process that uses the leadership of a neutral facilitator can help to ensure that all viewpoints are considered and prevent any one group from taking over the process. Furthermore, one expert notes that a well-designed collaborative process that includes debate over the facts of an issue can avoid a “least common denominator” solution. Finally, according to an expert, when participating in collaborative groups that are transparent, federal agencies can show that they are not improperly transferring authority to local communities. Overall, the collaborative resource management efforts that we studied were successful in achieving participation and cooperation among their members and sustaining or improving natural resource conditions, the two criteria the experts identified to gauge the success of collaborative groups. Six of the seven collaborative efforts we studied have reduced or averted the kinds of conflicts that often arise when dealing with contentious natural resource problems, particularly those that cross property boundaries, such as threatened and endangered species, lack of wildland fire, invasive species, degraded wildlife habitat, or similar problems. However, the extent of resource improvement across broader landscapes that the efforts were working in was difficult to determine because the landscape-level data needed to make such determinations were not always gathered. The seven efforts we studied managed natural resource problems that can often cause conflict and controversy, and sometimes litigation. 
As shown in table 1, the natural resource problems undertaken by the seven efforts we studied ranged widely from fragmented riparian habitat for fish and lack of wildland fire in rangeland ecosystems to predator interactions with livestock, travel access in wilderness areas, and nature-related outdoor activities. Each of the natural resource problems the efforts managed, or are managing, involves many different interests that can potentially lead to conflict among the different members of the group. For example, in the Blackfoot Challenge case, federal agencies are required to protect threatened and endangered species such as the grizzly bear and the gray wolf, yet ranchers fear these large predators because of the harm they can cause to livestock. Or, in the Uncompahgre Plateau example, as a result of the Energy Policy Act of 2005, transmission line operators must ensure that their power lines remain reliable, which has traditionally involved clear-cutting the rights-of-way, even on public lands. Meanwhile, natural resource managers seek to provide habitat for lynx and deer and to prevent large openings in the forest that may come with utility corridors. The natural resource problems and potential or actual conflicts managed by each of the groups are described in more detail in appendix II. As table 1 shows, six of the seven efforts were able to identify solutions to their natural resource problems that met their common interests. For example, by developing the concept of a credit system, the Cooperative Sagebrush Initiative has identified a way to encourage—and pay for—preservation and restoration of sagebrush habitat while also allowing for the development of sagebrush in areas that are economically or otherwise important. In another example, the Onslow Bight Forum identified lands that were important to preserve and restore as habitat for different species and purchased these from willing landowners. 
Because the groups can pool their funds, they are able to purchase more properties and more expensive properties, and by purchasing the land on the free market from willing owners, the group provides the landowners with the value of their property, thereby not harming their economic interests. While the seventh group—the Steens Mountain Advisory Council—was able to provide advice on a cooperative management plan and vegetation treatment plans, it did not provide input on a travel management plan for the area, a key management issue. All seven efforts we studied used several of the collaborative practices identified by the experts—such as seeking inclusive participation; using collaborative processes; pursuing flexibility, openness, and respect; and finding leadership—and six of the efforts were successful in reducing or averting conflicts. These six groups were able to cooperate and focus on their common interests and goals, despite different perspectives and interests among the members. In addition to identifying common goals, several of the successful efforts were able to use other practices, such as obtaining scientific and other information to inform their decisions, leveraging funds, and providing incentives. The one effort that has been less successful in dealing with conflict used several of the collaborative practices, but does not have a common goal and does not have funding to gather information, leverage resources, or provide incentives. The Steens Mountain collaborative effort is located in southeastern Oregon. The effort is focused on about 496,000 acres of high desert mountain area that has great ecological diversity and varied wildlife. The primary resource concerns on Steens Mountain include issues related to livestock grazing, wilderness, travel access, and management of junipers that have encroached into sagebrush and grassland areas. 
In 2000, the Steens Mountain Cooperative Management and Protection Act established the area and tasked the Steens Mountain Advisory Council with providing innovative and creative suggestions to the BLM on how to manage the natural resources on Steens Mountain in a manner that would alleviate conflict. The Steens Mountain Advisory Council includes local ranchers, recreationists, and environmental representatives. Seek Inclusive Participation. The seven groups each have members that have multiple different perspectives, such as private landowners, conservation groups, natural resource land management agencies, and wildlife agencies. Most of the groups include representatives from federal agencies such as BLM, the Forest Service, and the U.S. Fish and Wildlife Service, and several include USDA’s Natural Resources Conservation Service. All but one of the groups we studied were primarily organized around landowners and managers who can make decisions about their respective lands, including members of conservation-oriented groups such as The Nature Conservancy and local conservation groups such as the North Carolina Coastal Land Trust and North Carolina Coastal Federation. Two groups, the Blackfoot Challenge and the Malpai Borderlands Group, focus primarily on private lands and the surrounding public lands. On the other hand, the Uncompahgre Plateau, Onslow Bight Forum, and the Eastern Upper Peninsula Partners in Ecosystem Management include large areas of public lands, with the exception of lands owned by the land conservancy groups in North Carolina and several forest companies in Michigan. While the groups are open to other participants such as environmental groups, according to several participants, they may not seek them out or the environmental groups may not participate. 
All but one of the groups have self-selected membership, which means that they attract members who are interested in working on the problems identified by the group and are willing to find solutions to these problems, which may not be the case with certain organizations. Only one group, the Steens Mountain Advisory Council, is required by law to include certain members, among them representatives of the ranching and environmental communities, with one local and one national representative from each. Develop a Collaborative Process. The seven groups we studied are organized differently but are each organized to collaborate. Three of the groups—the Blackfoot Challenge, the Cooperative Sagebrush Initiative, and the Malpai Borderlands Group—have incorporated as nonprofit organizations, each with a board of directors, and one—the Uncompahgre Plateau Project—has a separate nonprofit financial management group. According to members of one group, being incorporated allows the group the autonomy to raise funds and complete management projects on its own, without relying on the federal or state agencies. Also, incorporating puts the groups on equal footing with the agencies as they identify projects with mutual benefits. Of the remaining three groups, two are less formally organized and one is more formally organized. The Onslow Bight Forum and the Eastern Upper Peninsula group function as information-sharing groups that allow the individual members to determine what actions they will take independently. The Onslow Bight Forum uses a memorandum of understanding to identify the role of each member and the group, while the Eastern Upper Peninsula group does not have any organizational documents and operates informally. Finally, the last group—the Steens Mountain Advisory Council—is a legislatively organized advisory group for BLM and has written protocols to describe its organization and processes. All but one of these groups use a consensus process to make decisions. 
This process involves all participants, focuses on solutions, and proceeds until agreement is reached. For example, participants of one group, the Blackfoot Challenge, said that its members followed the 80-20 rule—they worked on 80 percent of the items they could agree on and left the 20 percent they could not agree on at the door. The participants said that as they worked together longer, the 20 percent of items that cause disagreement have been reduced as well. Two groups—the Onslow Bight Forum and the Eastern Upper Peninsula group—do not make formal decisions, but use a consensus process in discussing and agreeing on a plan of action that members can decide to take or not. One group, the Steens Mountain Advisory Council, uses a voting process to make certain decisions rather than a consensus process. To make a recommendation to BLM, the advisory council is required to have 9 of its 12 members vote in favor of it. According to the members, unfilled positions and poor attendance at council meetings have made it difficult to achieve the number of votes needed to make recommendations to BLM. Pursue Flexibility, Openness, and Respect. All but one of the groups have flexible and open processes that allow the members to discuss their positions. Two of the groups—the Onslow Bight Forum and the Eastern Upper Peninsula group—would not likely exist without the openness that allowed the members to retain their own missions and land management goals rather than the group subsuming them. Several of the groups, such as the Uncompahgre Plateau Project, use Web sites and plans to communicate with each other and the community. 
On the other hand, the Steens Mountain Advisory Council is different from the other groups in that it was legislatively created, and the act that created both the Steens Mountain Cooperative Management and Protection Area (CMPA) and the council resulted from lengthy negotiations among several parties, some of whom are, or have been, represented on the council. The group has used facilitators to overcome some of the conflict that developed through the negotiations, but some acknowledge that the council established by the act has not yet resolved key conflicts over management of the area. Yet, some of the members we interviewed were hopeful that a change in members that occurred recently might help to invigorate the group. The Onslow Bight Conservation Forum is a collaborative group focused on the longleaf pine forests, estuaries, wetlands, and pocosins (wetlands on hills that form because of accumulated peat) in coastal North Carolina. The group formed in 2001 around issues such as increasing development and its effect on wildlife habitat, particularly that of the endangered red-cockaded woodpecker, and water quality. The Onslow Bight Conservation Forum is an information-sharing partnership of federal and state agencies and nonprofit groups who have signed a Memorandum of Understanding to identify opportunities to work together to conserve the natural resources of the Onslow Bight landscape. The members include the Marine Corps, Forest Service, U.S. Fish and Wildlife Service, North Carolina Department of Environment and Natural Resources, North Carolina Wildlife Resources Commission, The Nature Conservancy, the North Carolina Coastal Federation, and the North Carolina Coastal Land Trust. Find Leadership. All of the groups have benefited from the availability of community leaders or agency employees who could lead the group. Several of the groups were started by local community leaders who energized and engaged others to work with them, although the federal agency staff were working alongside the community leaders to support the efforts.
In particular, the Blackfoot Challenge, the Malpai Borderlands Group, and the Uncompahgre Plateau Project were started and sustained by community leaders, but these groups recognize the important contribution of the federal agency employees who were involved as well. On the other hand, federal and state agency employees took the lead in starting the Eastern Upper Peninsula group and were also important in the Cooperative Sagebrush Initiative, and federal agency staff worked with staff from The Nature Conservancy to start the Onslow Bight Forum. One community leader on the Steens advisory council has attempted to focus the group on its role and keep it on track for making recommendations to BLM. Identify a Common Goal. Of the seven groups we studied, six identified and shared a common goal. For example, the Onslow Bight Forum brought together diverse members with similar interests in preserving open space and habitat—the U.S. Marine Corps has an interest in preserving open space around its installations for safety reasons and to help save endangered species, and land conservation groups seek to preserve habitat corridors and prevent development of the rural landscape. Similarly, the Eastern Upper Peninsula group focused on the need to facilitate complementary management of public and private lands, for all appropriate land uses, and to sustain and enhance representative ecosystems in the Eastern Upper Peninsula. On the other hand, the Steens Mountain Advisory Council does not share a common goal for management of the Steens Mountain area: some members advocate motor vehicle access through wilderness areas for historical uses such as livestock grazing, while others advocate setting aside more wilderness areas in the planning area and instituting greater conservation requirements in the wilderness areas that already exist.
The Steens Mountain act established a cooperative management area, the purpose of which is to conserve, protect, and manage the long-term ecological integrity of Steens Mountain for present and future generations. To further this purpose, the act directed BLM to manage the area to achieve five objectives. Several participants indicated that the issue will need to be litigated to clarify the act’s requirements. The Uncompahgre Plateau Project collaborative group is located in southwestern Colorado. The group focuses its efforts on the Uncompahgre Plateau, which spans 1.5 million acres, 75 percent of which is public land. The plateau is home to abundant wildlife species, including populations of mule deer. The group formed in 2001 to protect and restore the ecosystem health of the plateau. In addition, key electrical transmission lines that connect the eastern and western United States cross the plateau, creating the need for vegetation management near these lines. The partners in the Uncompahgre Plateau Project include the Forest Service, BLM, Public Lands Partnership, Colorado Division of Wildlife, Western Area Power Administration, and Tri-State Generation and Transmission Association, Inc. The partners signed a Memorandum of Understanding and established an Executive Committee, to guide the project’s overall direction; a Technical Committee and contract employees, to carry out its activities; and a nonprofit organization, to handle its finances. Develop a Process for Obtaining Common Information. Each of the seven collaborative groups has established a group or process to jointly develop and use scientific information as part of their decision making, although some groups have done so more than others. For example, the Malpai Borderlands Group has a scientific advisory board to develop research projects on fire to support the group’s efforts to restore fire, which had been suppressed for decades, to the ecosystem to help restore healthy grasslands. It also holds annual science conferences to bring together the relevant scientific findings on rangelands, fire, threatened and endangered species, and other issues.
The group also works with USDA, Forest Service, and university researchers on vegetation and fire studies. On the other hand, rather than develop its own scientific information, the Cooperative Sagebrush Initiative relied on data produced by the U.S. Geological Survey on sagebrush habitat and studies completed by the Western Association of Fish and Wildlife Agencies to assess the status of sage grouse and the sagebrush ecosystem in the 11 western states involved. Several groups developed landscape maps to show different information. For example, the Onslow Bight Forum used habitat, biological, and other information to develop a landscape map of the key areas for habitat and preservation purposes. Finally, some groups, such as the Uncompahgre Plateau Project, reported that using scientific information, including field trips to demonstrate effects of their management activities, helped them to communicate their efforts to outside parties who may have otherwise been critical. The Malpai Borderlands Group collaborative effort is located on the border with Mexico in southern New Mexico and Arizona. The group formed a nonprofit organization in 1994 to work on restoring the natural fire regime, preserving large open spaces, and maintaining a rural lifestyle in the approximately 800,000 acres of desert grassland region that includes a mix of federal, state, and private lands. Obtain Funding. Several of the groups have been able to raise funds from sources such as state programs and foundation grants, and to use these funds in conjunction with federal partners’ funding to leverage the amount of work that could be done by the group. For example, the Blackfoot Challenge recently received an Ash Institute for Democratic Governance and Innovation award of $100,000, the Uncompahgre Plateau Project received $500,000 from the state of Colorado and $620,000 from the Ford Foundation, and the Malpai Borderlands Group received $8.5 million from its different fundraising efforts.
According to the Onslow Bight Forum, its members have raised as much as $75 million since 2001 from state and federal funds to acquire land, a process helped by the existence of the forum. On the other hand, the Eastern Upper Peninsula project and the Steens Mountain Advisory Council do not generate funding. The Eastern Upper Peninsula project members said they did not intend to raise funds because they did not intend to conduct joint projects, and the Steens group is not organized to raise funds. The federal legislation that created the Steens Mountain Advisory Council authorized $25 million to be appropriated to BLM to work with local ranchers, landowners, and others to conduct work in the cooperative management area; however, these funds have not been provided. Some members said that, if provided, these funds could be used to pursue activities such as purchasing private inholdings, which are privately owned lands within the boundary of a national park, forest, or other land management unit. The Malpai Borderlands Group was initiated by a group of ranchers and environmentalists. Federal agencies, including the Forest Service, U.S. Fish and Wildlife Service, and Natural Resources Conservation Service; Arizona and New Mexico state agencies; and conservation groups, such as The Nature Conservancy, have played roles in the group’s efforts. Provide Incentives. Several of the groups we studied that have dealt successfully with conflict used different types of incentives to gain cooperation and participation. Such incentives include conservation easements, payments for projects or damages caused by wildlife, and different agreements related to threatened and endangered species. The Blackfoot Challenge, Malpai Borderlands Group, and Eastern Upper Peninsula project have arranged, or helped arrange, conservation easements to protect rangeland or forested land that could otherwise have been developed for housing.
The Malpai group also used another type of payment to help reduce conflict over livestock losses caused by predators, supporting a predation fund to pay ranchers when it can be proved that a predator—the jaguar in New Mexico and Arizona—has killed livestock. A third type of incentive, safe harbor agreements and habitat conservation plans, has been used by the Malpai Borderlands Group. Safe harbor agreements seek to assure landowners that if they restore or enhance habitat, they will not incur new restrictions if their actions result in a threatened or endangered species taking up residence. In order to obtain a permit to take a species incidental to lawful land management activities, a landowner must complete a habitat conservation plan, which specifies measures the landowner will undertake to minimize and mitigate the effect on the species. These agreements encourage private landowners to conduct projects that will protect species on their property, while also protecting their use of the land should they “take” one of the species—either by killing it or degrading its habitat. According to one group, these agreements can be complex and time-consuming to arrange, and thus it may be more efficient for the group to work with the U.S. Fish and Wildlife Service through the process than for each individual landowner to do so. In addition to these types of arrangements, the Cooperative Sagebrush Initiative wants to develop a related incentive, a conservation credit bank, in which one party would earn credits by paying to protect or restore sagebrush habitat and another party would purchase those credits to develop land in a way that would degrade sagebrush habitat or kill a species. The group is still considering how to measure the conservation value of different sagebrush species and the habitat they provide and how to monitor those values.
Through cooperating, five of the seven efforts we studied have accomplished multiple management activities and projects that have helped sustain or improve natural resource conditions in their areas. Officials of the five efforts that have completed resource management projects to date said that this work had improved resource conditions and helped to accomplish the goals the groups hoped to achieve. The Cooperative Sagebrush Initiative has not yet accomplished its work, as it started in September 2006 and is just developing demonstration projects. And although the Steens Mountain Advisory Council has helped BLM to develop a management plan for the Steens Mountain CMPA, the plan issued in November 2007 did not deal with the most contentious issues, which relate to travel access, wilderness areas, and wilderness study areas. Table 2 shows the work accomplished by the different efforts that we studied. As shown in table 2, the efforts’ accomplishments ranged widely, from developing joint plans and scientific information, to changing vegetation conditions and managing species habitat. For example, some of the groups developed landscape maps of vegetation and potential habitat that integrated information for each of the members in the group. The groups also accomplished numerous activities to keep landscapes open and usable for natural resource purposes, such as grazing or timber harvesting. At the same time, the groups worked on several projects to help conserve threatened and endangered species habitat. The two efforts that have not completed projects—the Cooperative Sagebrush Initiative and the Steens Mountain Advisory Council—have not moved beyond planning work. As shown in table 2, three of the groups—the Blackfoot Challenge, Malpai Borderlands Group, and Uncompahgre Plateau Project—have employed monitoring programs that demonstrate the effect of their activities on site-level natural resource conditions.
Monitoring environmental or natural resource characteristics is typically conducted at the site level—the area involved in a management activity, such as a vegetation treatment—to determine what effect the management activity has, or at the landscape level—a broad area—to determine the overall conditions across that area. Monitoring can also be conducted over time to indicate the trend in conditions at a site or landscape. Montana’s Department of Fish, Wildlife and Parks, one of the partners involved in the Blackfoot Challenge, conducts fish surveys in the Blackfoot River to determine how populations are faring. This work measures the benefits provided by the group’s riparian projects for fish populations, including endangered bull trout. The Malpai Borderlands Group conducts range monitoring on 290 sites in its area and conducts monitoring of some species to determine how they have been affected by group projects. The Uncompahgre Plateau Project maps its vegetation treatments and fires, and thus shows areas of different vegetation ages and types, and the habitat they provide, across the broad area managed by several agencies. Because the agencies’ mapping data are not compatible, however, staff said that they had to develop ways to merge the data, which is time-consuming and expensive. Through January 2008, the agencies, with the help of the group, had pulled together data for two large watersheds and had begun working on two more. The other groups do not conduct monitoring as a group, although the resource management agencies do track resources in some cases. Two of the seven groups—the Blackfoot Challenge and the Uncompahgre Plateau Project—monitor the results of some of their projects across the larger landscape to determine the effect of their work across the broad landscapes that they are trying to affect; however, the other groups do not conduct landscape monitoring.
According to two groups, they are not able to monitor across a larger area for two primary reasons. First, according to participants, it is time-consuming and expensive to monitor multiple sites regularly across a large area, and this is what is necessary to understand the effects of multiple projects in that large landscape. For example, even though the Malpai Borderlands Group monitors 290 sites for the effects of grazing, climate, and other factors on the condition of the grasslands—data that are useful for assessing the condition of a particular pasture or smaller area—according to the group’s scientists, the group does not collect comparable data across different pastures or smaller areas that would allow comparison across the broader landscape. Data must be collected at a different, broader scale and need to be collected consistently at specified locations to determine the condition of the hundreds of thousands of acres of rangeland that the group is helping to manage. Currently, the group and its scientific advisory board are considering what data to collect. The second reason that the groups do not collect data is that they either have not agreed to collect such data or have not agreed on the work that they will conduct and monitor. Two groups—the Onslow Bight Forum and the Eastern Upper Peninsula group—do not monitor because both of these groups organized to share information, not to develop joint projects and monitoring. According to some Onslow Bight members, it would be useful to track the results that individual members have accomplished with the group’s information, but the group has not decided to do this jointly or to dedicate the resources to it. According to the members of the Eastern Upper Peninsula group, their purpose has never been to jointly manage projects and therefore there is no need to monitor results.
The group’s purpose is to share information about natural resource problems, such as invasive species, and effective ways to treat them, without requiring the participants to work together. The group gives members a place to find common problems with other agencies, and then each agency or participant can conduct its work and monitor results accordingly. Finally, the Cooperative Sagebrush Initiative and the Steens Mountain Advisory Council do not conduct any monitoring because the groups are just beginning projects that warrant monitoring. The Cooperative Sagebrush Initiative recognizes the need for monitoring and has considered including the cost of monitoring in each project to ensure that it is conducted, but the group has not yet conducted any projects, nor has it conducted pilot projects to ensure that it can correctly measure the benefits achieved by restoration projects. At Steens Mountain, BLM has drafted an overall monitoring plan for the Steens Mountain area that may serve to monitor work accomplished. However, BLM has not yet conducted some of the key work the Advisory Council identified as needed because the agency is still conducting studies to determine how best to clear juniper in wilderness areas and wilderness study areas, where mechanical tools—the method that has been proven effective for removing large juniper trees—cannot be used to cut down trees prior to burning. Federal land and resource management agencies face several challenges to participating in collaborative resource management efforts, according to the experts, federal officials, and participants in collaborative efforts we interviewed. Key challenges that the agencies face include improving federal employees’ collaborative skills and working within the framework of existing laws and policies.
The 2004 Executive Order and 2005 White House Conference on Cooperative Conservation set in motion an interagency initiative, including a senior policy group, an executive task force, and working groups, to develop policies and take actions that support collaborative efforts and partnerships. The policies and actions taken as part of the initiative have made progress in addressing the challenges agencies face. However, additional opportunities exist to develop tools, examples, and guidance that would strengthen federal participation in collaborative efforts and better structure and direct the Cooperative Conservation initiative to achieve its vision. As the federal land and resource management agencies work to collaborate with state, local, private, and tribal entities, they face several challenges. The key challenges identified by experts, federal officials, and participants in collaborative efforts we interviewed include (1) improving federal employees’ collaborative skills; (2) determining whether to participate in a particular collaborative effort; (3) sustaining federal employees’ participation over time; (4) measuring participation and monitoring results to ensure accountability; (5) sharing agency and group experiences with collaboration; and (6) working within the framework of federal statutes and agency policies to support collaboration. The first challenge agencies face involves improving their employees’ skills in collaboration, as well as increasing the use of those skills. Such skills include improving communication, identifying and involving relevant stakeholders, conducting meetings, resolving disputes, and sharing technical information and making it accessible. Federal participants and others we interviewed indicated that federal employees are often technical experts, and improving their collaborative skills may enable them to work more effectively with a collaborative group.
They indicated that such skills are important to work effectively with neighboring landowners and community members who are interested in the projects and lands. Many participants emphasized that hiring new people with collaborative skills is one way to improve the level of collaboration by federal agencies and also said that training in collaboration for employees is important to improve skills. Some federal agency officials said that hands-on training in collaborative efforts, involving participants from other groups, is most helpful. Furthermore, to encourage the use of collaboration by federal employees, several participants we interviewed said that management should support field staff in their collaborative efforts. For example, one participant stated that management needs to identify those employees with collaborative skills and assign them according to these skills. Some participants said that senior employees may be better at collaboration because they have developed a relationship with the group or are more comfortable in interpreting laws and policy to apply in specific situations that might arise. Others said that new employees have enthusiasm and only need to be shown how they can best work with groups. Several participants said that federal agencies need to allow their staff to become acquainted with a community to work better with local groups, and others said that providing flexibility for the employees to work with the groups is needed. Finally, one participant we interviewed said that collaborative efforts will fail if federal management officials reverse the decisions made by the federal representatives working with a collaborative group because the group will no longer trust the federal agencies to do what they have agreed on. A second challenge agencies face in working with collaborative groups is determining whether or not to participate in a particular group. 
Collaborative efforts are commonly started by concerned citizens interested in the management of their public lands and, as a result, the federal agencies can choose whether to be involved and what role to play. If they make an uninformed choice, they risk becoming involved in a group that might take great effort and expend considerable staff resources with few results. Various external factors affect a collaborative group’s ability to cooperate and succeed, including a community’s collaborative capacity and the amount of controversy involved. If federal agencies do not understand these contributing factors, as well as the nature of the controversy related to a problem, federal staff may become involved in a collaborative effort that has little chance of working, potentially leading to increased conflict and costs. Part of determining whether to be involved is deciding what role the agencies can play. Participants we interviewed indicated that it is important for federal agencies to be involved in collaborative efforts because they are such large landowners, and, in many areas, natural resource problems cross their boundaries onto other lands. However, several participants—including federal agency officials—indicated that the agencies should “lead from behind,” letting the group take the lead in determining what work can be done. One participant said that by doing this, the community works out its issues and comes to a common understanding among its members—without the agency staff brokering the discussion. In such cases, the agencies can help the groups by providing planning assistance, technical information, funding, and even administrative support. In other cases, the federal agencies may want to use a collaborative group to provide input on a management plan or project, and in these cases, the agencies need to determine which groups to involve and what their particular natural resource management concerns are.
Regardless of the federal role in collaboration, experts and participants emphasized the need for federal agencies to clarify how a group’s agreed-upon ideas could affect decisions about federal land. Once federal staff have become involved in a collaborative effort, a third challenge becomes sustaining employees’ participation over time. This is particularly important because of the limited resources available in the field offices and the staff’s limited ability to participate while also conducting their work for the agency. Experts and participants we interviewed said that, to be effective, federal participation should be consistent and ongoing throughout the collaboration, which can last for many years. For example, participants of the Blackfoot Challenge and the Malpai Borderlands Group indicated that their groups had benefited from agency staff acting as liaisons to the groups for several years. These groups were highly organized in their efforts and worked with agency officials to create these relationships. However, at many of the field offices we visited, federal agencies were experiencing staffing limitations that made their work with existing collaborative efforts more difficult and limited. In particular, the federal agencies’ field offices had been downsized in the last several years and were one or two people below their normal staffing levels. As a result, the remaining staff members were spread thinly across existing programs to accomplish their work and achieve targets set by the agencies. According to the officials, these federal employees sometimes continued to participate in collaborative efforts but devoted less time and attention to them. For example, in North Carolina, federal officials for the National Park Service, U.S.
Fish and Wildlife Service refuge, and Forest Service had been involved in the Onslow Bight Forum efforts to map key habitat, but as their biologists left the agencies, the agencies became less involved and attended fewer meetings. Another issue related to staffing and federal agency support of collaborative efforts is the agencies’ practice of transferring people frequently from one field location to another. Participants said that longevity and a “sense of place”—or commitment to an area—are important for collaborating with groups whose participants may have been in an area for generations. A few participants thought that changing staff helped to bring in new people with energy and new ideas, but, according to several other participants, moving staff frequently creates a gap in the support for a group, which may hinder progress if a federal participant for a project moves at the wrong time. Some participants thought that the transition between outgoing and new federal staff could be eased by having the outgoing staff member write a memo describing the relevant details of the group, its members, its issues, and its projects, among other things; others thought that it would be better to rely on the other staff in the office or on group members for knowledge about the group, the community, and other factors that would affect the agency’s participation. Once a collaborative effort has begun, an important challenge faced by federal agencies and the members of the group is measuring participation and monitoring the results of the efforts. Measurement and monitoring allow members, both federal and nonfederal, to be accountable to each other and to the public. In the case of the federal agencies, measuring participation and monitoring results help show how an agency’s participation in a group has helped to achieve some important resource management goal for the agency.
According to federal officials we interviewed, agencies will be involved in collaborative efforts to the extent that the group can help them achieve federal land management goals and targets for work they are required to do. However, according to experts, federal officials, and participants, it is difficult to measure the results of collaboration because there is no direct measure or “widget” produced from participating or collaborating. For example, according to one participant, counting the number of meetings held does not measure collaboration, and, in fact, the number of meetings needed for a well-run group may decrease over time. Participants also said that it may take a few years to build a group and relationships before any work is accomplished, which may not fit with agency performance targets that are set annually. Moreover, experts said that monitoring the natural resource results of collaborative management is also difficult because of the long-term nature of ecological change. For example, it can take several years before the results of a management project can be seen or measured; at the same time, natural fluctuations in drought, vegetation, and species can mask the effects of management actions. To counter these difficulties, according to some participants we interviewed, groups need to have an overall plan for the improvements in natural resources they are working to achieve and should monitor against those goals. Even then, as the examples we studied show, collaborative groups have a difficult time monitoring because of the time and cost involved. A fifth challenge that the federal agencies face in participating in collaborative efforts involves sharing agency and group experiences with collaboration. By their nature, collaborative groups are decentralized and localized, with their members focused on the group’s management plans and activities.
According to experts and participants, these groups are each unique in their makeup, organization, circumstances, and abilities, yet can experience similar problems working together and with federal agencies. Some participants who had been involved in the White House Conference on Cooperative Conservation and other conferences stated that such forums are useful for giving groups the opportunity to share practical experiences of working together and with federal agencies. The types of lessons include the fact that groups can benefit from paid staff, even part-time, or a director to keep the group organized between meetings. Finally, agencies face the challenge of collaborating within the existing framework of federal statutes and agency policies that establish a management culture within each agency. In addition to the framework of natural resources and environmental laws and policies described above, agencies have a set of laws and policies for working with nonfederal entities or groups, including the Federal Advisory Committee Act, policies on ethics related to working with groups, and financial assistance requirements. Some experts and participants in collaborative groups identified aspects of federal laws and agency policies as being inconsistent with collaboration. However, aspects of the policies reflect processes established to support good government practices such as transparency and accountability. The federal agencies have not, in all cases, evaluated the laws and policies involved to determine how best to balance collaboration with the need to maintain good government practices. A short description of these laws and policies follows. Federal Advisory Committee Act: Some experts and collaborative groups assert that the Federal Advisory Committee Act inhibits collaborative management by imposing several requirements on interaction between federal and nonfederal participants.
For example, the act requires that all committees have a charter and that each charter contain specific information, including the committee’s scope and objectives, a description of duties, the period of time necessary to carry out its purposes, the estimated operating costs, and the number and frequency of meetings. The act generally requires that agencies announce committee meetings ahead of time and give notice to interested parties about such meetings. With some exceptions, the meetings are to be open to the public, and agencies are to prepare meeting minutes and make them available to interested parties. Some experts and others say that, by making the process bureaucratic, the act limits groups’ abilities to work together spontaneously to solve problems or get work done. USDA officials indicated that they have a budget limit on what they can spend on groups working under the act. Some participants in collaborative groups we interviewed said that the fact that the act’s requirements do not apply to privately led efforts is one reason for communities to lead collaborative efforts with assistance from federal agencies. Other participants said that the act’s requirements caused their groups to focus their goals solely on information sharing, because the group’s purpose would then not be to offer advice regarding agency decisions, and therefore the group would not be subject to the act. Ethics rules: USDA and Interior differ in how they implement federal ethics rules on federal employees’ participation on the boards of directors of outside organizations, resulting in their staff members participating in different capacities on groups’ nonprofit boards. The ethics rules generally prevent a federal employee from serving as a board member while serving in an official capacity for the federal agency because of concerns over conflicts of interest.
Waivers may be granted under limited circumstances; however, according to USDA and Interior officials, USDA rarely grants waivers, while Interior has granted some. As a result of the different implementation of the rules, in the Blackfoot Challenge case, a Forest Service member serves as a nonvoting board member, while BLM and U.S. Fish and Wildlife Service members serve as voting members. Several participants in the group expressed confusion and some distrust over the differing federal agency interpretations, saying that the differences raised questions about the Forest Service’s commitment to participate. Other groups that form nonprofit boards may face this same inconsistency. Financial requirements: Some groups receive federal grants or cooperative agreements that enable them to conduct activities that serve a public purpose. Nonfederal participants in collaborative efforts identified federal agency financial procedures for these grants and cooperative agreements that make it difficult for them to work collaboratively with the agencies. For example, some grants require that any interest earned be returned to the federal government, others require the group to raise funds to meet a share of costs, and still others do not allow the group to be paid up front, which is difficult for small organizations without much funding. In addition, several participants indicated that it is difficult to pull together funding over the long term from the numerous sources available—foundations, agencies, and fundraising activities—and that this is an ongoing struggle for groups. However, because federal agencies need to seek competing offers or applications for many types of grants and agreements, the agencies may not be able to provide stable funding to groups for very long.
For example, the participants of one group we interviewed recently learned that they would have to compete with others to renew their agreement, even though the group has ongoing management plans and projects with BLM and other agencies to provide long-term vegetation management across the agencies’ lands. As a result, the group was uncertain whether it would be able to carry out these long-term plans and projects, because it relies on this stream of funding to pay for part-time staff to organize the group and provide support for planning projects and reporting the results. One specific type of funding agreement that can help make collaboration work, identified by some federal officials we interviewed, is the watershed restoration and enhancement agreement. Under this authority, the Forest Service can use appropriated funds to enter into agreements with other federal agencies; state, tribal, and local governments; or private entities to protect, restore, and enhance fish and wildlife habitat and other resources on public or private land. However, the authority that allows this for the Forest Service—the Wyden Amendment—is set to expire in 2011. In addition, Interior officials stated that they do not have general authority to use their funds to restore or enhance resources on nonfederal land; however, they indicated that BLM, the U.S. Fish and Wildlife Service, and the National Park Service can fund projects on nonfederal land related to reducing the risk of damage from wildland fire. The agency officials who discussed these funding sources said that the ability to spend some of their funds on nonfederal lands enhances—or would enhance—their ability to work with partners in the community. Endangered Species Act requirements for listing species: Participants in the Cooperative Sagebrush Initiative identified several aspects of the Endangered Species Act that make collaboration difficult for them.
They have identified and proposed areas where they believe Endangered Species Act policies could be made more consistent with their collaborative effort. In particular, the group is planning to conduct restoration projects for sagebrush habitat, but, according to one participant, these restoration projects are scrutinized as closely as a destructive project would be in terms of the effect the project may have on a potentially endangered species such as the sage grouse. The group has proposed to Interior that the policy for listing species as endangered—the Policy for Evaluating Conservation Efforts—apply to its restoration actions because such actions might make listing unnecessary or might make listing requirements less restrictive. This policy identifies the criteria the U.S. Fish and Wildlife Service will use in determining whether formalized conservation efforts that have yet to be implemented or to show effectiveness contribute to making the listing of a species as threatened or endangered unnecessary. The group has also proposed other changes to Endangered Species Act regulations and policies that it says would support collaboration and its particular effort. For example, under current policies, the U.S. Fish and Wildlife Service treats the two types of species (threatened and endangered) in the same manner with regard to prohibitions on the taking of a species. The group has proposed that Interior relax the prohibition on the taking of threatened species, arguing that the Endangered Species Act allows threatened species to be treated in a different manner from endangered species. National Environmental Policy Act: Experts and participants have stated that NEPA hinders collaboration by essentially duplicating the public participation that occurs through collaborative efforts. Collaborative groups may develop a plan or project that they prefer.
For federal projects having a significant environmental effect, NEPA requires the development and analysis of a reasonable range of alternative actions, including the agency’s preferred alternative action, in an environmental impact statement. It also requires public participation in the development of the environmental impact statement. Because collaborative groups often include many of those interested in the natural resources or management being conducted, several participants said that the collaborative group provides the agencies with its preferred alternative and a good sense of the public’s opinion of the project. They believe, for this reason, that NEPA requirements are redundant in these cases. Building on the agencies’ earlier efforts to develop their partnership programs and abilities to work collaboratively, the 2004 Executive Order and 2005 White House Conference heightened attention to partnerships and collaboration across the federal government. After the White House Conference, a report entitled Supplemental Analysis of Day Two Facilitated Discussion Sessions (Day 2 report) was written summarizing the comments of numerous participants in collaborative groups and highlighting actions that the federal agencies could take to improve cooperation and partnerships. In response to the Day 2 report, a senior policy team—composed of the Chairman of CEQ, Director of OMB, and selected Deputy Secretaries of the departments—identified issues to be further addressed by an executive task force and working groups. The task force formed—or incorporated—working groups to address several overall themes identified in the Day 2 report: personnel competencies, training and development, legal authorities for cooperative conservation, conflict resolution, the Federal Advisory Committee Act, education, federal financial assistance, measuring and monitoring, volunteers, engaging the public, and Web site development. 
Table 3 shows the challenges we identified with input from experts, federal officials, and participants in our review; proposed actions from the Day 2 report that are responsive to the challenges; and the policies or actions taken by the task force working groups that address each challenge. As shown in table 3, several actions have been taken, including the development of policies, that have resulted in progress toward addressing several of the challenges agencies face in participating in collaborative efforts, but other opportunities exist to take actions that further address the challenges. The challenge of improving federal employees’ collaborative skills is being addressed by the personnel competencies working group. Through 2007, with the input of the Office of Personnel Management, this working group developed a set of collaborative behaviors for federal employees that some of the agencies have made part of their strategies to hire and train employees to improve their collaborative skills. According to Interior and Forest Service officials, senior executive service managers in the agencies are already rated on their ability to collaborate and on collaborative behaviors. Interior agencies are now considering how to incorporate these behaviors into personnel rating systems for other federal officials and staff, and the Forest Service has revised its employee rating system and incorporated the collaborative competencies into the new system for both managers and employees. In addition, the training and development working group identified and published appropriate training courses offered by each of the land and resource management agencies. For example, BLM and the Forest Service offer a series of courses that include collaborative behavior, and BLM offers one course that visits a community and trains community and agency members on how to work as a group.
According to a member of the working group, the idea of experience-based training, in which staff would visit and work with an experienced group, had been developed, but none of the agencies had adopted it at the time of our review. Furthermore, in 2005, CEQ and OMB issued joint guidance, developed by a broad interagency task force convened by the U.S. Institute for Environmental Conflict Resolution, to encourage agencies to use collaborative problem-solving and elaborate on the principles of collaboration. According to officials, the institute also offers a series of courses on collaboration that federal agencies can take. The twin challenges of determining (1) whether to participate in a particular collaborative effort and (2) how to sustain federal employees’ participation over time have not been addressed by policies or actions of the task force or its working groups. However, BLM published a collaborative guidebook in 2007 that includes a discussion of factors to consider in determining whether to collaborate. Similarly, the Forest Service’s Web site links to various partnership assessment tools created by the Natural Resources Conservation Service and private companies. In addition, the Forest Service developed an assessment document that guides an office through an analysis of its workload and how much time it can devote to a collaborative effort. The results of this analysis can help determine whether an office will be able to sustain its participation in a group. Finally, the Forest Service has adopted a tool developed with the Collaborative Action Team, called a transition memo, which allows an employee transferring locations to leave detailed documentation about the community, groups, leaders, and other information for the person coming into the position.
While these separate tools are available to the individual agencies that developed them, they have not been shared or adopted more broadly among the federal agencies to help them in making decisions about whether and how much to participate in particular collaborative efforts. Without tools to assess these aspects of collaboration, particularly as the agencies increase their ability and efforts to participate in collaborative efforts, agencies may be more likely to get involved in unsuccessful efforts. The challenge of measuring participation and monitoring results of collaborative efforts, as shown in table 3, has been partly addressed by the measuring and monitoring working group. Through September 2007, the working group gathered, reviewed, and analyzed tools that measure and monitor how cooperative conservation activities help achieve environmental protection and natural resource management goals. For example, the working group discussed different means to demonstrate the leveraging power of partnerships and collaboration. Some of these tools can also help people engaged in partnerships and collaborative efforts monitor how they are doing and improve their efforts during the process. In addition, the working group identified a few resources that discuss, in general, the monitoring of natural resource conditions. In October 2007, the group posted a variety of tools on the Cooperative Conservation Web site, which is an initial step toward addressing this challenge. However, actions that would more fully address natural resource monitoring—the Day 2 report indicated that project monitoring protocols would be useful—have not been taken by the task force or working groups. CEQ officials indicated that an ongoing effort on key national indicators might help to address this aspect of the challenge.
However, until guidance or protocols on natural resource monitoring for collaborative groups are provided, federal agencies and groups will be unable to track and report their progress to Congress, the communities, or other interested parties. The challenge of sharing experiences among agencies and groups has been partly addressed through the actions of the outreach working group, which has developed an official Web site and examples of collaborative experiences. In addition, in 2007, the Collaborative Action Team started WestCAN, facilitating the development of a network of people familiar with cooperative conservation. Other actions identified in the Day 2 report that could be taken and would address this challenge include organizing and supporting annual conservation conferences. As of October 2007, the agencies had held nationwide listening sessions but had not held or proposed any further conferences on cooperative conservation, either nationally or regionally. Federal officials indicated that such meetings can be expensive and time-consuming to organize and that they would like others to take the lead in organizing them. They also indicated that it is important to have clear goals and objectives for such meetings and that the meetings should lead progressively to achieving these goals and objectives. Individual agencies have held conferences in the past; they also meet regularly with nonprofits interested in the collaborative approach through the Collaborative Action Team. However, these meetings and tools may not provide the opportunity for the different agencies and groups to meet and share information and possible solutions, or the face-to-face experiences that participants in the conference found valuable. Without such meetings, it would be difficult for groups to meet periodically to generate ideas and share information or to develop a cooperative conservation network.
The challenge of working within the agencies’ legal framework is being addressed, as shown in table 3, by several actions. At a broad level, the legal authorities working group worked with the agencies to publish a compendium, for each department, of the authorities that allow and support collaboration, which will help agency staff who are working with collaborative groups to understand the requirements that they face. More specifically, the status of actions to resolve perceived inconsistencies between the authorities and collaboration includes the following: The Federal Advisory Committee Act working group is streamlining requirements under the act, one of the primary pieces of legislation that agencies and participants in collaborative efforts have identified as inconsistent with collaboration. According to CEQ officials, the Federal Advisory Committee Act team has determined that flexibility exists within the current law and policy for groups and is developing the best way, such as through training, to share this information with agency staff and group participants. A legal analysis of the incentives and disincentives affecting collaborative groups—particularly those associated with the Endangered Species Act and NEPA—was an action proposed by the Day 2 report that has not been addressed by the task force or working groups. In addition, USDA’s and Interior’s different implementation of ethics rules has resulted in inconsistent decisions regarding federal employees serving on nonprofit boards. While no specific actions have been taken by the task force, Interior is evaluating regulatory and policy changes to the Endangered Species Act in response to the concerns raised during listening sessions held in 2006 and by the Cooperative Sagebrush Initiative. As of October 2007, Interior had not proposed any regulatory or policy changes to the Endangered Species Act.
Also, in October 2007, CEQ issued guidance on collaboration within the NEPA process that discusses using a collaborative group’s option as the preferred alternative in a NEPA analysis. The guidance resulted from the recommendation of a federal task force in 2003 and followed the issuance in 2005 of a report by the National Environmental Conflict Resolution Advisory Committee concluding that one way to achieve NEPA goals is for the federal agencies to use environmental conflict resolution practices, including collaboration. However, no evaluation or action had occurred as of October 2007 to resolve the inconsistent application by USDA and Interior of federal ethics rules. While these actions are addressing the Federal Advisory Committee Act, the Endangered Species Act, and NEPA, the federal financial assistance working group did not complete its task of evaluating the extent to which cooperative funding authorities could be enhanced to better assist collaboration. Because of the number and complexity of funding authorities, the working group determined that each department should undertake an analysis of its own financial assistance to collaborative groups. Through December 2007, Interior was considering its use of cooperative agreements and whether they could be used to support partners in conducting work that is mutually beneficial to the group and Interior agencies. In such situations, both the partners and the federal agencies bring resources to the table and both sides benefit from the work jointly conducted. However, an Interior official noted that laws related to federal contracting may limit the agencies’ ability to use these agreements in the absence of specific statutory authority to do so. In September 2007, an Interior official stated that the type of authority needed is reflected in authorities provided to the Natural Resources Conservation Service and other agencies that allow them to work with partners on mutually beneficial activities.
Through September 2007, the Forest Service had authority to use cooperative agreements with private and public organizations, including nonprofit groups, to perform forestry protection activities and other types of cooperative projects that provide mutual benefits other than monetary considerations to both parties. In addition, the agency has authority to work on mutually beneficial restoration projects under the Watershed Enhancement and Restoration Act, or Wyden authority, but this authority is not permanent, extending only to 2011. In late December 2007, Congress passed, and the President signed, the Consolidated Appropriations Act for fiscal year 2008, which included two provisions related to the agencies and cooperative agreements. The first provision authorizes Interior to enter into cooperative agreements with state or local governments or not-for-profit organizations if (1) the agreement will serve a mutual interest of the parties in carrying out Interior’s programs and (2) all parties will contribute resources. The second provision extended through 2010 the Forest Service’s authority to enter into cooperative agreements with state, local, and nonprofit groups if the agreement serves the mutual benefit (other than monetary consideration) of the parties in carrying out programs administered by the Forest Service and all parties contribute resources. However, the overall problem of facilitating collaborative partnership projects for collaborative groups and partners—in terms of interest, cost share, and other administrative matters—remains. For this reason, an overall evaluation of federal funding assistance and tools available for collaborative groups could help to identify the situations across agencies that hinder collaboration and the potential legal and policy changes that could be made.
Overall, the working groups and agencies have made some progress in developing policies and taking actions that address the challenges they face in working with collaborative groups. However, these challenges will not be fully addressed or solved in the short term. As indicated in the Day 2 report, the actions to be taken by federal agencies would require a sustained effort and a senior policy team with an overall strategy to sequence the many actions that need to be taken by multiple federal agencies. While the Cooperative Conservation initiative is being coordinated by a task force and working groups, both are temporary, formed by federal agency personnel who are interested in the cooperative approach but who, for the most part, have other full-time responsibilities. Because of this, the structure and direction—which include goals, actions, time frames, and responsibilities—of the initiative as it moves forward are uncertain. According to CEQ and agency officials, the task force working groups were organized to propose actions that could be taken in the short term; CEQ officials said that the senior policy team would meet to assess the status of actions and progress toward the vision laid out for the Cooperative Conservation initiative. As of December 2007, the policy team had not met, but CEQ officials expected it would meet after the issuance of the second annual report on the implementation of the Cooperative Conservation initiative. Currently, the task force is developing the report, which was expected to be issued in January 2008. Collaborative resource management offers federal land and resource management agencies a promising tool with which to approach the ongoing and potential conflicts that arise in managing the nation’s land and resources.
Compared with the alternatives—such as litigation or individual landowners making independent, potentially conflicting decisions about their separate parcels of land—collaboration provides groups a way to integrate multiple interests and achieve common goals. To date, federal land and resource management agencies have had some success in working with collaborative efforts. Moreover, the policies put in place through the Cooperative Conservation initiative move the federal government and agencies forward in supporting collaborative resource management efforts. However, based on the challenges that the agencies face in working with collaborative efforts, additional opportunities exist to enhance and effectively manage federal agencies’ participation in and support of ongoing and future collaborative efforts. Specifically, because federal agencies have limited resources and time, yet at the same time have multiple opportunities to collaborate, they need to be judicious in their decisions about collaborating with particular efforts and could benefit from guidance on how this can be done. This would involve disseminating existing tools that field offices can use to assess a community’s capacity for collaboration and the agency’s ability to participate. In addition, because the agencies are accountable to Congress and the public for achieving their land and resource management goals, it is important for them to be able to demonstrate the results that have been accomplished through collaborative efforts. This means that agencies and groups should be able to measure participation and monitor their progress, including monitoring the broader landscape-level effects that result from their collaborative efforts and projects. Furthermore, collaborative resource management is just beginning to emerge as one approach for federal land and resource agencies to work with local groups in ways that can reduce conflict and improve resources.
In addition to developing capability among agency personnel, federal agency support for this approach entails helping to create networks, identifying best practices, and generating new ideas. These outcomes can be achieved through facilitating the exchange of information and lessons learned among collaborative groups, as was done at the White House Conference. Federal support also involves an ongoing commitment to identify practicable legal and policy changes that could enhance collaboration. In particular, CEQ, OMB, and other federal agencies can evaluate and identify possible changes to federal financial assistance authorities and policies that make it difficult to work with partners. Also, USDA and Interior can identify a way to achieve more consistent results in determining participation by USDA and Interior employees on nonprofit boards. In the future, as the agencies participate in different collaborative efforts, additional situations may arise in which agencies need to seek ways to implement laws or policies in a manner that enhances collaboration. Finally, because collaborative resource management involves multiple departments and agencies facing common challenges and will take a sustained effort to implement, it is important that the effort have structure and long-term direction to ensure that it is ongoing and completed. Structure could be provided by continuing an interagency effort such as the Cooperative Conservation task force and its working groups. One way this could be accomplished would be by developing a memorandum of understanding between the participating agencies. Long-term direction to address common challenges could be provided by the memorandum of understanding or through another organizational document or plan that will steer the task force, working groups, and agencies toward realizing the vision of the initiative.
To enhance the federal government’s support of and participation in collaborative resource management efforts, we recommend that the Chairman of CEQ, working with the Secretaries of Agriculture and the Interior, direct the interagency task force to take the following actions:

1. Disseminate, more widely, tools for the agencies to use in assessing and determining if, when, and how to participate in a particular collaborative effort and how to sustain their participation over time.
2. Identify examples of groups that have conducted natural resource monitoring, including at the landscape level, and develop and disseminate guidance or protocols for others to use in setting up such monitoring efforts.
3. Hold periodic national or regional meetings and conferences to bring groups together to share collaborative experiences, identify further challenges, and learn from the lessons of other collaborative groups.
4. Identify and evaluate, with input from OMB, legal and policy changes concerning federal financial assistance that would enhance collaborative efforts.
5. Identify goals, actions, responsible work groups and agencies, and time frames for carrying out the actions needed to implement the Cooperative Conservation initiative, including collaborative resource management, and document these through a written plan, memorandum of understanding, or other appropriate means.

Furthermore, to ensure that federal agencies can work well with collaborative groups, we recommend that the Secretaries of the Interior and Agriculture take action to develop a joint policy to ensure consistent implementation of ethics rules governing federal employee participation on nonprofit boards that represent collaborative groups. We provided CEQ, Interior, and USDA with a draft of this report for review and comment.
Interior concurred with the conclusions and five of the six recommendations in the report, providing written comments that included additional information describing actions the department and its agencies are taking that they believe are responsive to our recommendations, some of which have been finalized since they received the draft report. We made changes to the report as appropriate to include this information, but we underscore that the recommendations apply more broadly to the federal agencies implementing the Cooperative Conservation initiative (see app. III). USDA provided oral comments also concurring with the conclusions and five of the six recommendations in the report. CEQ did not provide comments on the report. The departments neither agreed nor disagreed with our sixth recommendation that the Secretaries take action to develop a joint policy to ensure consistent implementation of ethics rules governing federal employee participation on nonprofit boards that represent collaborative groups. USDA’s Office of General Counsel, however, expressed concern that such a policy, while perhaps desirable, might not be feasible. The office said that the two departments may provide waivers based on each agency’s interests and distinct relationship with the collaborative group, that it is therefore not practicable to have a joint policy in advance of a particular request, and that consultation may not make the waivers more uniform. While we understand these concerns, we believe that such a consultation would either have resulted in a consistent recommendation in the case of the Blackfoot Challenge or, if it did not, would at least have provided a transparent response to the group and field offices seeking the waivers. We continue to believe that the departments should make a good faith effort to develop and implement a process that would be more transparent to the groups with which they work. Therefore, we did not change our recommendation.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of the Interior, Agriculture, and Defense, Chairman of CEQ, and Director of OMB, as well as other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions regarding this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. The objectives for this study were to determine (1) experts’ views of collaborative resource management as an approach for addressing complex natural resource management problems; (2) the extent to which selected collaborative resource management efforts have addressed land use conflicts and improved natural resource conditions; and (3) what challenges, if any, federal land and resource management agencies face in participating in collaborative resource management efforts and how the Cooperative Conservation initiative has addressed the challenges. For the first objective, to determine experts’ views of collaborative resource management as an approach for addressing natural resource problems, we examined the academic literature related to the topic. To identify relevant articles in the literature, we first interviewed experts who have studied collaborative resource management. Following GAO’s methodology for identifying experts, we started with knowledgeable individuals and agency personnel and asked them for referrals to experts. In an iterative process, we contacted these experts and asked them for nominations of other knowledgeable individuals. 
We interviewed over 20 individuals who could be considered experts, based on the nominations of others in the field. We asked these experts for references to articles on the collaborative resource management approach. We also identified articles through a search of four academic databases: Agricola, a database of articles relating to aspects of agriculture, forestry, and animal science; ProQuest Science Journals, a database of science and technology journals that includes literature on biology and earth science; ECO, a database of scholarly journals; and BasicBIOSIS, a database of biology and other life science-related journals. We searched these databases using the terms "ecosystem management policy" and "collaborative resource management policy," which produced over 950 articles in the four databases. We reviewed the abstracts of these articles and retained for a literature review only those articles appropriate for our work. This process yielded over 130 articles; for the review itself, we used the full text of each article, not just the abstract. To perform the literature review, one of two analysts (Analyst A or Analyst B) read each article and indicated whether its contents included themes related to our objectives, that is, the common practices, benefits, limitations, and critiques of collaboration. The analysts summarized information from the articles that was relevant to these themes and recorded it as statements in a database. To verify that the two analysts were extracting similar information from the articles, the analysts randomly selected 10 percent (13) of the total articles. For each of these 13 articles, if Analyst A had originally summarized and categorized relevant information in the article, then Analyst B independently performed the same tasks. Similarly, Analyst A reviewed the articles originally reviewed by Analyst B.
For each article, the verification work was compared with the original to determine whether the two analysts agreed on the presence of information in the article related to each theme. This analysis indicated that the two analysts were extracting comparable information from the articles. A content analysis was then performed on the statements. Each analyst classified the statements from the articles he or she had read as a benefit, limitation, or critique associated with collaborative resource management. The analysts then exchanged data and examined each other's categorizations to determine whether they agreed on classifying each statement from the literature review into the benefits, limitations, and critiques categories. The two analysts reviewed the statements they had placed into these categories and either concurred with the classification or noted the basis of disagreement. The analysts resolved all disagreements, reaching 100 percent agreement. Once the analysts had established a unified set of statements under each category (benefits, limitations, and critiques), each analyst independently grouped the statements under each category into similar components. The analysts' lists of components for each category were compared, discussed, and merged into one set. The components agreed upon for each category and a description of them are noted in table 4. After developing the categories and components, the analysts independently assigned each of the statements to one of the components. The analysts then discussed every statement to which they had assigned different components and reached agreement on the component for each statement. As a result, the analysts attained 100 percent agreement on the assignment of statements to components. Table 5 reports the number of statements that were assigned to each component.
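The verification sampling and agreement checks described above can be sketched in code. The following is a minimal illustrative example, not the report's actual procedure: it draws a roughly 10 percent verification sample of articles for double coding and computes simple percent agreement between two coders on the presence of each theme. All article identifiers, theme names, and codings below are hypothetical.

```python
import random

# Hypothetical theme labels drawn from the report's categories.
themes = ["benefit", "limitation", "critique"]

# Hypothetical codings: for each sampled article, the set of themes
# each analyst judged to be present.
analyst_a = {1: {"benefit"}, 2: {"benefit", "critique"}, 3: {"limitation"}}
analyst_b = {1: {"benefit"}, 2: {"benefit"}, 3: {"limitation"}}

def verification_sample(article_ids, fraction=0.10, seed=42):
    """Randomly select a fraction of articles for double coding."""
    rng = random.Random(seed)
    k = max(1, round(len(article_ids) * fraction))
    return rng.sample(article_ids, k)

def percent_agreement(a, b, themes):
    """Share of (article, theme) judgments on which both coders agree."""
    agree = total = 0
    for article in a:
        for theme in themes:
            total += 1
            if (theme in a[article]) == (theme in b[article]):
                agree += 1
    return agree / total

sample = verification_sample(list(range(1, 131)))  # 13 of 130 articles
print(len(sample))
print(percent_agreement(analyst_a, analyst_b, themes))  # 8 of 9 judgments agree
```

In practice the report describes resolving every disagreement through discussion until agreement reached 100 percent; a computed agreement rate like the one above would simply flag which judgments needed that discussion.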
The literature review was also used to identify what the experts viewed as common practices of successful collaborative groups. Such practices were described in 15 of the articles from the literature review and one GAO report that described practices to sustain collaborative efforts among federal agencies. To develop a comprehensive list summarizing the practices described in all of these sources, two analysts independently generated lists based on the commonalities among the practices described in the literature. A third analyst reconciled the two lists, and all three analysts discussed the results and agreed on the following final list of practices:

Seek inclusive representation.
Develop collaborative processes.
Pursue flexibility, openness, and respect.
Establish leadership.
Identify or develop a common goal.
Develop a process for obtaining information.
Leverage available resources.
Provide incentives.
Monitor results for accountability.

For the second objective, to determine the extent to which selected efforts have addressed land use conflicts and improved natural resource conditions, we identified seven examples involving collaborative resource management efforts. The examples were identified using referrals made by experts and citations in the literature. The seven examples we chose to study were judgmentally selected based on several criteria, as shown in table 6, designed to capture groups with (1) a significant amount of federal land involved, (2) participation of multiple stakeholders, (3) locations across the United States, and (4) different types of groups, from nonprofit groups, to an advisory council, to loosely organized information-sharing groups. Although there are many collaborative efforts dealing with water issues, we confined our examples to land management efforts to limit the scope of our work. The examples we selected included both new and experienced groups from rural areas, each made up of multiple participants, including federal agencies.
The groups chosen and the states in which they are located are shown in table 6. To gather information on each group's organization, efforts, and results, we conducted field visits and detailed, semistructured interviews with several key participants of each group and, in some cases, interested parties who were not participating in the group. We obtained related documentation of each group's activities and results and in some instances observed the groups' projects in the field. We did not independently verify data related to the groups' results. In analyzing the groups, we considered conflicts to exist if two or more participants had competing interests and considered conflicts to be reduced or averted if a common solution or interest was identified. For the third objective, we identified challenges associated with the collaborative resource management approach described by the experts in the literature and by members of the collaborative resource management groups we studied. The components of the challenges described by the experts in the literature were identified using the literature review and content analysis explained above. Table 7 describes the challenges. As with the benefits, limitations, and critiques, each statement identified as a challenge in the literature review was assigned to a component. The number of statements that were assigned to each challenge component is listed in table 8. An additional challenge related to sharing experiences with collaboration was identified through semistructured interviews with collaborative group participants. Many participants we interviewed mentioned that aspects of their collaborative group were unique, yet the groups share similar problems and could benefit from sharing experiences with other groups. This challenge reflects the personal experiences of participants working within a specific collaborative group.
To identify how efforts under the Cooperative Conservation initiative address challenges associated with federal land and resource management agencies’ participation in collaborative resource management, we interviewed federal officials from organizations responsible for implementing the Cooperative Conservation initiative, including the Council on Environmental Quality, Office of Management and Budget, Department of the Interior, and Department of Agriculture. In addition, we reviewed Cooperative Conservation documents and agency guidance related to partnerships and Cooperative Conservation. We conducted this performance audit from October 2006 through February 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To understand the purpose and nature of collaborative resource management groups, we selected seven such groups for detailed study. We met with participants of these groups individually or, at times, together to discuss the natural resource problems and conflicts the group was managing and the practices used by the group that enabled them to successfully alleviate conflict and improve resource conditions. To various degrees, the seven groups we studied used the collaborative practices identified by experts that successful groups commonly use. Experts emphasized that while these practices are commonly used by successful groups, the use of these practices does not guarantee success for all groups. Collaborative groups are unique and can succeed or fail depending on the nature of the problem or conflict involved. 
The following describes each of the collaborative groups, the natural resource problems or conflicts they managed, and the extent to which they used collaborative practices. The Blackfoot Challenge (Challenge) is a landowner-based nonprofit group working in the 1.5-million-acre Blackfoot River watershed in Montana. Although it began much earlier, the group was officially established as a nonprofit in the early 1990s, with a board including private landowners and federal and state agency personnel. The participants of the group sought to create an organization that could resolve natural resource issues, such as the reintroduction of threatened and endangered species and their effect on private landowner livelihoods, before they became conflicts. Of the total acres in the watershed, about 57 percent is publicly managed by the Forest Service, Bureau of Land Management (BLM), and the state of Montana. The remaining lands in the watershed are owned by timber companies and private citizens. The area has had a long history of mining, logging, and ranching. More recently, the area's population has grown, bringing increased development and recreation. The ecosystem is also home to threatened and endangered species, including the bull trout, grizzly bear, and gray wolf. Participants of the Challenge identified several natural resource problems and conflicts that the group has managed, and is continuing to manage, including the following: In 2000, the Challenge responded to a conflict that arose over low water flows in the Blackfoot River that threatened the survival of fish and other river species and organisms. The Challenge formed a Drought Response Committee, which has since expanded to address long-term water conservation and recreation issues.
The committee met with the Big Blackfoot Chapter of Trout Unlimited, which had concerns about fish populations and habitat; Montana Department of Fish, Wildlife and Parks; and water users to develop an emergency drought plan for the river. The plan, based on the idea of "shared sacrifice," increased in-stream flow as water users voluntarily reduced the amount of water they withdrew. In 2005, this plan helped save 60 cubic feet per second of water. Riparian habitat for fish in the Blackfoot River is fragmented by culverts, roads, and other infrastructure on both public and private land that block tributaries and creeks flowing into the river. Wildlife agencies have noticed the reduction in fish populations, including the threatened bull trout. Many groups, including federal agencies, fishermen and women, and ranchers, are interested in reconnecting streams that have been blocked to provide better fishing and wildlife habitat opportunities. However, some ranchers are hesitant about making improvements or working with federal agencies. The group has worked with willing ranchers and the local chapter of Trout Unlimited to develop a plan for restoring riparian areas and tributaries across the watershed. Over time, the groups have protected and restored 38 miles on 39 tributaries and 62 miles of riparian habitat. In 2002, the Challenge responded to concerns throughout the valley about increased grizzly bear activity by creating a Wildlife Committee to exchange information and coordinate efforts. The Blackfoot watershed is near three wilderness areas and is considered a prime wildlife corridor for wolves and grizzly bears, whose populations are increasing. Local landowners are concerned about increased human and livestock interaction with such species. The Challenge began a Carcass Pick-Up Program in conjunction with the Montana Department of Fish, Wildlife and Parks; the U.S.
Fish and Wildlife Partners Program; local ranchers; and a waste service to remove dead livestock from ranches to deter bears from searching for such remains. Human-grizzly bear conflicts have been reduced by 91 percent from 2003 through 2006. In 2005 and 2006, the Challenge dealt with two distinct resource conflicts. In the first case, conflict arose over a housing development around one particular community in the watershed that would dramatically affect an important elk migration corridor and increase the community's population, water use, and school enrollment. As a result, there were many different stakeholders interested in the issue. Rather than taking a position on the conflict, the Challenge brought the community together with the stakeholders to find an acceptable alternative. In a second, similar case, members of the Challenge did not take sides on a controversial proposed gold mine near Lincoln, Montana, in the northern part of the watershed. Instead of advocating for a particular solution, the Challenge offered to bring people together to discuss their options. In the end, according to the participants, the state passed a law against mining methods that use cyanide to leach gold from rock, and the proposed mine was ultimately blocked. The collaborative practices used by the Challenge are described in the following sections. The Challenge board and its working committees include broad representation. Members of the board are landowners, land managers, agencies, and others who are represented through working committees and membership. The group has tried to involve every type of stakeholder in the process to provide help or share resources. They realize, however, that some perspectives that should be included may be missing from the board, including absentee landowners who own second homes in the valley. In an effort to provide greater inclusiveness, the board has created at-large members.
As members of the Challenge, federal agency officials are members of the Executive Board and committees. Because the Challenge provides a forum for information sharing, agency officials have an opportunity to hear community concerns. It allows them to know, informally, whether local people support particular actions before decisions are made. Of equal importance, agency officials have an opportunity to communicate accurate information about their respective agencies. This helps dispel rumors and reduce doubt, uncertainty, and distrust between the community and the agencies, and it provides a forum for agency officials to make participants aware of their agencies' limitations early in the process. Although federal employees serve as members of the Executive Board, a nonprofit board, the Forest Service member serves as a nonvoting member, while the BLM and U.S. Fish and Wildlife Service employees serve as voting members. The group uses an "80-20" rule, whereby the group concentrates its efforts on the 80 percent of the issues it can agree upon and does not force consensus on the 20 percent that it is unable to agree upon. This strategic approach allows the group to first work on solutions to problems that are less controversial and more likely to succeed, thereby building common ground and trust among participants. The Challenge does not advocate any one position because it believes that if it did, it would be unable to act as a bridge between two sides of an issue. Instead, the group chooses to facilitate dialogue and information sharing. This process helps to promote community dialogue between private landowners and public agencies in an attempt to resolve issues before they become major conflicts. Members of the Challenge attributed much of their success as a group to the time they have taken to develop trust among members.
Participants of the Challenge include individuals who are respectful of diverse views, committed to the effort, and willing to negotiate and build consensus. One member described the group's common approach as polite, thoughtful, kind, and respectful. According to participants, a collaborative group needs the right leader, and the Challenge has had several committed, talented community leaders over the years. They view the right leader as someone who is a local opinion leader and who has the respect of a majority of the community. A participant described one of the reasons for the Challenge's success as inspired leadership, which involves being able to focus the group on its common interests. Hiring an Executive Director was also a crucial step for the Challenge in raising funds and organizing the group, because the group could accomplish only a limited amount on a volunteer basis. Concern for maintaining a certain quality of life in the area prompted landowners, public agencies, and other community leaders to begin working together on ways to manage the watershed. The group's mission is to "coordinate efforts that will enhance, conserve and protect the natural resources and rural lifestyles of Montana's Blackfoot River Valley for present and future generations." As early as the 1970s, private landowners and public agency officials worked together to resolve conflicts, or potential conflicts, among various users within the watershed. For example, in an effort to protect and restore fish and wildlife habitat along the river corridor, several public agencies, including BLM, the Forest Service, U.S. Fish and Wildlife Service, and state wildlife and parks agencies, attempted to purchase conservation easements from private landowners. The landowners made the agencies aware that they were each asking to acquire land, and the agencies and landowners started talking about their common goals.
In the 1980s, a conflict developed between recreationists and private riparian landowners over access to the river. To access the river, recreationists had to trespass on private lands. In response, a local timber company joined with BLM and the Montana Department of Fish, Wildlife and Parks to allow limited access across private land to use the river if the agencies would manage the activities and effects on resources. The Challenge relies on the scientific expertise and information provided by the resource managers from the federal and state agencies. To make decisions about specific resource management problems, the group has a standard set of committees that include knowledgeable agency and community members. One committee in particular, the Drought, Water Conservation, and Recreation committee, monitors snowpack, stream flow, and drought conditions, as well as recreation use of the river. The Challenge has recently become involved in monitoring and developing water quality standards for streams in the watershed because the water quality data needed to analyze and improve conditions in the watershed were inadequate. It also works with university researchers to conduct studies. In the past, the Challenge has operated on about $50,000 per year, receiving funding from private donors and foundations. The group recently received a $100,000 award for innovations in governance from the Ash Institute for Democratic Governance and Innovation at Harvard University. The group's resources are used to leverage federal funds by coordinating private projects with federal projects. For example, as the Forest Service and BLM work to restore parts of a stream on their respective lands, the Challenge coordinates the projects and adds its own resources to conduct work on private stretches of the same stream, thereby providing greater stream restoration than if the agencies had conducted individual projects.
In the Blackfoot Valley, conservation easements serve as an incentive for conservation activities. Through many partners, more than 100 conservation easements on more than 90,000 acres of private lands have been purchased to keep agricultural lands and grasslands open and available for ranching and wildlife use. Conservation easements are being purchased and donated to the following organizations: Forest Service; U.S. Fish and Wildlife Service; Montana Fish, Wildlife and Parks; The Nature Conservancy; Montana Land Reliance; and Five Valleys Land Trust. For the most part, the Challenge uses monitoring data that the agencies collect, although in specific cases, the group and its partners are monitoring the results of their projects. In particular, the local chapter of Trout Unlimited led the development of a process to prioritize tributaries and stretches of the river to restore and monitor results. In addition, the Montana Department of Fish, Wildlife and Parks monitors fish populations in the river, which indicate habitat improvement and water quality conditions. The Challenge recently began monitoring water quality. The Cooperative Sagebrush Initiative (Initiative) is a partnership of landowners, communities, local working groups, conservation groups, industries, and tribal, state, and federal agencies that started in 2006 to focus on conservation of the western sagebrush landscape. The effort encompasses the sagebrush range, which spans 11 western states, and involves creating incentives for conservation through mechanisms such as a system to trade credits for conservation activities. The group incorporated into a nonprofit organization in 2007 and is still organizing and planning the effort, so it has not yet conducted conservation activities.
In 2007, the group solicited proposals for projects designed to demonstrate how the work could be done and incentives could be developed, and it has endorsed three proposed projects that encompass over 1 million acres of sagebrush habitat in four states. In the mid-1990s, the declining status of two sage grouse species (the Gunnison sage grouse and the greater sage grouse) triggered regional concern for the health of the sagebrush ecosystem. In 2000, the Gunnison sage grouse was added to the U.S. Fish and Wildlife Service's list of candidate species to be considered for a threatened or endangered listing under the Endangered Species Act, and the greater sage grouse was the subject of three petitions in 2002-2003 seeking listing throughout its range. The U.S. Fish and Wildlife Service found that a listing was not warranted for the greater sage grouse in 2005, or for the Gunnison sage grouse in 2006. The sagebrush range is also home to wildlife valued for hunting, such as mule deer, as well as scenic attractions, energy resources, and ranching, all of which could be affected by declining greater sage grouse populations or a listing of one or more of the species that are dependent on the sagebrush ecosystem. The primary natural resource problem that the Initiative is focused on is the decline of the sagebrush range and the associated decline in greater sage grouse populations. These declines have been attributed to factors such as increased oil and gas exploration and development in the West, some ranching practices, and climate. Although the sage grouse species were not listed when originally petitioned, there are three lawsuits that could affect the legal status of the sage grouse. The states, energy companies, ranchers, and developers are concerned that a listing decision would limit their activities in sagebrush habitat. The collaborative practices used by the Initiative are described in the following sections.
The Initiative was started when representatives of a nonprofit organization called the Sand County Foundation saw an opportunity for oil and gas companies to become involved in stewardship of the sagebrush ecosystem and help with key issues hindering sage grouse conservation in the West that were identified in a report sponsored by the Western Association of Fish and Wildlife Agencies. These key issues included creating an organizational structure for conservation efforts, establishing leadership to coordinate the efforts, and finding resources to fund the efforts. Representatives from the Sand County Foundation and the U.S. Fish and Wildlife Service initiated discussions with representatives from BLM, the U.S. Geological Survey, and Encana Oil and Gas to develop ideas for a collaborative conservation effort that spanned the range of the sage grouse. The partners believe that the effort should be broad, inclusive, and representative and, therefore, include key state agencies; counties; tribes; a wide spectrum of landowners, ranchers, and citizens; a diverse mix of companies across multiple industries; a good representation of local, regional, and national conservation groups; and other federal agencies such as the Department of Defense. Potential partners in the Initiative were identified through conversations among the core group who initiated the effort. Subsequently, invitations to participate were sent out broadly to individuals and the list of potential partners grew through further recommendations. At the second major general meeting of the group in December 2006, over 80 people attended, including representatives from federal and state agencies, energy companies, and nongovernmental organizations, as well as private landowners. 
After its initial efforts to gain participation, the Initiative formed a partnership and outreach working group responsible for identifying and communicating with critical partners for the Initiative, as well as developing an outreach strategy to inform key audiences of the Initiative's purpose and achievements. Partners we spoke with noted that they believe they have good representation from all of the necessary interests, although some noted that the tribes have not been involved thus far even though they have been encouraged to participate.
Develop a Collaborative Process
Decisions within the Initiative are made by consensus, and meetings are facilitated by a staff member from the U.S. Institute for Environmental Conflict Resolution. To accomplish work, the Initiative has developed a strategic plan that includes four working groups: (1) a partnership and outreach group to ensure that the Initiative includes all stakeholders and reaches out to underrepresented interests; (2) an incentives group to work on incentive mechanisms for the participants; (3) a projects group that identifies and prioritizes conservation projects; and (4) a funding group that is developing a banking structure for the group. The Initiative is governed by a 12-member Partnership Council that includes representatives from the Cooperative Sagebrush Steppe Restoration Initiative, Encana Oil and Gas, EnerCrest Corporation, Environmental Defense, Idaho Cattle Association, Idaho Department of Fish and Game, National Cattlemen's Beef Association, Peabody Energy/Powder River Coal, Shell Oil, Western Governors' Association, Sand County Foundation, Utah Department of Natural Resources, Vermillion Ranch, and Western Association of Fish and Wildlife Agencies. In addition, there are nonvoting federal advisory members on the Partnership Council from the U.S. Geological Survey, U.S. Fish and Wildlife Service, and Natural Resources Conservation Service.
According to some of the partners, the group views transparency as the best way to deal with critics and skeptics and, therefore, has invited everyone to participate. By having an open process for discussion, the group has been able to respectfully discuss different perspectives even though the members do not always agree. As one participant described it, there is more to the process than sitting around singing "kumbaya." In addition, the group posts most of its information and documents on its Web site and opens its meetings and conference calls to any stakeholders who want to participate. Several participants attribute the initial success of the group to the visionary leadership of some of the group's founders, who saw an opportunity for conservation in the concurrent trends of increased oil and gas development in the West and decreasing sagebrush habitat. One of the participants noted that the group has benefited from several different leaders who have the ability to share a vision with others and motivate them to work toward it by focusing on problem solving and solutions. The Initiative partners came together around the goal of conserving sagebrush habitat, with the focus on preventing the need for a listing of the greater sage grouse under the Endangered Species Act. The partners have identified a common goal, which is to "result in the long-term, verifiable recovery of the greater sage grouse and improvement of other species of concern in the sagebrush range." Some participants noted that the Initiative would not exist without the threat of a listing because each of the partners has different concerns over the need for or result of a listing.
For example, conservation organizations want to maintain the health of the species, industry is concerned over increased limitations on energy exploration and development in sagebrush habitat that would be brought about by a listing, and ranchers are concerned that a listing would restrict their activities on their private land as well as on the public land associated with grazing leases. The Initiative has utilized the expertise of scientists from the state wildlife agencies and the federal agencies to guide various aspects of the effort and has used existing sagebrush habitat data from the U.S. Geological Survey and sage grouse conservation studies completed by the Western Association of Fish and Wildlife Agencies across the 11-state sage grouse range. In 2006, a panel of sage grouse scientists, representing 10 state wildlife agencies, the U.S. Fish and Wildlife Service, BLM, Natural Resources Conservation Service, and the Forest Service, convened to identify priority areas of conservation and types of conservation efforts that would benefit the sagebrush range. In addition, to mentor applicants who have applied for conservation projects under the Initiative and help them develop the details of their project, one of the working groups has been charged with recruiting a Science Advisory Council that will consist of scientists with expertise in sage grouse biology, range management, landscape ecology, and conservation biology. Furthermore, in February 2007, the Initiative sponsored a workshop to explore how a conservation credit trading system for the sagebrush ecosystem might be defined. This workshop brought together sage grouse and sagebrush scientists as well as experts familiar with other credit trading systems such as wetland banking programs, endangered species conservation banks, and carbon offset programs.
The Initiative's early efforts have been funded by some of the member organizations, such as the Sand County Foundation, National Fish and Wildlife Foundation, and Encana Oil and Gas. The funds generated thus far have paid for meetings and planning activities, but participants anticipate that the Initiative will be able to raise sufficient money for demonstrating conservation efforts. As the effort begins to implement conservation projects, participants noted that funding may come from industry, federal programs, or the conservation credit trading system. Funding for the demonstration projects will potentially be provided by a mix of the partners, including the federal agencies and oil and gas companies. According to the group, the Initiative's partnership is built upon using incentives for landowners, local communities, and private industry to invest in habitat restoration and other conservation actions. The incentives working group has focused its efforts primarily on two incentives. First, the Initiative views the creation of a conservation credit trading system as a potentially significant economic incentive for landowners to engage in voluntary conservation efforts. This system would allow landowners or others to earn credits by implementing sagebrush conservation activities. These credits could then be sold to energy companies or others who may desire them for a variety of purposes, including mitigating the effect of development projects elsewhere in sagebrush habitat. The concepts behind the conservation credit trading system are currently in development, and many of the participants acknowledge that there are significant inherent difficulties in designing such a system, particularly one that will stand up to scientific scrutiny. For example, the sagebrush ecosystem is highly heterogeneous, with varying levels of habitat quality across the range. This creates challenges in determining the value of a credit and how that value may vary from location to location.
However, several of the participants we spoke with believed this credit trading system was crucial to the overall Initiative and remained optimistic that it could succeed. The second type of incentive that the Initiative is working on involves obtaining various assurances from the Department of the Interior that by implementing voluntary sagebrush ecosystem conservation efforts, participants would not face additional costs or requirements if the greater sage grouse or other species dependent on the sagebrush ecosystem became listed under the Endangered Species Act. For example, if a rancher improved or created habitat for sage grouse on his or her land and then the species was listed under the Endangered Species Act, the rancher could be subject to restrictions on grazing practices that might harm the sage grouse by damaging its habitat. The Initiative developed and submitted five specific recommendations for actions that it believes Interior could take to provide such assurances. According to one partner, Interior has indicated that the group will receive a response soon. The group has not yet initiated any conservation projects; however, the group issued a request for proposals in May 2007 for demonstration projects designed to measurably improve sagebrush habitat and test the concept of a conservation credit trading system. The request for proposals included provisions for monitoring the projects. Some participants noted that monitoring would be a critical component of any conservation projects and conservation credit system. The Eastern Upper Peninsula Partners in Ecosystem Management group was started in 1992 to collaborate on ecosystem management across land boundaries in the eastern Upper Peninsula of Michigan. Over time, the group evolved into an information-sharing group to coordinate land management, but has been relatively inactive in recent years. 
Members of the group include state and federal government agencies, a conservation organization, and industrial (timber) landowners who together manage two-thirds of the four million acres of the eastern Upper Peninsula. This area includes the 895,000-acre Hiawatha National Forest, 95,000-acre Seney National Wildlife Refuge, 73,000-acre Pictured Rocks National Lakeshore, state land, and privately owned land. Historically, much of the eastern Upper Peninsula was managed for timber harvest, and most of the region had been cut by the early 1900s. Loggers harvested pine in the 1800s and shifted to hardwoods in the 1900s as the pine was cut over. The eastern Upper Peninsula is once again largely forested with second-growth forests including aspen, white birch, and jack pine. In recent years, many of the timber companies have been selling their lands. 
Natural Resource Problems 
According to group members, there are few contentious issues causing conflict among land managers and owners in the eastern Upper Peninsula, but the group saw an opportunity among the large landowners to cooperate in a manner that could enhance ecosystems across the landscape. Many members note that the primary outcomes of the group have been educating partners with information that they can use in their management, sharing information among the partners, and building relationships. Particular examples of the Eastern Upper Peninsula group’s coordinated efforts include the following: Most of the eastern Upper Peninsula is second-growth forest, with trees of similar age. Some members of the group sought to establish a mix of trees of different age classes across the landscape to provide healthy habitat for species, in particular, neotropical bird species such as the golden-winged warbler, that use the forests. However, the forest companies that owned land in the eastern Upper Peninsula were focused on commodity production rather than habitat health. 
The Eastern Upper Peninsula group provided opportunities to show the industrial landowners that they could accommodate neotropical birds on their land without affecting their financial bottom line. Coordinating with neighboring landowners to obtain a mix of vegetation over a larger area reduced the need for any one landowner to achieve all habitat objectives on his or her land alone. To support efforts to manage their land in a complementary manner, members of the group recognized the need for broad-scale mapping that could be used in looking at the overall landscape. As a result, the group coordinated to map and categorize land units in the region into areas with similar physical and biological characteristics, called land type associations. The land type associations have been used to varying extents by the partners as a planning tool and for some decision making. The group was able to reach consensus on the descriptions of the land classifications, but was unable to agree on the management implications of the ecological descriptions, such as the need to use fire to attain a particular age variation in the trees. The partners were concerned that documenting management implications would constrain the activities they could conduct on their land. Many of the Eastern Upper Peninsula group partners have worked together on individual efforts to enhance their positive effects on the landscape, discuss compatible management, or preserve land. Examples of such efforts include the following: Through the relationship built with the Eastern Upper Peninsula group, The Nature Conservancy and a timber company were able to reach agreement on access and save a wetland area from being built over by a road. The timber company wanted to gain access across a nature preserve owned by The Nature Conservancy. The Nature Conservancy originally denied access, and the timber company threatened to build a road across a wetland on its land. 
Through the relationship developed through the Eastern Upper Peninsula group, these organizations were able to discuss the issue, and The Nature Conservancy agreed to allow access across its land. A National Park Service official noted that the Eastern Upper Peninsula group helped the National Park Service open a dialogue with the state and timber companies to discuss forest management issues. Pictured Rocks National Lakeshore has a 39,300-acre buffer zone of land within its boundary that is predominantly owned by the state and timber companies. According to a former National Park Service official, the National Park Service has an interest in maintaining healthy ecosystems in this buffer zone, while the state’s and timber companies’ interest is focused primarily on using the land to generate revenue from harvesting timber. As a result of the relationship that The Nature Conservancy developed with state and federal agencies and timber companies, The Nature Conservancy negotiated a conservation easement on 250,000 acres of private timberland. The easement will allow some forestry on the land, but in a manner that is compatible with a nearby Nature Conservancy preserve. The collaborative practices used by the Eastern Upper Peninsula group are described in the following sections. The Eastern Upper Peninsula group effort began when staff from the Michigan Department of Natural Resources recognized the need to talk with the landowners who shared their boundaries and subsequently convened a meeting with the Forest Service, National Park Service, U.S. Fish and Wildlife Service, and The Nature Conservancy. According to these partners, after they had been meeting for a period of time, they recognized the influence of private forest land in the eastern Upper Peninsula landscape. 
The group members debated whether to bring private timber companies that owned or managed land into the partnership because they were commodity-based and would have different goals and objectives for the land than the agencies. Ultimately, according to the members, they decided to invite timber representatives into the group. One timber industry official noted that his company was initially interested in the Eastern Upper Peninsula group because participating in a collaborative group could help it attain certification for sustainable forestry practices. More recently, the timber companies have had less interest in the group, in part because many of them have been selling their land in the eastern Upper Peninsula. The participants stressed that the Eastern Upper Peninsula group is not a decision-making group and therefore does not have an established decision-making process. However, the group has used consensus to identify issues that it would like to work on. The group has no protocols, bylaws, or memorandums of understanding. The members share information and, as partners see the need, form subgroups to work on particular projects, with people joining in as they have the interest and time. Under this arrangement, each entity retains its own individual objectives and decision-making process that it will go through to determine what work it will undertake as a part of the group’s efforts. Some members noted that the informality of the group has allowed them to avoid issues with the Federal Advisory Committee Act, which establishes rules for federal advisory committees. According to the Eastern Upper Peninsula group partners, the participants generated trust because early in the process they agreed to respect the missions of each of the individual organizations and to not change any agency’s or organization’s mission or objectives. Participants describe trust as the most significant outcome of their efforts. 
When the group first began meeting, each of the partners described its organization’s mission, which helped the members gain an understanding of one another. As a result of the trust generated by the group, members have been able to openly share information that they probably would not have shared otherwise, such as the location of timber harvests. Some participants noted that through the open atmosphere generated by the group, potential conflicts are often eliminated before they become conflicts. According to some of the members, the group was pulled together by a few key people who were all managers and able to make decisions. Everyone in the initial group was a manager and had good decision-making skills, an ability to voice his or her opinion, and knowledge of the relevant governing laws, authorities, and policies. Some members noted that different people emerged at various times to bring the group together on different issues and move the group forward. One of the original members coordinated the group and kept it going between 1992 and 2006. When this person assumed a different position within his agency and was no longer able to coordinate the group, it became less active and does not currently have a coordinator. Some members noted that there were still natural resource issues, such as invasive species, that the group could continue to work on and that the Eastern Upper Peninsula group effort could be improved by having a leader dedicated to the group who had coordination and facilitation skills. The Natural Resources Conservation Service has not previously been actively involved in the Eastern Upper Peninsula group, according to an official from the agency, but coordinates the Upper Peninsula Resource Conservation and Development Council—a congressionally designated, nonprofit group that identifies and undertakes resource management and community development projects. Some of the council’s goals overlap with those of the Eastern Upper Peninsula group. 
Consequently, the council coordinator, who is a Natural Resources Conservation Service employee, has offered to facilitate and coordinate the group’s meetings in the future, starting in early 2008. A Natural Resources Conservation Service official noted that this may supply the impetus needed to get the Eastern Upper Peninsula group active again and working on issues important to the group members. The Eastern Upper Peninsula group members agreed that their goal is “to facilitate complementary management of public and private lands, for all appropriate land uses, using a landscape-ecological approach to sustain and enhance representative ecosystems in the Eastern Upper Peninsula of Michigan.” According to one of the group’s founders, the Eastern Upper Peninsula effort was originally envisioned as a means to coordinate land management strategies and activities among neighboring landowners to achieve overall ecosystem goals. However, after the group began meeting, it became apparent that it would not be able to agree on a common management approach given the different missions of each of the partners. Efforts by some members to get the partners to agree on common management practices and strategies were met with resistance. Consequently, the group determined that it would function as an information-sharing group and not a decision-making body. The Eastern Upper Peninsula group has placed a high priority on developing and sharing information. The group has worked together to map and describe land type associations in the eastern Upper Peninsula, which some members noted have been useful in making landscape-scale decisions. Members of the group stated that any information developed by the group is made available to other members without restrictions or protocols. For example, land type associations were developed for private lands adjacent to the national forest and were used by small foresters to help with their planning. 
The Eastern Upper Peninsula group has not officially sought funding because, according to group members, it decided that it did not want to receive and manage funds. Resources for the group came from the individual partners as they were needed and available. For example, some of the timber company partners published a guide on threatened and endangered species using private funds. The Eastern Upper Peninsula group does not use any particular incentives to achieve its goals. The Eastern Upper Peninsula group has not established any formal mechanisms to monitor natural resources, but has periodically assessed the need for the group to continue. According to one member, monitoring natural resource improvements made by a group is possible only if the group has joint projects, which is not the case for this group. Furthermore, the group has no resources to dedicate to monitoring. However, group members noted that they assessed the value of the group every 2 or 3 years by evaluating their progress toward their goals and discussing among the members whether the effort was still needed. In addition, every 2 to 3 years the group would discuss and set new goals. The Malpai Borderlands Group is a nonprofit group in southeastern Arizona and southwestern New Mexico working to restore fire as an ecological process to the rangelands and keep a working landscape based on natural resources—primarily, livestock grazing. The Sonoran and Chihuahuan deserts in this area have historically supported ranching, but also support numerous species, including threatened and endangered species such as the New Mexico ridge-nosed rattlesnake, jaguar, and Chiricahua leopard frog. The group’s planning and activities encompass approximately 800,000 acres including public lands managed by the Forest Service, BLM, and the states of New Mexico and Arizona, as well as private lands held by ranchers and the nonprofit Animas Foundation. 
The group started informally, meeting to discuss problems the neighbors faced in ranching and eventually bringing in interested environmentalists who were concerned about subdivision and development of the land, including The Nature Conservancy. The group incorporated in 1994 to more actively pursue its goals. In working to restore fire to the landscape, the Malpai group has worked to resolve related problems. Wildland fires can provide some beneficial effects to ecosystems that are adapted to fire, such as restoring vegetation and improving habitat. Some landowners view fire as beneficial, but others do not want to use fire to manage land and vegetation. For example, Arizona state trust lands are managed primarily for ranching and to generate income for public schools in the state. As a result, the state puts out all fires on these lands and generally does not use fire as a management tool to promote the growth of grasses and reduce fuels such as shrubs and bushes, although it works with the Malpai Borderlands Group to set prescribed fires. On the other hand, the Forest Service, BLM, and some private ranchers want to burn their grasslands to reduce shrubs, such as creosote and mesquite, and to promote grasses. The group has worked to educate landowners about the benefits of fire and has worked with the different landowners to set and burn several large fires. The group has succeeded in reintroducing fire to a total of about 69,000 acres. The effects of fire on threatened and endangered species are mixed and create difficulties for using fire to restore vegetation. While restoring fire to an ecosystem that is fire-adapted helps support habitats and species in the long term, using fire on the landscape in the short term can harm threatened and endangered species, such as the ridge-nosed rattlesnake, or food sources for other threatened and endangered species, such as the agave plant used by lesser long-nosed bats. 
The group worked to get the most recent scientific evidence from researchers working on the species to use in their plans to restore fire, both on public and private lands. More recently, the group has begun working on a habitat conservation plan with the U.S. Fish and Wildlife Service, which would identify the activities that could be undertaken by the group without triggering concerns about “taking”—killing or harming—a threatened or endangered species. Resource overuse can occur during drought. During an extended drought over the last decade, ranchers in the Malpai area faced a decision to sell off their herds or keep them on the land and potentially overgraze it. To avoid this outcome, the group and the Animas Foundation—a nonprofit working ranch operating within the group’s boundaries—established a grassbank on Animas Foundation lands in New Mexico. Ranchers with distressed lands have used the grassbank for 3 to 5 years. Continued drought has made this program less viable in the last few years as the drought has extended over a broader area. Development of open land and loss of the resource and open space occur when ranchers sell their lands. Private landowners can sell their land at any time, but are more likely to sell during economic hardship. Yet ranchers and others have an interest in maintaining open lands for different purposes—livestock grazing, habitat for species, and amenities such as recreation or scenic views. The group worked with ranchers in the area who did not want to sell, purchasing conservation easements for their lands that allowed them to stay in ranching despite economic pressure to sell the land. The group has succeeded in protecting 77,000 acres of land using conservation easements. The group worked with an individual rancher who provided habitat for a threatened species—the Chiricahua leopard frog. As a result, the U.S. 
Fish and Wildlife Service provided the Malpai Borderlands Group with a safe harbor agreement that protects the owner, and any other landowners who wish to participate, should the species be harmed by typical ranching activities. The collaborative practices used by the Malpai Borderlands Group are described in the following sections. The Malpai Borderlands Group began informally as a discussion group that later incorporated as a nonprofit. The original members of the group were self-selected ranchers and interested environmentalists who were associated with members of the group. When the Malpai Borderlands Group incorporated in 1994, this discussion group formed the original board. Many of the members of the Malpai group are landowners in the area, but some are not. The board includes a member of The Nature Conservancy and retired federal employees who were key in helping the group get started and work with the agencies. Board meetings are open and the group invites a wide range of people to attend. It also works with its critics on various issues; however, it has determined not to change the membership of the board to include outside parties because of concerns over control of members’ private lands. The members of the group are particularly concerned about the need to recruit young people to the group and board—some are leaving ranching altogether, and those who remain often do not attend meetings. The group is managed by a nonprofit board, which has bylaws and organizational structure. According to some members, the group has succeeded because it is run by the board, and while the agencies have joined the effort, they do not direct it. This is important because the private landowners make decisions about what actions to take on their own lands. The group coordinates closely with federal and state agencies that manage lands within the Malpai planning area. 
Until the last few years, two of these agencies—the Natural Resources Conservation Service and the Forest Service—dedicated an employee to be a liaison with the group. When the Natural Resources Conservation Service liaison retired, a new person was selected with the help of the group; however, when the Forest Service liaison retired, the agency and the group decided not to fill that position and the agency is instead trying to have more employees work with the group. The group holds open meetings and invites a wide range of participants to talk about management issues. It works by consensus, trying to work problems out informally first. For example, in the mid-1990s, a member of the group photographed a live jaguar in the United States. Members participated in the discussions over protection of the species and designation of critical habitat for it in the United States—specific areas that may be essential for the conservation of the species. The group invited a key scientist to visit and assess the habitat, and as a result, members believe that what they are doing to restore the habitat and keep it open is the best protection for the habitat. The Malpai group also established a fund to reimburse ranchers for any livestock killed by jaguars. While members of the group disagree with the need for the federal government to designate critical habitat for the species in the United States, which may have an effect on the activities that they can conduct on their land, they invited environmental groups to their board meetings to discuss protection of the species under the Endangered Species Act. According to the Center for Biological Diversity, a member attended a meeting but the groups disagreed on how to handle the situation. The U.S. 
Fish and Wildlife Service listed the jaguar as endangered outside of the United States in 1972, prohibiting the import of jaguar pelts into the country, and listed it as endangered within the United States under the Endangered Species Act in 1997. Recently, the Center has sued the U.S. Fish and Wildlife Service to compel the agency to develop a recovery plan and designate critical habitat for the jaguar. Members of the Malpai group attribute their success to the leadership of several individuals who brought vision, commitment, and organizational skills to the group. They also credited federal agency officials, both in Washington and in the field offices, who recognized the group’s potential and gave it the opportunity—and resources, including people—to work. According to members, leadership and organizational skills from The Nature Conservancy were also key to getting foundations interested in the group’s efforts and getting the group incorporated as a nonprofit. Most importantly, key members of the ranching community had the vision to join together—when most ranchers prefer to work as individuals—and other farsighted ranchers joined them. Members attribute this attitude to a particular individual whose philosophy was to protect the land and those who work it. The Malpai group’s goal is to “restore and maintain the natural processes that create and protect healthy, unfragmented landscape to support a diverse, flourishing community of human, plant, and animal life in our borderlands region. Together, we will accomplish this by working to encourage profitable ranching and other traditional livelihoods which will sustain the open space nature of our land for generations to come.” When lands in the area started selling, ranchers became concerned about future subdivision and development of ranchland and the potential loss of their ranching livelihoods and joined together to protect both. Another concern was the lack of fire. 
As part of its decision-making process, the Malpai Borderlands Group seeks to gather and use scientific information relevant to the problem its members are managing. The group has a science coordinator who manages several ongoing research efforts on lands in the Malpai planning area and a Science Advisory Board made up of more than 40 experts in rangeland science; this group provides advice about research efforts, monitoring, and management activities. These efforts include a program of research to study the effects of wildland fire on threatened and endangered species such as the lesser long-nosed bat and ridge-nosed rattlesnake. The science program also includes 9,000 acres of research plots established by the Forest Service’s Rocky Mountain Research Station to study different revegetation treatments in areas excluded from grazing and 12 watersheds to examine the sediment runoff resulting from burning differently sized areas and different amounts of vegetation. The group funds research and also partners with outside researchers from federal agencies, such as USDA’s research stations, and universities. In addition, the group sponsors an annual scientific conference on topics related to its interests and management activities. Because the group fosters a cooperative relationship among landowners and agency staff to manage a broad landscape, it has been able to raise more money for its conservation efforts. Private fundraising groups and individuals provide funding to groups that can achieve on-the-ground resource improvements and results. The group received start-up funds, which was important because it let the group buy basic office equipment such as computers, printers, and supplies. Over the years, the group has met at one of the ranch houses, in an addition built for the meetings. The group continues to get grants from nonprofit groups such as the National Fish and Wildlife Foundation and receives grants for research and personnel support. 
Most of the members have been involved since the inception of the discussion group and acknowledge the heavy time commitment that comes with being part of the group. The members see the benefit of participating because as a group they are able to accomplish activities that they would not do as individuals. For example, prior to the establishment of the group, one rancher could not coordinate with the agencies to burn vegetation on both his land and on the agency’s adjacent land. The group used to meet monthly, but now meets less often. Because the distances between ranches are great and require considerable travel time, the group conducts business by telephone conference and e-mail and holds quarterly board meetings in person. Incentives used by the group include a grassbank, which allows ranchers to temporarily move their cattle from their own drought-damaged land to healthier grasslands on the Gray Ranch owned by the Animas Foundation. In exchange, the Malpai Borderlands Group receives a conservation easement for the development rights to the private property on the ranch. These conservation easements are different from others used by The Nature Conservancy and federal agencies in that they contain a clause stating that if the rancher loses access to his or her federal grazing allotment through no fault of his or her own, the easement is void and the land can then be sold for development. The group has worked with the U.S. Fish and Wildlife Service to manage the threatened and endangered species on privately owned ranchlands in the group’s planning area. In one case, the group received a safe harbor agreement to protect one of the last remaining populations of Chiricahua leopard frogs that were residing in a rancher’s stock pond. 
The agreement allows the rancher, who had trucked water into the pond during drought years to keep the frogs alive, to manage the stock pond for livestock purposes without the threat of enforcement action should any of the frogs die because of those actions. Other ranchers can participate in the safe harbor agreement by signing a certificate of inclusion with the Malpai Borderlands Group and thereby receive the protections of the agreement. The group is also developing a habitat conservation plan for the area in order to implement grassland and ranch management activities in areas where there are threatened or endangered species. For example, this habitat conservation plan will allow the use of fire in certain conditions and identify certain restrictions to protect the threatened ridge-nosed rattlesnake and several other species that might be harmed or killed by the fires. This will permit ranchers to conduct activities provided the restrictions are followed. As part of its management efforts, the group conducts range monitoring across the lands in its planning area and maintains more than 290 monitoring plots for this purpose. It pays a contractor to visit the plots to determine the condition of the pastures and the availability and use of grass by livestock or wildlife. According to members, these monitoring efforts are useful for judging the condition of grasslands in the vicinity of the plots, but do not gauge overall rangeland conditions. The group is working on a method for monitoring range conditions more broadly across the whole planning area. The group has also sponsored species counts for some of the threatened and endangered species on lands in its planning area. This work has enabled members to better understand where species are located and to limit activities in those areas. 
The Onslow Bight Conservation Forum (Forum)—named for the shallow crescent-shaped bay that makes up much of the coastline in southeastern North Carolina where the group is organized—is an information-sharing group organized to help protect and restore the unique coastal environment of the area and associated species. The Onslow Bight region, as with other parts of coastal North Carolina, is developing quickly and the rural nature of the area is rapidly changing. Because of its unique makeup, the area is a hotspot for endemic species—those that can only be found in that area—such as the Venus flytrap. This area of North Carolina contains both longleaf pine habitat favored by the endangered red-cockaded woodpecker and unique wetland habitat such as pocosins, wetlands that form on raised ground where large amounts of peat accumulate. The group, formed officially in 2001, originally began as a way to help the Marine Corps manage encroachment issues around its installations and to manage habitat for threatened and endangered species, in particular the red-cockaded woodpecker. The group has since expanded its vision to include aquatic habitat and conservation of land along the coast. The members of the group represent the large blocks of publicly owned lands, such as the North Carolina Wildlife Resources Commission game lands, the Croatan National Forest, Marine Corps Base Camp Lejeune, Marine Corps Air Station Cherry Point, and several land conservation trust groups. In addition to overall biodiversity conservation, one focus of the group has been to study potential corridors for wildlife to migrate between these public lands. The natural resource management problems and conflicts that the Forum has managed revolve around land development and conservation: Development of lands eliminates habitat for different species and causes the public lands to become islands of biodiversity, which can affect management of these lands. 
In particular, development can harm endangered species such as the red-cockaded woodpecker. Agencies with populations that need to be protected are interested in expanding habitat to help protect the species and ease the pressure on their lands. Yet, private landowners are free to sell and develop their land. The Forum developed a habitat protection plan to identify the location of important habitat for threatened and endangered species and has discussed and agreed upon areas that are a priority for preservation and protection. This information has helped the agencies and land trusts coordinate and prioritize land acquisition and has prevented them from competing for the same lands. Since 2001, the Forum partners have together acquired about 57,000 acres of land from willing sellers. Encroachment near military installations creates safety hazards as well as complaints from neighboring communities about noise, dust, and other side-effects of training exercises. The military has the incentive to use its lands for training purposes and to have large buffers between its installations and communities. Yet, communities and others have incentives to develop lands for other purposes. Through the Forum, the Marine Corps representatives can work with other members to identify lands that have compatible uses with the military’s needs and also meet habitat purposes. Military funds can then be used to help acquire conservation easements to the land. Habitat fragmentation occurs with increased development, particularly with the growing number and size of roads, which affects large species and increases potentially fatal vehicle collisions with wildlife. Private landowners have the right to sell and develop their land and zoning allows for building. However, hunting, environmental, and other groups have an interest in protecting species such as the black bear, which needs land to roam. 
The Onslow Bight area supports a large population of bears and the number of collisions with wildlife in the area is increasing. The group has identified areas that road construction should avoid and the need for more wildlife crossings in new road construction. Historically, the longleaf pine and pocosins of the Coastal Plain depended on fire as an ecological process. Fire has been suppressed for years, even though the health of the vegetation depends on it. The agencies and land managers have an interest in burning their lands to restore their health; however, new community members do not like smoke and complain about burning programs. The group is working with The Nature Conservancy on a project started in 2005 called the Onslow Bight Fire Learning Network/LANDFIRE application project to develop and support a burn program to help restore habitat. The Nature Conservancy is also developing a memorandum of understanding (MOU) with the Forum to share equipment and personnel. Counting burns on agency lands as part of the fire programs, the members of the Forum burn about 60,000 acres of land a year. The collaborative practices used by the Forum are described in the following sections. The Forum includes a range of participants who manage land or are advocates for land conservation. The Forum began with a network of land managers and federal and state agency officials, and members have discussed how broadly to advertise for potential members; for now, they have determined to keep the membership relatively narrow. Two land conservation organizations—North Carolina Coastal Federation and North Carolina Coastal Land Trust—have representatives in the Forum. Members also include representatives from the North Carolina Natural Heritage Program, which conducts inventories for rare species and high-quality habitat in the state, and the Wildlife Resources Commission, which manages state lands for wildlife. 
Another state agency, the Department of Transportation, has signed on as a member because it acquires lands to mitigate the destruction of wetlands or other lands for road building activities. It is also interested in identifying where to put underpasses for wildlife to safely cross roads; however, members indicated that agency representation has been infrequent. In addition to the Marine Corps, other federal agencies that are involved in the Forum include the Forest Service, U.S. Fish and Wildlife Service, and the National Park Service. The federal partners were initially more involved in planning efforts, but because the key staff involved left the area and were not replaced, the agencies have had less involvement. Members of the U.S. Fish and Wildlife Service Ecological Services group participate because of threatened and endangered species issues. Other federal employees from the Forest Service have attended as they are able to do so, but according to Forum and Forest Service members, other Forest Service activities compete for their attention. The Natural Resources Conservation Service also joined the Forum and attends meetings. However, while Forum members see a role for the agency because of the large amounts of conservation funding that it provides, the agency has been less involved in acquisition activities because that is not a main goal of the Natural Resources Conservation Service. The Forum exists through an MOU signed by all members. The MOU is nonbinding and states that each agency will retain its mission. It also states that the group will discuss and share information that is compatible with the land use and management objectives of each entity involved. The MOU allows the groups to discuss, share information, and agree on conservation or preservation opportunities, but in order to avoid triggering Federal Advisory Committee Act requirements, the group does not make official decisions or take official actions. 
For committees subject to the Federal Advisory Committee Act, the act generally requires that agencies announce committee meetings ahead of time and give notice to interested parties about such meetings. With some exceptions, the meetings are to be open to the public, and agencies are to prepare meeting minutes and make them available to interested parties. Nevertheless, the Forum can come to consensus on activities, which individual agencies can decide to undertake or not. According to members, because of the MOU, which allows each member to retain its overall mission and undertake the activities that best suit that mission, the group is highly flexible and open. In addition, participants said that the Forum has been managed in a transparent manner, in that the participants are clear in sharing their individual interests with other members. Participants said that this transparency has helped to foster respect among the members. For example, the Marine Corps members have been upfront about their purpose in working for land conservation, which involves relieving the pressure of development around their installations and potentially removing restrictions on training exercises that result from threatened and endangered species habitat. The Forum started with the efforts of two key people with The Nature Conservancy and the U.S. Marine Corps, modeled after a similar effort at the Army’s Fort Bragg in North Carolina. It has continued with the sustained interest of several more individuals. Members participate as they are able and as they can offer particular skills. Because these individuals and their agencies have sustained the Forum by such efforts as organizing meetings and completing work between meetings, the group is currently discussing whether it should hire staff to ensure that work gets accomplished. The participants are uncertain which of the agencies or groups could justify funding such a position and to whom that position would answer. 
The goal of the Forum is to provide for open discussion about the long-term conservation and enhancement of biological diversity and ecosystem sustainability in the Onslow Bight area. The members have different goals for managing their land and resources, but do share the goal of identifying opportunities to preserve, protect, and restore native biological elements in the coastal landscape, including marine and estuarine areas. To achieve their goal, the group has focused on acquiring lands that bridge the gaps between large publicly-owned lands, as well as some private conservation lands, and can meet their common needs. For example, one species on which the group focuses is the red-cockaded woodpecker; two of the federal partners have primary habitat for this species and support two of the main recovery populations of the bird as defined by the U.S. Fish and Wildlife Service in its recovery plan for the species. The group has identified, and has acquired, land between the public lands that can serve as a stepping-stone for members of the populations. The group recognizes that acquisition is only the first step of protecting land and resources. The next step is to restore habitat and manage those acquired lands and resources in the long term. Most of the land is being managed by the state’s Department of Environment and Natural Resources, primarily the Wildlife Resources Commission and the Division of Parks and Recreation. In developing its habitat protection plan, the Forum made use of available information about lands and resources in the area. In particular, the state’s Natural Heritage Program conducts assessments of habitat and identifies good habitat for purposes of preserving and protecting it, and the Forum used this data to develop the plan. It also used information on existing populations of species such as bears and red-cockaded woodpeckers and locations of undeveloped woodlands. 
The Forum also used the scientific expertise available from the federal and state agencies in its planning process. Biologists from the federal and state agencies helped to identify how species such as bears and woodpeckers move across the landscape and, accordingly, good places to protect. Members of the Forum have been successful in getting grants and using these funds to match agency funding to acquire lands. According to participants, one of the benefits of the Forum is that foundations and other funding groups use collaboration as a way to judge the potential success and effectiveness of the group. Sources of funding include the military, North Carolina trust funds established for purposes of land conservation, U.S. Fish and Wildlife Service grants under the North American Wetland Conservation Act, and funds raised by the land conservation group partners. The Forest Service also attempted to get funding from the Land and Water Conservation Fund, but did not succeed. The Forum does not have staff and its work is done by the participants, which means that sometimes it does not get done. The group meets every few months and keeps in touch by e-mail, but participants may not be able to prioritize or complete tasks for the group in between meetings. The Forum discussed hiring staff but has not made a decision to do so. According to members, having staff would allow the group to get more work done in between meetings and would ensure that the work would be done. The decision to have staff is difficult, however, because the action might force the group members to increase their commitment to the group through funding the position or even cause the Forum to take on a different organizational structure to enable the hiring of staff. Apart from the incentives provided by land acquisition, the group has not had the opportunity to provide or use any incentives to achieve its goals. 
However, in the future, the group may need to work more with private landowners and provide them incentives. Some members cited Natural Resources Conservation Service programs to protect and conserve agricultural lands and wetlands as potential sources of funding to work with landowners. For example, one program that could potentially be compatible with the Forum’s goals is the Wetlands Reserve Program, a program that seeks to restore marginal agricultural land to its previous wetland condition through cost-share assistance and easement purchases. According to the agency’s Forum representative, the agency’s staff currently works with landowners on more traditional agricultural issues such as preventing erosion and conserving soils. As membership in the Forum is voluntary, any activities the participants undertake are also voluntary and the Forum does not track its achievements. These activities, primarily land acquisition and some restoration work, help the Forum achieve its overall vision of protecting habitat. This conclusion is based on the assumption that protecting and restoring habitat will improve species conditions. As part of its planning effort, the Forum has developed a geographic information system (GIS) map of the public lands and locations of important species and habitat. Because the lands are acquired by each agency or participant and not by the Forum, this map is not updated to show acquisitions or to keep track of the lands protected. Rather, the information that the group develops about habitat and species can be used by each participant as it makes decisions about land acquisition. The Steens Mountain Cooperative Management and Protection Area (CMPA), located in southeastern Oregon, was created in 2000 when Congress passed the Steens Mountain Cooperative Management and Protection Act (Steens Act). The high desert mountain area occupies about 496,000 acres and supports diverse vegetation and wildlife, including habitat for the sage grouse. 
The same area has a long history of human use as a Native American site for spiritual experience and herbal gathering and for cattle grazing by local ranching families. The purpose of the CMPA is for BLM “to conserve, protect, and manage the long-term ecological integrity of Steens Mountain for future and present generations.” Of the 496,000 acres in the CMPA, about 428,000 acres are federal lands and the remaining lands are private and state lands. The Steens Act protected about 170,000 acres of the federally managed land as wilderness, of which about 95,000 acres are specifically designated as a cattle-free wilderness, the first of its kind. The federal land is managed for various uses by BLM, and BLM is authorized to work cooperatively with private land owners in managing the entire area. The Steens Act established a multistakeholder group called the Steens Mountain Advisory Council (Council). The Council is charged with providing BLM recommendations regarding “new and unique approaches to the management of lands within the boundaries of the CMPA and cooperative programs and incentives for seamless landscape management that meets human needs and maintains and improves the ecological and economic integrity of the CMPA.” The major land and resource management issues that the Council has considered are described below: The act required that BLM develop a comprehensive management plan for the Steens Mountain CMPA. In addition to the wilderness area created by the act, the CMPA contains several wilderness study areas that BLM must manage to retain wilderness conditions and wild and scenic river corridors that BLM must manage to maintain natural conditions. 
These designations may limit certain activities, such as motorized vehicles and equipment, in the areas, and as a result, Council members disagree over how to manage these areas—ranchers and others would like the wilderness study areas to be removed from consideration as wilderness, but an environmental group would like even more area to be considered as wilderness study area. In August 2005, BLM, with the Council’s input, issued a land management plan; however, it did not completely address management of roads and travel in the CMPA, deferring decisions on route designations until 2007. Travel management, including the designation of roads, tire tracks, and ways for traditional access, was an issue discussed in 2007. BLM has been charged with managing travel in the CMPA and can potentially restrict travel in some places, in particular the new wilderness area and other wilderness study areas. Although motorized access to wilderness areas and wilderness study areas is limited, participants of the Council have not been able to agree on the definitions for different types of roads that should remain open for access. Given the historic uses of Steens Mountain, the area has many roads, tracks, or ways that are used at various times and for multiple reasons—such as to access property each day, check on fencing periodically, and gather herbs during different seasons. However, some of these have been proposed for closure by environmental groups in order to maintain wilderness characteristics of the wilderness areas and study areas, as required by law. An initial travel management plan was made public in May 2007, but was rescinded due to a court order and was reissued in November 2007. Private land management within the CMPA is another management issue in which the Council has been involved. BLM is authorized to work with private landowners within the CMPA to cooperatively manage the private and public lands, such as to control vegetation. 
However, BLM has been able to agree in only a few cases on what management activities and payments will be involved. At least one owner is considering selling his land for development rather than working with BLM. The act authorizes $25 million from the Land and Water Conservation Fund for, among other purposes, the acquisition of private land and conservation easements within the CMPA. According to the agency and Council members, none of these funds have been provided, limiting the actions local BLM officials can take. Council members and others explained that by adding new layers of management restrictions, such as wilderness management restrictions, the act limited their ability to manage the area in a new and innovative way, thereby precluding some cooperation and creative management that could have taken place. One area in which the group has agreed is related to vegetation management. The Council has endorsed a juniper management program to thin stands of juniper that have expanded and overcome sagebrush habitats and grasslands in the area. BLM, with Council input, is studying different options for reducing the expansion of juniper woodlands, but to date only limited activity has been funded. According to the agency, the Council has had greater success at working together to solve ecological restoration issues. The collaborative practices used by the Council are described in the following sections. The Council consists of 12 representatives who, according to the Steens Act, must be appointed by the Secretary of the Interior from nominees submitted by various federal, state, and local officials. 
Members include, among others: a private landowner in the CMPA; two members who are grazing permittees on federal lands in the CMPA; a member interested in fish and recreational fishing in the CMPA; a member of the Burns Paiute Tribe; two persons who are recognized environmental representatives, one of whom represents the state as a whole and one of whom is from the local area; a person who participates in dispersed recreation such as hiking, camping, nature viewing or photography, bird watching, horseback riding, or trail walking; and a person who is a recreational permit holder or is a representative of a commercial recreation operation in the CMPA. Several members noted that the group stalemates as a result of its makeup and the difficulty of getting a quorum. According to several members and observers, the group is polarized on fundamental issues of use versus nonuse and some suggested the need for more neutral or balanced representation. Another community group, similar to the Blackfoot Challenge in Montana and the Malpai Borderlands Group in Arizona and New Mexico, has formed with the help of the staff at the local U.S. Fish and Wildlife Service refuge. This group, called the High Desert Partnership, has succeeded in working together on a few projects and has helped rebuild trust with the U.S. Fish and Wildlife Service among some community members. One difference is that the group is focused on the common interests of the members. The Council’s organization and processes have evolved, although members of the Council and others explained that it has been less successful at making recommendations because of organizational problems. Although the Council votes using a majority rule, it was not until March 2006 that members adopted operating protocols that describe, among other things, the Council’s objectives, roles and responsibilities, and communication protocols. 
The Council needs 9 votes in order to provide BLM with a formal recommendation; however, during the several years the group has been in existence, attendance has been poor and filling vacancies has been a problem, making it difficult for it to establish a quorum for votes to take place. According to several members of the Council, they believe they have failed to make recommendations on large issues but they have made decisions about less important issues. More recently, all vacancies have been filled and some participants were more optimistic about the Council’s ability to collaborate in the future. In 2007, the Council provided approximately 20 recommendations. BLM has brought in an outside facilitator to help the Council work through conflicts. The facilitator worked with the members during a 2-day retreat and made progress on a wilderness access issue. However, a later vote by the Council failed to approve the final plan. At times, the group has lacked a respectful atmosphere. One observer explained that at one of the Council’s meetings some members fostered disrespect toward BLM representatives and tried to direct BLM decisions rather than simply provide advice. In response to such issues, the March 2006 protocols include a section on rules for members and members of the public to follow in order to facilitate an open and collaborative discussion. These rules say that members will listen with respect, avoid grandstanding in order to allow everyone a fair chance to speak and to contribute, and jointly advocate for support for consensus recommendations. According to the agency and participants, the group needs a strong leader or facilitator with sufficient training to guide the group. The Council has a regular facilitator from the local area; however, at least one member believes the group requires stronger facilitation to move forward. While the U.S. 
Institute for Environmental Conflict Resolution provided the Council with third-party facilitation in 2003 that achieved consensus on some travel access issues, the facilitation was short term and the consensus did not last. While one objective of the Steens Act was to promote and foster cooperation, communication, and understanding and to reduce conflict between Steens Mountain users and interests, members and other parties said that conflicting interpretations of the act are a fundamental source of conflict among parties. According to several BLM officials, cooperation among stakeholders was much better before the act. The Steens Mountain area has been considered worthy of conservation since at least 1999, when it was considered for designation as a national monument but local stakeholders opposed special designation. Council members interpret the act differently and continue to debate the conservation versus use clauses in it—some refer to one of the statutory objectives of the CMPA that promotes grazing and a provision that allows reasonable access to lands within the CMPA, while others assert that a section requiring BLM to ensure the conservation, protection, and improved ecological integrity of the CMPA represents the act’s primary purpose. After the establishment of the CMPA and the wilderness area within it, a local environmental group identified several new possible wilderness areas—called wilderness study areas. The group has since sued BLM to designate these areas as wilderness study areas. In June 2007, the District Court held that BLM had properly declined to adopt most of the group’s proposed designations. The Steens Act authorizes BLM to establish a committee of scientists to provide advice on questions relating to the management of the CMPA, but BLM has not done so. 
A BLM official said that a scientific group has not been formed because the funding requested by the scientists who were invited to participate was not available. The local USDA Agricultural Research Service office has partnered with BLM and several private landowners over the last 30 years on scientific research including juniper management. On other issues, such as travel management, the county pulled together a common database for BLM and the Council to use in its discussions about access. The Steens Act established a Wildlands Juniper Management Area for experimentation, education, interpretation, and demonstration of management that is intended to restore the historic fire regime and native vegetation communities on Steens Mountain. The area is being used to demonstrate different ways BLM and partners are working to reduce the amount or size of juniper woodlands to effectively manage the expansion of juniper vegetation. Some additional experimentation may occur in the area and in other areas of the CMPA. The results of research can help the agency, with Council input, determine the best way to reduce vegetation using all available tools in many areas and, for certain areas including wilderness and wild rivers, through minimum use of mechanized transport or motorized equipment. BLM pays between $70,000 and $80,000 annually for the Council’s travel, staff support, and facilitation. Because the Council is an advisory committee, it is not organized to collect donations or spend funds. However, the Steens Act authorized $25 million to be appropriated to BLM to help purchase private properties within the boundaries of the CMPA, and additional funds would be available for incentive payments for cooperative agreements with private landowners. Several members of the Council and others told us that many conflicts might have been resolved had BLM received these funds. 
For example, funding could have been used to develop cooperative agreements or purchase private inholdings, thereby reducing controversial issues over access and permissible use. According to the Steens Act, BLM may provide conservation incentive payments to private landowners in the CMPA who enter into a contract with BLM to protect or enhance ecological resources on the private land covered by the contract, if those protections or enhancements benefit public lands. However, according to BLM officials and Council members, because funding has not been forthcoming, such agreements had not been finalized at the time of our review. In 2007, BLM initiated several cooperative management agreements concerning joint juniper management projects where each party pays its own costs and one agreement that provides public recreation on private lands where BLM funds were used (not land and water conservation funds). The Steens Act requires that a monitoring program be implemented for federal lands in the CMPA so that progress toward ecological integrity objectives can be determined. BLM developed a plan to monitor changes to current resource conditions within the CMPA, which would provide information on 31 resources and uses identified in the CMPA management plan. The Council has not been formally evaluated to determine its contributions or shortcomings. According to the agency and an observer, the group’s effectiveness should be evaluated, particularly because some federal dollars contribute to its functioning. The Uncompahgre Plateau Project is a collaborative group working to restore and sustain the condition of the 1.5-million-acre Uncompahgre Plateau, located in southwestern Colorado. The group began in the late 1990s in response to a decline in the mule deer population on the plateau that was observed by wildlife officials and hunters. 
After recognizing that the mule deer decline was an indicator of a larger ecosystem problem, the group broadened its focus to restoring and sustaining the ecological, social, cultural, and economic values of the plateau. The group, which includes federal agencies, a community group, a state wildlife agency, and utility companies, has developed a plan, the Uncompahgre Plateau Project Plan, to guide its efforts. Historically, the Uncompahgre Plateau, 75 percent of which is managed by the BLM, the Forest Service, and the Colorado Division of Wildlife (CDOW), has had multiple uses including logging, ranching, and recreation and provides habitat for many wildlife species, including game species. Commercial logging has occurred on Forest Service land for over a century, but in recent decades the Forest Service has decreased timber harvest on the plateau and current logging operations are limited to small sales of logs and firewood. Both the Forest Service and BLM manage grazing allotments on the plateau that are tied to privately owned ranches. Recreational use of the plateau has steadily increased and includes fishing, off-highway vehicle use, snowmobiling, mountain biking, camping, and cross-country skiing. In addition, CDOW manages two areas on the plateau for deer and elk hunting. Furthermore, the plateau contains lynx analysis units designated by CDOW and the U.S. Fish and Wildlife Service for lynx populations that were reintroduced into Colorado beginning in 1999. The Uncompahgre Plateau Project has concentrated on several natural resource problems on the plateau, including the following: According to the group’s participants, their focus broadened to larger ecosystem health issues when state biologists found that the observed decline in mule deer was related to poor habitat, specifically, vegetation that was too homogeneous in its age class distribution. 
According to natural resource managers, this condition resulted from certain activities on the plateau such as fire suppression and grazing practices. The Uncompahgre Plateau Project has initiated landscape-level planning and restoration efforts across jurisdictional boundaries to achieve more heterogeneous vegetation across the plateau and bring vegetation structure, age, condition, and spatial patterns in line with the habitat needs of wildlife species. The group’s initial planning and restoration efforts have focused on two watersheds covering over 220,000 acres of BLM, Forest Service, state, and private land and have included a variety of vegetation treatments such as roller chopping—using a large round drum to crush the shrubs—and prescribed burning. As of May 2007, the Uncompahgre Plateau Project had completed over 100 restoration projects, covering over 50,000 acres. The Uncompahgre Plateau has had problems with invasive species on both public and private lands. Invasive species alter the ecology in an area by crowding out native species, changing fire regimes, or altering hydrologic conditions. To facilitate cooperation among land managers and private landowners in efforts to manage invasive species, the Uncompahgre Plateau Project has initiated a program to map, monitor, control, and prevent invasive species within designated weed management areas on over 350,000 acres. The Uncompahgre Plateau is a key location for east to west transmission lines connecting Rocky Mountain power sources with western markets such as Los Angeles. As a result of the Energy Policy Act of 2005, transmission line operators must ensure that their power lines remain reliable. Forested rights-of-way pose threats to reliability because of the potential for tall trees to fall on the lines, arcing from the power line to trees, and forest fires. 
Traditionally, power line rights-of-way have been clear-cut to remove tall trees underneath and adjacent to the power lines, a practice that has historically generated conflict between utilities and land managers, according to a utility official. While clear-cutting removes the direct threat these trees pose to power lines, it can damage habitat and ecosystem health, and the risk from forest fires remains. Through the Uncompahgre Plateau Project, the utility companies and land management agencies have worked together to treat vegetation outside the utility rights-of-way in a manner that reduces the risk of forest fires and threats to the power lines while creating more natural openings that are friendly to wildlife. This is accomplished through means such as creating undulating boundaries between treated and untreated vegetation instead of straight lines. According to a group member, these treatment techniques are being used as a model for other utilities across the country.

When conducting restoration projects, land managers working on the Uncompahgre Plateau want to replant with vegetation that is native to the plateau because it is better adapted to local conditions and can improve the success of restoration projects. However, the commercial market does not supply enough native seed for large-scale restoration projects on the Uncompahgre Plateau. In response, the Uncompahgre Plateau Project initiated a native plant program to collect, study, and produce native seeds that can be used in restoration projects. According to a group member, the group has gathered native seeds from over 50 plants and developed methods for propagating them. The ultimate goal of this program is to have private local growers and larger commercial growers cultivate the seeds and sell them to the agencies and energy companies that are doing restoration projects.
The collaborative practices used by the Uncompahgre Plateau Project are described in the following sections.

The Uncompahgre Plateau Project partners include BLM; the Forest Service; CDOW; utility companies, including the Western Area Power Administration and Tri-State Generation and Transmission Association, Inc.; and an informal nonprofit community organization called the Public Lands Partnership. The Uncompahgre Plateau Project was initiated by the Public Lands Partnership and the major land managers on the Uncompahgre Plateau—BLM, the Forest Service, and CDOW. Later, the Western Area Power Administration and Tri-State Generation and Transmission Association, Inc., approached the Uncompahgre Plateau Project after seeing a presentation on the group and realizing that working collaboratively to treat vegetation beyond the utility rights-of-way and decrease the threat of forest fires could benefit both the utilities and the original partners. The two companies became formal partners in the Uncompahgre Plateau Project in 2004.

Many participants cited the involvement of the Public Lands Partnership as a significant and unique asset to the Uncompahgre Plateau Project. The Public Lands Partnership's membership is made up of county commissioners, city administrators, timber industry user groups, agricultural producers, environmentalists, recreationists, and local citizens. The organization started in 1992 because members of the community wanted to get involved in discussions about the public lands that surrounded them. The group brings together members of the public to discuss issues related to public lands, including oil and gas drilling, forest plans, campground closures, travel access, and roads.
BLM officials noted that having the Public Lands Partnership involved in the Uncompahgre Plateau Project has allowed them to complete their National Environmental Policy Act analyses more efficiently, because the partnership brought the public in to help set the vision for the proposed action and there were no subsequent appeals. The Uncompahgre Plateau Project operates by consensus and, through its efforts, seeks to develop strong communication, collaborative learning, and partnerships among the agencies and community. Individual projects to be undertaken by the group are prioritized by a Technical Committee according to criteria established in the Uncompahgre Plateau Project Plan that was developed by the group. One participant noted that having a collaborative group allows the partners to take one of their projects and see how it fits into the overall landscape.

The Uncompahgre Plateau Project was formalized with a Cooperative Agreement and memorandum of understanding (MOU), signed in 2001. When that MOU expired at the end of 2006, it was replaced by a second MOU. The structure of the group includes an Executive Committee, a Technical Committee, coordinators, and a fiscal agent. The Executive Committee is responsible for annually reviewing project progress and addressing future resource commitments. The Technical Committee forms the working body and backbone of the group and meets monthly to coordinate activities, meet with outside members, review project requests, and recommend budgeting and project approvals. Members from each of the partner organizations hold positions on both the Executive and Technical Committees. In addition to these committees, the Uncompahgre Plateau Project has contracted four part-time coordinators who are responsible for public relations and outreach, overall project coordination, financial record keeping and contracting, and grant writing.
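The criteria-based prioritization described above can be pictured as a simple weighted-scoring pass over project proposals. The criteria, weights, and proposals in the sketch below are invented for illustration; the report does not detail the actual criteria in the Uncompahgre Plateau Project Plan.

```python
# Hypothetical sketch of committee-style project prioritization.
# Criteria names, weights, and ratings are invented, not the actual
# Uncompahgre Plateau Project Plan criteria.

WEIGHTS = {
    "habitat_benefit": 0.5,       # relative importance of each criterion
    "fire_risk_reduction": 0.3,
    "cost_effectiveness": 0.2,
}

def score(proposal):
    """Weighted sum of 0-10 criterion ratings for one proposal."""
    return sum(WEIGHTS[c] * proposal[c] for c in WEIGHTS)

proposals = [
    {"name": "roller chop unit A",
     "habitat_benefit": 8, "fire_risk_reduction": 5, "cost_effectiveness": 6},
    {"name": "prescribed burn unit B",
     "habitat_benefit": 6, "fire_risk_reduction": 9, "cost_effectiveness": 7},
]

# Highest-scoring proposals are funded first.
ranked = sorted(proposals, key=score, reverse=True)
print([p["name"] for p in ranked])
```

A scheme like this makes the committee's trade-offs explicit: changing a weight changes the ranking in a way every partner can inspect.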
Some participants noted that the coordinators play a critical role in moving the group forward between meetings and making sure that projects get done. The Uncompahgre Plateau Project uses Uncompahgre/Com, Inc., a nonprofit organization, as its fiscal agent.

One participant noted that the group was able to generate credibility and trust among the members through its initial effort to develop a landscape plan for a watershed around a common vision. According to the participants, the group maintains transparency by holding open meetings, distributing minutes of meetings, and using its Web site. Several participants attributed the initial success of the group to the leadership of the individual who was originally responsible for coordinating it. He was described by several participants as a "charismatic leader" who had great vision for the group and was able to get projects going by working with the different agencies to generate support for the collaborative effort.

While each of the Uncompahgre Plateau Project participants has different interests, they have identified that their common interest is protecting and restoring the ecosystem on the Uncompahgre Plateau. The participants were able to agree on a common goal: to "improve the ecosystem health and natural functions of the landscape across the Uncompahgre Plateau through active restoration projects using the best science available and public input." This goal represents the area where the partners' individual interests overlap. The federal land management agencies—BLM and the Forest Service—are responsible for managing multiple uses on the plateau, including timber, grazing, and recreation, and have an interest in conducting these management activities in a manner that preserves ecosystem health. CDOW is responsible for managing game species, so it is interested in ensuring that habitat for mule deer and other game species is healthy and adequate to support them.
The Public Lands Partnership represents the community's values and is consequently interested in maintaining a healthy ecosystem for economic, environmental, cultural, social, recreation, and aesthetic reasons. The utility companies desire a healthy ecosystem, less prone to catastrophic wildfires, in order to protect the reliability of their power lines.

According to participants, the Uncompahgre Plateau Project is always seeking new science to inform its decisions and looks for opportunities to bring new ideas to the table. For example, the group works with researchers from universities such as Colorado State University, Brigham Young University, Snow College, and the University of Wyoming to gather new scientific data on the vegetation and ecology of the plateau and to study the effects of different vegetation treatments. Scientific publications related to research on the plateau are available on the Uncompahgre Plateau Project Web site. The Uncompahgre Plateau Project frequently sponsors field trips, which one participant noted are important for getting community members involved, helping them understand the resource problems that exist on the plateau, and making them comfortable with the projects being carried out by the group.

As part of the Uncompahgre Plateau Project planning efforts, BLM and the Forest Service have integrated their GIS map data for two priority watersheds and are working to integrate data for two others. Because the agencies' mapping data are not compatible, however, staff said that the landscape assessment process was difficult. The agencies had to develop ways to merge the data, which was time-consuming and expensive. For areas outside of these watersheds, data generated by agency research are held within the sponsoring agency, so other partners sometimes do not have access to this information. For example, BLM fuel treatments are mapped in its GIS database, which the Forest Service cannot access, and vice versa.
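Merging incompatible agency datasets is, in large part, a schema-harmonization problem. The sketch below, with invented field names and records (not the agencies' actual GIS schemas), shows one common approach: a crosswalk table that maps each agency's attribute names onto a shared schema before the records are combined.

```python
# Hypothetical schema crosswalk for merging two agencies' treatment records.
# All field names and records are invented for illustration.

CROSSWALK = {
    "blm":  {"TRT_TYPE": "treatment", "ACRES": "acres", "YR": "year"},
    "usfs": {"ActivityName": "treatment", "GIS_Acres": "acres", "FY": "year"},
}

def harmonize(record, agency):
    """Rename an agency-specific record's fields to the shared schema."""
    mapping = CROSSWALK[agency]
    out = {shared: record[local] for local, shared in mapping.items()}
    out["source"] = agency  # keep provenance so merged data can be traced back
    return out

blm_rec = {"TRT_TYPE": "roller chop", "ACRES": 640.0, "YR": 2005}
usfs_rec = {"ActivityName": "prescribed burn", "GIS_Acres": 320.0, "FY": 2006}

merged = [harmonize(blm_rec, "blm"), harmonize(usfs_rec, "usfs")]
total_acres = sum(r["acres"] for r in merged)
print(total_acres)  # 960.0
```

In practice the attribute renaming would sit alongside reprojecting the spatial geometry to a common coordinate system, which dedicated GIS tools handle, and both steps contribute to the time and expense the agencies described.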
The group noted that it would like to make all of the GIS maps available on its Web site, but according to group members, this effort is extremely resource intensive and therefore not feasible with the group's current resources. According to the participants, BLM and the Forest Service have hired an outside consultant to serve as a repository for the GIS data.

The group has been successful in leveraging funds and has received over $3 million from a variety of grants. Two grants that were instrumental in getting the Uncompahgre Plateau Project started were $500,000 from CDOW for mule deer conservation efforts and $620,000 given to the Public Lands Partnership by the Ford Foundation for community forestry. The finances of the group are handled by Uncompahgre/Com, Inc., which administers contracts, solicits bids, and pays invoices for the Uncompahgre Plateau Project and provides the partners a mechanism to pool their funds.

The Forest Service, BLM, CDOW, and the utilities support the Uncompahgre Plateau Project through various means. BLM has an assistance agreement with the group under which it can provide money for activities outlined in statements of work. BLM has also given the group program funding. BLM officials noted that, by having nonfederal partners, the group has a relatively easy time coming up with the nonfederal matching funds that are required with particular federal grants. In addition, BLM and the Forest Service have provided money for the native plant program. The Forest Service has used various agreements, including appropriated funds spent under Wyden Amendment authority—which allows federal money to be spent on nonfederal lands—to support the efforts of the Uncompahgre Plateau Project, such as completing invasive species work across jurisdictional boundaries.
The Western Area Power Administration; Tri-State Generation and Transmission Association, Inc.; and CDOW have provided money to support vegetation management projects. The group noted that while it has had success leveraging funds in the past, it has run into difficulty acquiring funding now that the project is more mature. In addition, most grant money is for projects on the ground, so the group faces a challenge in funding its overhead costs. The Uncompahgre Plateau Project applied for a National Forest Foundation mid-capacity grant, which provides operating funding for organizations that have been working together for some time, but did not receive it. The Uncompahgre Plateau Project assisted a local county in establishing a cost-share program to provide incentives for private landowners to treat invasive species. Furthermore, with assistance from Colorado State University, the group has established a program to assist local growers in cultivating native plants and to purchase seed from them.

According to group members, the Uncompahgre Plateau Project monitors its work on both a landscape level and a site level in the watersheds where its efforts have been focused and produces an annual report for the Executive Committee and agency offices that describes its accomplishments. Some participants noted that monitoring efforts could be improved if more resources were available. To monitor individual treatments at the site level, the group has set up a series of specific locations across a site that are monitored before, and 2 and 5 years after, a site is treated to assess whether the treatments are having the anticipated results. For landscape-level monitoring, the Uncompahgre Plateau Project uses GIS data to assess vegetation age classes across the watershed. The monitoring results are used in an adaptive management approach to revise management strategies and improve future treatments.
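One way to quantify the landscape-level goal of a less homogeneous age-class distribution is a diversity index computed over the acreage in each age class. The sketch below uses the Shannon index with invented acreage figures; the report does not specify which metric the group actually uses.

```python
import math

# Illustrative only: Shannon diversity of vegetation age classes as a
# landscape heterogeneity metric. Acreage figures are invented.

def shannon_diversity(acres_by_class):
    """Shannon diversity index; higher values mean a more even age-class mix."""
    total = sum(acres_by_class.values())
    return -sum((a / total) * math.log(a / total)
                for a in acres_by_class.values() if a > 0)

before = {"young": 1000, "mid": 1000, "old": 18000}  # dominated by one class
after = {"young": 6000, "mid": 6000, "old": 8000}    # more even mix

# Treatments should move the landscape toward a more heterogeneous mix.
assert shannon_diversity(after) > shannon_diversity(before)
```

Tracking such an index from the pre-treatment baseline through the 2- and 5-year revisits would give the adaptive management process a single number to compare against each watershed's target.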
One participant noted that the most difficult thing about conducting monitoring for collaborative groups, particularly landscape-level monitoring such as the Uncompahgre Plateau Project has done, is integrating the data from different agencies.

In addition to the contact named above, David P. Bixler, Assistant Director; Ulana Bihun; Nancy Crothers; Elizabeth Curda; Anne Hobson; Susan Iott; Rich Johnson; Ches Joy; and Lynn Musser made key contributions to this report. Marcus Corbin, John Mingus, Kim Raheb, Jena Sinkfield, and Cynthia Taylor also made important contributions to the report.
Cambridge, Mass.: The MIT Press, 2003. Weber, Edward P. “The Question of Accountability in Historical Perspective: From Jackson to Contemporary Grassroots Ecosystem Management.” Administration and Society, vol. 31, no. 4 (1999): 451–495. Weber, Edward P. “A Theory of Institutional Change for Tough Cases of Collaboration: ‘Ideas’ in the Blackfoot Watershed.” Washington State University, submitted for publication, December 20, 2006. Weber, Edward P. “Wicked Problems, Knowledge Challenges, and Collaborative Capacity Builders in Network Settings.” Public Administration Review (forthcoming). Weber, Edward P., Nicholas P. Lovrich, and Michael Gaffney. “Assessing Collaborative Capacity in a Multidimensional World.” Administration and Society, vol. 39, no. 2 (2007): 194–220. Weber, Edward P., Nicholas P. Lovrich, and Michael Gaffney. “Collaboration, Enforcement, and Endangered Species: A Framework for Assessing Collaborative Problem-Solving Capacity.” Society and Natural Resources, vol. 18 (2005): 677–698. Webler, Thomas, Seth Tuler, Ingrid Shockey, Paul Stern, and Robert Beattie. “Participation by Local Governmental Officials in Watershed Management Planning.” Society and Natural Resources, vol. 16 (2003): 105–121. Western Governors’ Association. “Collaborative Conservation Strategies: Legislative Case Studies from Across the West.” Western Governors’ Association White Paper, June 2006. Wilson, Randall K. “Collaboration in Context: Rural Change and Community Forestry in the Four Corners.” Society and Natural Resources, vol. 19 (2006): 53–70. Wondolleck, Julia M. and Steven L. Yaffee. Making Collaboration Work: Lessons from Innovation in Natural Resource Management. Washington, D.C.: Island Press, 2000. Yaffee, Steven L. “Regional Cooperation: A Strategy for Achieving Ecological Stewardship.” In Ecological Stewardship: A Common Reference for Ecosystem Management, vol. 3, ed. W. T. Sexton, A. J. Malk, R. C. Szaro, and N. C. Johnson, 31–153. 
United Kingdom: Elsevier Science, Ltd., 1999. Yaffee, Steven L. “Three Faces of Ecosystem Management.” Conservation Biology, vol. 13, no. 4 (1999): 713–725.
Conflict over the use of our nation's natural resources, along with increased ecological problems, has led land managers to seek cooperative means to resolve natural resource conflicts and problems. Collaborative resource management is one such approach that communities began using in the 1980s and 1990s. A 2004 Executive Order on Cooperative Conservation encourages such efforts. GAO was asked to determine (1) experts' views on collaborative resource management, (2) how selected collaborative efforts have addressed conflicts and improved resources, and (3) challenges that agencies face as they participate in such efforts and how the Cooperative Conservation initiative has addressed them. GAO reviewed experts' journal articles, studied seven collaborative groups, and interviewed group members and federal and other public officials. Experts generally view collaborative resource management that involves public and private stakeholders in natural resource decisions as an effective approach for managing natural resources. Several benefits can result from using collaborative resource management, including reduced conflict and litigation and improved natural resource conditions, according to the experts. A number of collaborative practices, such as seeking inclusive representation, establishing leadership, and identifying a common goal among the participants, have been central to successful collaborative management efforts. The success of these groups is often judged by whether they increase participation and cooperation or improve natural resource conditions. Many experts also note that there are limitations to the approach, such as the time and resources it takes to bring people together to work on a problem and reach a decision. Most of the seven collaborative resource management efforts GAO studied in several states across the country were successful in achieving participation and cooperation among their members and improving natural resource conditions. 
In six of the cases, those involved were able to reduce or avoid the kinds of conflicts that can arise when dealing with contentious natural resource problems. All the efforts, particularly those that effectively reduced or avoided conflict, used at least several of the collaborative practices described by the experts. For example, one effort obtained broad community representation and successfully identified a common goal of using fire, after decades of suppression, to restore the health of a large grasslands area surrounding the community. Also, members of almost all the efforts studied said they have been able to achieve many of their goals for sustaining or improving the condition of specific natural resources. However, for most of these efforts no data were collected on a broad scale to show the effect of their work on overall resource conditions across a large area or landscape. Federal land and resource management agencies--the Department of the Interior's Bureau of Land Management, U.S. Fish and Wildlife Service, and National Park Service, and the Department of Agriculture's Forest Service--face key challenges to participating in collaborative resource management efforts, according to the experts, federal officials, and participants in the efforts GAO studied. For example, the agencies face challenges in determining whether to participate in a collaborative effort, measuring participation and monitoring results, and sharing agency and group experiences. As a part of the interagency Cooperative Conservation initiative led by the Council on Environmental Quality (CEQ), the federal government has made progress in addressing these challenges. Yet, additional opportunities exist to develop and disseminate tools, examples, and guidance that further address the challenges, as well as to better structure and direct the initiative to achieve the vision of Cooperative Conservation, which involves a number of actions by multiple agencies over the long term. 
Failure to pursue such opportunities and to create a long-term plan to achieve the vision may limit the effectiveness of the federal government's initiative and collaborative efforts.
All international mail and packages entering the United States through the U.S. Postal Service and private carriers are subject to potential CBP inspection at the 14 USPS international mail facilities and 29 express consignment carrier facilities operated by private carriers located around the country. CBP inspectors can target certain packages for inspection or randomly select packages for inspection. CBP inspects for, among other things, illegally imported controlled substances, contraband, and items—like personal shipments of noncontrolled prescription drugs—that may be inadmissible. CBP inspections can include examining the outer envelope of the package, using X-ray detectors, or opening the package to physically inspect the contents. Each year the international mail and carrier facilities process hundreds of millions of pieces of mail and packages. Among these items are prescription drugs ordered by consumers over the Internet, the importation of which is prohibited under current law, with few exceptions. Two acts—the Federal Food, Drug, and Cosmetic Act and the Controlled Substances Import and Export Act—specifically regulate the importation of prescription drugs into the United States. Under the Federal Food, Drug, and Cosmetic Act, as amended, FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported drugs and may refuse to admit into the United States any drug that appears to be adulterated, misbranded, or unapproved for the U.S. market as defined in the act. Under the act and implementing regulations, this includes foreign versions of FDA-approved drugs if, for example, neither the foreign manufacturing facility nor the manufacturing methods and controls were reviewed by FDA for compliance with U.S. statutory and regulatory standards. The act also prohibits reimportation of a prescription drug manufactured in the United States by anyone other than the original manufacturer of that drug. 
According to FDA, prescription drugs imported by individual consumers typically fall into one of these prohibited categories. However, FDA has established a policy that allows local FDA officials to use their discretion to not interdict personal prescription drug imports that do not contain controlled substances under specified circumstances, such as importing a small quantity for treatment of a serious condition, generally not more than a 90-day supply of a drug not available domestically. The importation of prohibited foreign versions of prescription drugs like Viagra (an erectile dysfunction drug) or Propecia (a hair loss drug), for example, would not qualify under the personal importation policy because approved versions are readily available in the United States. In addition, the Controlled Substances Import and Export Act, among other things, generally prohibits personal importation of those prescription drugs that are controlled substances, such as Valium. (See app. II for a general description of controlled substances.) Under the act, shipment of controlled substances to a purchaser in the United States from another country is only permitted if the purchaser is registered with DEA as an importer and is in compliance with the Controlled Substances Import and Export Act and DEA requirements. As outlined in the act, it would be difficult, if not impossible, for an individual consumer seeking to import a controlled substance for personal use to meet the standards for registration and related requirements. Figure 1 illustrates the two acts that specifically govern the importation of prescription drugs into the United States. It also presents the roles of FDA, DEA, and CBP in implementing those acts. CBP is to seize illegally imported controlled substances it detects on behalf of DEA. CBP may take steps to destroy the seized and forfeited substance or turn the seized substance over to other federal law enforcement agencies for further investigation. 
CBP is to turn over packages suspected of containing prescription drugs that are not controlled substances to FDA. FDA investigators may inspect such packages and hold those that appear to be adulterated, misbranded, or unapproved, but must notify the addressee and allow that individual the opportunity to present evidence as to why the drug should be admitted into the United States. If the addressee does not provide evidence that overcomes the appearance of inadmissibility, then the item is refused admission and returned to the sender. Investigations that may arise from CBP and FDA inspections may fall within the jurisdiction of other federal agencies. DEA, ICE, and FDA investigators have related law enforcement responsibilities and may engage in investigations stemming from the discovery of illegally imported prescription drugs. Although USPS’s Inspection Service does not have the authority, without a federal search warrant, to open packages suspected of containing illegal drugs, it may collaborate with other federal agencies in certain investigations. Also, ONDCP is responsible for formulating the nation’s drug control strategy and has general authority for addressing policy issues concerning the illegal distribution of controlled substances. ONDCP’s authority does not, however, include prescription drugs that are not controlled substances. CBP and FDA do not systematically collect data on the volume of prescription drugs and controlled substances they encounter at the mail and carrier facilities. On the basis of their own observations and limited information they obtained at selected mail and carrier facilities, CBP and FDA officials believe the volume of prescription drug importation into the United States is substantial and increasing. However, neither agency has developed reliable estimates of the number of prescription drugs imported into the country. Further, the available information shows that some imported prescription drugs can pose safety concerns. 
We reported in June 2004 that prescription drugs purchased from some foreign-based Internet pharmacies posed safety risks for consumers. FDA officials said that they cannot assure the public of the safety and quality of drugs purchased from foreign sources that are largely outside the U.S. regulatory system. Of particular concern is the access to highly addictive controlled substances, which can be imported by consumers of any age, sometimes without a prescription or consultation with a physician. CBP and FDA do not systematically collect data on the volume of prescription drugs and controlled substances they encounter at the mail and carrier facilities. Without an accurate estimate of the volume of importation of prescription drugs, federal agencies cannot determine the full scope of the importation issue. Yet FDA officials have often testified regarding the large and steadily increasing volume of packages containing prohibited prescription drugs entering the United States through the international mail and carrier facilities. CBP and FDA officials have said that in recent years they have observed increasingly more packages containing prescription drugs being imported through the mail facilities. However, neither agency has complete data to estimate the volume of importation. For example, a CBP official recently testified that the agency did not have data on the total number of packages containing imported controlled substances. A CBP official at a mail facility told us that determining the total volume of prescription drug importation would require that CBP personnel inspect each mail item—which they currently do not do, in part because mail from certain countries bypasses inspection—and tally those that were suspected of containing prescription drugs. This official said that he did not have the resources at his facility for such an undertaking. 
In addition, neither CBP nor FDA tracked the number of packages suspected of containing prescription drugs that were held for FDA review. FDA officials told us that CBP and FDA currently have no mechanism for keeping an accurate count of the volume of illegally imported drugs, because of the large volume of packages arriving daily through the international mail and carriers. Furthermore, FDA officials told us that FDA did not routinely track items containing potentially prohibited prescription drugs that it released and returned for delivery to the recipient. However, they said that FDA had begun gathering information from the field on the imported packages it handles, but as of July 2005, this effort was still being refined. CBP and FDA, in coordination with other federal agencies, have conducted special operations to gain insight regarding the volume of imported prescription drugs entering through selected mail facilities. Generally, these were onetime, targeted efforts to identify and tally the packages containing prescription drugs imported through a particular facility during a certain time period and to generate information for possible investigation. The limited data collected have shown wide variations in volume. For example, CBP officials at one mail facility estimated that approximately 3,300 packages containing prescription drugs entered the facility in 1 week. CBP officials at another mail facility estimated that 4,300 packages containing prescription drugs entered the facility in 1 day. While these data provide some insight regarding the number of packages containing prescription drugs at a selected mail facility during a certain time period, the data are not representative of other time periods or projectable to other facilities. Debate continues over the estimated volume of prescription drugs entering the United States through mail and express carrier facilities. 
During congressional hearings over the past 4 years, FDA officials, among others, have presented estimates of the volume of imported prescription drugs ranging from 2 million to 20 million packages in a given year. Each estimate has its limitations; for example, some estimates were extrapolations from data gathered at a single mail facility. More recently, a December 2004 HHS report stated that approximately 10 million packages containing prescription drugs enter the United States—nearly 5 million packages from Canada and another 5 million mail packages from other countries. However, these estimates also have limitations, being partially based on extrapolations from limited FDA observations at international mail branch facilities. Specifically, FDA officials told us that FDA developed its estimate for Canadian drugs entering the country using (1) IMS Health estimates that 12 million prescriptions sold from Canadian pharmacies were imported into the United States in 2003 and (2) FDA’s experience during special operations at various locations from which it concluded that there appeared to be about 2.5 prescriptions in each package. According to FDA officials, the estimate for other countries was an extrapolation using the estimated 5 million packages from Canada in conjunction with FDA’s observations, likewise made during special operations, that 50 percent of the mail packages enter from countries other than Canada. FDA officials have said that they cannot provide assurance to the public regarding the safety and quality of drugs purchased from foreign sources, which are largely outside of their regulatory system. Additionally, FDA officials said that consumers who purchase prescription drugs from foreign-based Internet pharmacies are at risk of not fully knowing the safety or quality of what they are importing. 
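The extrapolation FDA described can be reproduced with simple arithmetic. The sketch below uses only the figures cited above (the IMS Health prescription count and FDA's observations from special operations); it is an illustration of the reported calculation, not FDA's actual methodology.

```python
# Illustrative reconstruction of the volume estimate in the December 2004
# HHS report. Figures come from the report as cited in the text; this is
# a sketch of the arithmetic, not an official FDA model.

canadian_prescriptions = 12_000_000  # IMS Health estimate for 2003
prescriptions_per_package = 2.5      # FDA observation during special operations

# Step 1: convert prescriptions sold by Canadian pharmacies into packages.
canadian_packages = canadian_prescriptions / prescriptions_per_package

# Step 2: FDA observed that about half of mail packages came from countries
# other than Canada, so non-Canadian volume is set equal to Canadian volume.
other_country_packages = canadian_packages

total_packages = canadian_packages + other_country_packages
print(f"Packages from Canada:    {canadian_packages:,.0f}")
print(f"Packages from elsewhere: {other_country_packages:,.0f}")
print(f"Estimated total:         {total_packages:,.0f}")
```

The result, roughly 4.8 million packages from each source and 9.6 million overall, is consistent with the "nearly 5 million" and "approximately 10 million" figures reported.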
They further said that while some consumers may purchase genuine products, others may unknowingly purchase counterfeit products, expired drugs, or drugs that were improperly manufactured. CBP and FDA have done limited analysis of the imported prescription drugs identified during special operations, and the results have raised questions about the safety of some of the drugs. For example, during a special operation in 2003 to identify and assess counterfeit and potentially unsafe imported drugs at four mail facilities, CBP and FDA inspected 1,153 packages that contained prescription drugs. According to a CBP report, 1,019, or 88 percent, of the imported drug products were in violation of the Federal Food, Drug, and Cosmetic Act or the Controlled Substances Import and Export Act. Some of the drugs were foreign versions of U.S.-approved drugs that are unapproved for import, including Lipitor (a cholesterol-lowering drug), Viagra, and Propecia. Other drugs never had FDA approval. For example, Taro-warfarin, an apparently unapproved version of warfarin, which is used to prevent blood clotting, was imported from Canada. The drug raised safety concerns because its potency may vary depending on how it is manufactured, and it requires careful patient monitoring because it can cause life-threatening bleeding if not properly administered. A CBP laboratory analyzed 180 of the 1,153 drugs inspected, which showed that many of the imported drugs could pose safety risks. The drugs tested included some that were withdrawn from the U.S. market for safety reasons, animal drugs not approved for human use, and drugs that carry risks because they require careful dosing or initial screening. In addition, other drugs tested were found to contain controlled substances prohibited for import, and some of the drugs contained no active ingredients. Figure 2 illustrates the results of the CBP laboratory analysis. 
In a past review we found that prescription drugs ordered from some foreign-based Internet pharmacies posed safety risks for consumers. Specifically, in a June 2004 report, we identified several problems associated with the handling, FDA approval status, and authenticity of 21 prescription drug samples we purchased from Internet pharmacies located in several foreign countries—Argentina, Costa Rica, Fiji, Mexico, India, Pakistan, the Philippines, Spain, Thailand, and Turkey. Our work showed that most of the drugs, all of which we received via consignment carrier shipment or the U.S. mail, were unapproved for the U.S. market because, for example, the labeling or the foreign manufacturing facility, methods, and controls were not reviewed by FDA. Of the 21 samples:

- None included dispensing pharmacy labels that provided instructions for use, and only about one-third included warning information.
- Thirteen displayed problems associated with the handling of the drug. For example, three samples that should have been shipped in a temperature-controlled environment arrived in envelopes without insulation, and five samples contained tablets enclosed in punctured blister packs, potentially exposing them to damaging light or moisture.
- Two were found to be counterfeit versions of the products we ordered.
- Two had a significantly different chemical composition than that of the product we had ordered.

We found fewer problems among 47 samples purchased from U.S. and Canadian Internet pharmacies. Although most of the drugs obtained from Canada were of the same chemical composition as that of their U.S. counterparts, most were unapproved for the U.S. market. We said that it was notable that we identified numerous problems among the samples we received despite the relatively small number of drugs we purchased, consistent with problems that had been recently identified by state and federal regulatory agencies. 
Similarly, during our current review, we observed that some prescription drugs imported through the mail and carrier facilities were not shipped in protective packages, including some wrapped in foil or in plastic bags. In addition to being shipped without containers, the drugs also lacked product identifications, directions for use, or warning labels. For some drugs, the origin and contents could not be immediately determined by CBP or FDA inspection. Figure 3 illustrates an example of drugs that were sent without labeling. Federal agencies and professional medical and pharmacy associations have found that consumers, of any age, can obtain highly addictive controlled substances from Internet pharmacies, sometimes without a prescription or consultation with a physician. For example, a DEA official recently testified that Internet pharmacies that offer to sell controlled substances directly to consumers without a prescription and without requiring consultation with a physician can increase the possibility of addiction, access to counterfeit products, and adverse reactions to medications. According to the Office of National Drug Control Policy, Internet pharmacies that offer controlled substances bypass traditional regulations and established safeguards and expose consumers to potentially counterfeit, adulterated, and contaminated products. Both DEA and ONDCP have found that the easy availability of controlled substances directly to consumers over the Internet has significant implications for public health, given the opportunities for misuse and abuse of these addictive drugs. The American Medical Association recently testified that Internet pharmacies that offer controlled substances without requiring a prescription or consultation with a physician contribute to the growing availability and increased use of addictive drugs for nonmedical purposes. 
To demonstrate the ease with which controlled substances can be obtained via the Internet, the National Association of Boards of Pharmacy received prescription drugs from four different Internet pharmacies. From one of the Internet pharmacies, the association reported it received a shipment of Valium—a schedule IV controlled substance used to treat muscle spasm or anxiety—despite providing no prescription and supplying height and weight information for a small dog. The association also reported that 2 days after it received its shipment of 30 tablets of Xanax—a schedule IV controlled substance used to treat anxiety—the Internet pharmacy sent daily refill reminders via electronic mail. In our July 2004 testimony, we reported that while some targeted packages were inspected and interdicted, many others either were not inspected and were released to the addressees or were released after being held for inspection. At the time, FDA officials said that because they were unable to process the volume of targeted packages, they released tens of thousands of packages containing drug products that may violate current prohibitions and could have posed a health risk to consumers. In August 2004, FDA issued standard operating procedures to prioritize package selection, package examination, and admissibility determinations. While the new procedures may encourage uniform practices at the mail facilities, packages that contain potentially prohibited prescription drugs continue to be released to the addressee. Recently, CBP also issued a new policy for processing packages with controlled substances without using time-consuming seizure and forfeiture procedures. While the policy may reduce processing time and encourage the interdiction of more controlled substances, CBP officials do not know whether the new policy has had an impact on the volume of prohibited prescription drug importation. 
In our July 2004 testimony, we reported that CBP and FDA officials at selected mail and carrier facilities used different practices and procedures to inspect and interdict packages that contain prescription drugs. While each of the facilities we visited targeted packages for inspection, the criteria for targeting could vary and were generally based on several factors, such as the inspector's intuition and experience, whether the packages originated from suspect countries or companies, or whether they were shipments to individuals. At that time, CBP officials told us that the factors could also include intelligence gained from prior seizures, headquarters, or other field locations. Specifically, officials at one facility we visited targeted packages on the basis of the country of origin. At this facility, FDA provided CBP with a list of seven countries to target, the composition of which changed periodically, and asked that CBP hold the packages they suspected of containing prescription drugs from those countries. Typically, CBP officials at this facility released to the addressee packages containing prescription drugs that were not from one of the targeted countries. Officials at another facility targeted packages based on whether the packages were suspected of containing a certain quantity of prescription drugs. At this facility, CBP officials held packages containing prescription drugs that appeared to exceed a 90-day supply—a violation of one of the criteria in FDA's personal importation policy. If a package contained prescription drugs, including in some cases controlled substances, that appeared to be 90 pills or less, it was typically released. FDA officials at this facility told us that every week CBP turned over to FDA hundreds of packages that contained quantities of prescription drugs that appeared to exceed the 90-day supply. 
However, the FDA officials said that they were able to process a total of approximately 20 packages per day and, as a result, returned many of the packages for release to the addressee. FDA officials explained that 20 packages a day is an approximation because some packages can take longer than others to inspect, particularly if the packages contain many different types of drugs that need to be examined. According to FDA officials and data, in fiscal year 2004, FDA field personnel physically inspected approximately 20,800 packages containing prescription drugs entering the United States through the international mail facilities. Of the packages inspected, FDA's data showed that 98 percent were refused entry and marked return to sender, and the remainder, about 450, were released to the addressee. The FDA data indicate the number of packages physically inspected by FDA personnel and the results of that process; they do not specify the number of individual prescription drugs or smaller packages of drugs within a larger package. Most important, these data do not indicate the universe of packages of prescription drugs coming through the mail facilities. Figure 4 shows bins containing packages of suspected prescription drugs being held for FDA review and possible inspection at one mail facility. In August 2004, FDA issued standard operating procedures that, according to FDA officials, have been adopted nationwide. According to FDA, the purpose of the new procedures was to “provide a standard operating environment for the prioritized selection, examination and admissibility determination of FDA-regulated pharmaceuticals imported into the United States via international mail.” Under the procedures, CBP personnel are to forward to FDA personnel any mail items, from FDA's national list of targeted countries and based on local criteria, that appear to contain prescription drugs. 
The procedures outline how FDA personnel are to prioritize packages for inspection, inspect the packages, and make admissibility determinations. Deviations from the procedures must be requested by facility personnel and approved by FDA management. While the new procedures should encourage processing uniformity across facilities, many packages that contain prescription drugs are still released. Specifically, according to the procedures, all packages forwarded by CBP but not processed by FDA inspectors at the end of each workday are to be returned for delivery by USPS to the recipient. However, packages considered to represent a significant and immediate health hazard may be held over to the next day for processing. CBP and FDA officials at two facilities told us that the new procedures resulted in an increase in the number of packages CBP personnel refer to FDA. Officials at one facility estimated that CBP referrals have increased from approximately 500 to an average of 2,000 packages per day. The FDA officials noted that the procedures did not reduce the heavy volume of prescription drug importation or improve FDA’s ability to deal with that volume, nor were they designed to do so. While the packages that are not targeted are released without inspection, so are many packages that are targeted and referred to FDA personnel. At one facility, FDA officials estimated that each week they return without inspection 9,000 to 10,000 of the packages referred to them by CBP. They said these packages were given to USPS officials for delivery to the addressee. If this facility were to maintain that level of release, about half a million packages per year would be delivered to addressees. In our July 2004 testimony, we reported that CBP officials were to seize the illegally imported controlled substances they detected. However, at that time, some illegally imported controlled substances were not seized by CBP. 
For example, CBP officials at one mail facility told us that they experienced an increased volume of controlled substances and, in several months, had accumulated a backlog of over 40,700 packages containing schedule IV substances. To keep the drugs from entering U.S. commerce and to clear the backlog, a CBP official at the facility said that CBP’s headquarters office granted them permission to send most of the drugs back to the sender. CBP officials at another facility told us that certain controlled substances were a priority and seized when detected; priority substances included anabolic steroids (a category of schedule III drugs that promote muscle growth and potentially boost athletic performance), and gamma hydroxybutyrate (a schedule I drug that acts as a central nervous system depressant). At this facility, other controlled substances encountered that were not a priority and that were shipped in small amounts, less than a 90-day supply, could be released to the addressee. CBP officials at another facility we visited turned over packages they suspected of containing controlled substances in small amounts to FDA for processing. Neither returning an illegally imported controlled substance to the sender nor releasing it to the addressee is in accordance with federal law. CBP field personnel said they did not have the resources to seize all the controlled substances they detected. Officials said that the seizure process can be time-consuming, taking approximately 1 hour for each package containing controlled substances. According to CBP officials, when an item is seized, the inspector records the contents of each package— including the type of drugs and the number of pills or vials in each package. If the substance is a schedule I or II controlled substance, it is to be summarily forfeited without notice, after seizure. 
However, if it is a schedule III through V controlled substance, CBP officials are to notify the addressee that the package was seized and give the addressee an opportunity to contest the forfeiture by providing evidence of the package’s admissibility and trying to claim the package at a forfeiture hearing. To address the seizure backlog and give CBP staff more flexibility in handling controlled substances, in September 2004, CBP implemented a national policy for processing controlled substances, schedule III through V, imported through the mail and carrier facilities. According to the policy, packages containing controlled substances should no longer be transferred to FDA for disposition, released to the addressee, or returned to the sender. CBP field personnel are to hold the packages containing controlled substances in schedules III through V as unclaimed or abandoned property as an alternative to a seizure. According to a CBP headquarters official, processing a controlled substance as abandoned property is a less arduous process because it requires less information be entered into a database than if the same property were to be seized. Once CBP deems the controlled substance to be unclaimed property, the addressee is notified that he or she has the option to voluntarily abandon the package or have the package seized. If the addressee voluntarily abandons the package or does not respond to the notification letter within 30 days, the package will be eligible for immediate destruction. If the addressee chooses to have the package seized, there would be an opportunity to contest the forfeiture and claim the package, as described above. CBP also instituted an on-site data collection system at international mail and express carrier facilities to record schedule III through V controlled substances interdicted using this new process. 
From September 2004 to the end of June 2005, CBP reported that a total of approximately 61,700 packages of these substances were interdicted, about 61,500 at international mail facilities and 200 at express carrier facilities. Generally, CBP officials we interviewed told us that the recent policy improved their ability to quickly process the volume of schedule III through V controlled substances they detected. A CBP official at one facility said that the abandonment process is faster than the seizure process, as it requires much less paperwork. A CBP headquarters official told us that the abandonment process takes an inspector at a mail facility about 1 minute per package. He added that the new policy was intended to eliminate the backlog of schedule III through V controlled substances at the facilities. Figure 5 shows schedule III through V controlled substances that were abandoned during a 1-month period at one mail facility and awaiting destruction. While the recent policy may have expedited processing, CBP officials in the field and in headquarters said that they do not know whether the new policy has had any impact on the volume of controlled substances illegally entering the country that reach the intended recipient. Generally, CBP officials do not know how many packages containing controlled substances go undetected and are released. For example, CBP officials at one facility told us that they used historical data to determine the countries that are likely sources for controlled substances and target the mail from those countries. They do not know the volume of controlled substances contained in the mail from the nontargeted countries. A CBP official at another facility said that he believed the volume of controlled substances imported through the facility had begun to decrease but had no data to support this claim. 
One CBP official at a carrier facility told us that because the express carrier environment is constantly changing with new routes, service areas, and increasing freight volume and because smuggling trends shift in response to past enforcement efforts, he could not ascertain the quantities of packages containing controlled substances that are undetected by CBP. Packages containing prescription drugs can also bypass FDA inspection at carrier facilities because of inaccurate information about the contents of the package. Unlike packages at mail facilities, packages arriving at the carrier facilities we visited are preceded by manifests, which provide information from the shipper, including a description of the packages’ contents. While the shipments are en route, CBP and FDA officials are to review this information electronically and select packages they would like to inspect when the shipment arrives. FDA officials at two carrier facilities we visited told us they review the information for packages described as prescription drugs or with a related term, such as pharmaceuticals or medicine. CBP and FDA officials told us that there are no assurances that the shipper’s description of the contents is accurate. The FDA officials at the carrier facilities we visited told us that if a package contains a prescription drug but is inaccurately described, it would not likely be inspected by FDA personnel. According to FDA officials, FDA field personnel are not continually on-site at the two carrier facilities we visited. At the FDA field office that has responsibility for inspecting packages at one carrier facility, we observed FDA field personnel reviewing electronic information regarding packages that were en route to the carrier facility. An FDA official there said that the field office has electronic information regarding an average of 400 packages per day available for review. 
If the shipper does not provide enough information about its package, FDA field personnel can request that the carrier detain the package until more information is provided electronically or until the FDA personnel can visit the facility to conduct a physical inspection of the package. The number of physical inspections at the facilities we visited varied depending on the number of packages electronically reviewed. FDA field personnel, responsible for inspection at the other carrier facility, reported that in September 2004 they electronically requested that an average of 20 packages per day be held at the facility for a physical inspection. However, on occasion when the FDA personnel went to the facility to conduct the inspection, the packages were unavailable because they could not be found, had been delivered to the recipient by the carrier, or had been returned to the shipper. According to FDA headquarters officials, since our visit, FDA field personnel may now be visiting the carrier facility on a more routine basis. In contrast, CBP inspectors are located on-site at the carrier facilities we visited. As a result, CBP personnel are able to inspect packages upon arrival of the shipment. In addition, according to CBP officials at the facility, CBP’s on-site presence allows the inspectors to conduct random inspections of packages, on a daily basis, as they are processed at the facility. Rather than relying solely on the information provided by the shipper, CBP personnel said they conduct these random inspections as another means to identify items that may be unapproved for import, because the shipper’s information can be inaccurate. During our visit we observed CBP personnel randomly inspect several hundred packages. 
During these random inspections, CBP inspectors told us that they often come across packages containing noncontrolled prescription drugs, which they will set aside for FDA inspectors. For example, during a random inspection, CBP officials found and held for FDA 13 packages containing a human growth hormone—prohibited from import—that were inaccurately described as glassware. In contrast, according to FDA field personnel with inspection responsibility at the two carrier facilities we visited, few random inspections of packages were performed, and when they occurred, they were typically part of a special operation. For example, an FDA field official told us that FDA personnel planned to perform one random inspection effort per year. CBP officials told us that they would like to have FDA personnel on-site to improve coordination efforts. One CBP Port Director said that he would like to have FDA personnel on-site to share data, perform analysis to identify trends from CBP’s referrals, and be available to immediately review prescription drugs. A CBP headquarters official also said that it would be helpful if FDA personnel were on-site to enable CBP officials to confer with them to identify controlled substances that are not clearly labeled. FDA officials told us that because FDA personnel review information regarding the packages electronically, there was no advantage to being physically on-site. Further, they said the responsible district can supply personnel to physically work at a given carrier facility for field examinations on an as-needed basis. FDA officials also noted that FDA is not reimbursed by the carriers to maintain staff on-site. By contrast, private express carriers reimburse the federal government for the personnel and equipment costs of the CBP staff located on-site. FDA officials said that there is no provision under current law that would enable carriers to reimburse FDA so that it could maintain an on-site presence. 
We identified three factors beyond inspection and interdiction that have complicated federal efforts to enforce the prohibitions on prescription drugs imported for personal use: (1) the volume of importation has strained limited federal resources; (2) Internet pharmacies, particularly foreign-based sites, can operate outside of the U.S. regulatory system for noncontrolled and controlled prescription drugs and can evade federal law enforcement actions; and (3) current law requires that FDA notify addressees that their packages have been detained because they appear unapproved for import and give them the opportunity to provide admissibility evidence regarding their imported items. The current volume of prescription drug imports, coupled with competing agency priorities, has strained federal inspection and interdiction resources allocated to the mail facilities. CBP and FDA officials told us that the increased incidence of American consumers ordering drugs over the Internet in recent years has significantly contributed to the increase in imports through the international mail. CBP officials said that they are able to inspect only a fraction of the large number of mail and packages shipped internationally. In 2004, FDA testified that each day thousands of individual packages containing prescription drugs are imported illegally into the United States. FDA officials have said that the large volume of imports has overwhelmed the resources they have allocated to the mail facilities. Officials add that they have little assurance that the available field personnel are able to inspect all the packages containing prescription drugs illegally imported for personal use through the mail. Agencies have multiple priorities, which can affect the resources they are able to allocate to the mail and carrier facilities. 
For example, FDA has multiple areas of responsibility, which include, among other things, regulating new drug product approvals, the labeling and manufacturing standards for existing drug products, and the safety of a majority of food commodities and cosmetics, which, according to FDA officials, all go to FDA’s mission of protecting the public health while facilitating the flow of legitimate trade. CBP’s primary mission is preventing terrorists and terrorist weapons from entering the United States while also facilitating the flow of legitimate trade and travel. FDA and CBP personnel operate in multiple venues, such as land border crossings and seaports. DEA’s multiple priorities include interdicting illicit drugs such as heroin or cocaine, investigating doctors and prescription forgers, and pursuing hijackings of drug shipments. DEA officials told us that they have limited resources and often have to balance efforts to address prescription drug importation with their other priorities. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required the HHS Secretary to conduct a study on the importation of drugs that included a review of the adequacy of federal agency resources to inspect and interdict drugs unapproved for import. The report, issued in 2004, states that substantial resources are needed to prevent the increasing volume of packages containing small quantities of drugs from entering the country. The Secretary found that despite agency efforts, including those with CBP, FDA currently does not have sufficient resources to ensure adequate inspection of the current volume of personal shipments of prescription drugs entering the United States. CBP is also in the early stages of assessing the resources it needs at the mail facilities to address the volume of controlled substance imports. 
However, CBP officials admit that an assessment of resource needs is difficult because they do not know the scope of the problem and the impact of the new procedures. A CBP official told us that CBP has a statistician working on developing estimates of the volume of drugs entering mail facilities; however, he was uncertain whether this effort would be successful or useful for allocating resources. Likewise, in March 2005, FDA officials told us that they had begun to gather from the field information on the imported packages the agency handles, such as the number of packages held, reviewed, and forwarded for further investigation. However, as of July 2005, they could not provide any data because, according to the officials, this effort was new and still being refined. Internet pharmacies, particularly foreign-based sites, which operate outside the U.S. regulatory system, pose a challenge for regulators and law enforcement agencies. In our 2004 report, we described how traditionally, in the United States, the practice of pharmacy is regulated by state boards of pharmacy, which license pharmacists and pharmacies and establish and enforce standards. To legally dispense a prescription drug, a licensed pharmacist working in a licensed pharmacy must be presented a valid prescription from a licensed health care professional. The requirement that drugs be prescribed and dispensed by licensed professionals helps ensure patients receive the proper dose, take the medication correctly, and are informed about warnings, side effects, and other important information about the drug. However, the Internet allows online pharmacies and physicians to anonymously reach across state and national borders to prescribe, sell, and dispense prescription drugs without complying with state requirements or federal regulations regarding imports. Recently, FDA officials have testified that inadequately regulated foreign Internet sites have become portals for unsafe and illegal prescription drugs. 
FDA officials state that if a consumer has an adverse drug reaction or other problem, he or she may have little to no recourse because the operator of the pharmacy is often not known and FDA has limited authority to take action against foreign operators. The nature of the Internet has challenged U.S. law enforcement agencies investigating Internet pharmacies, particularly foreign-based sites. Internet sites can easily be created, moved, or removed in a short period of time. FDA officials said that one Internet site can be composed of multiple related sites and links, thereby making their investigations complex and resource intensive. This fluidity makes it difficult for law enforcement agencies to identify, track, monitor, or shut down those sites that operate illegally. Further, FDA officials said that some Internet pharmacies do not disclose enough information on their Web sites to allow consumers to determine if the drugs they purchased were approved in the United States and dispensed according to state and federal laws. Some Internet pharmacies also do not disclose enough or accurate information regarding the source of the drugs they offer. An Internet pharmacy can claim that the drugs it offers originate in one country, but the drugs may actually be manufactured in another. Similarly, the anonymous nature of the Internet allows consumers of any age to obtain drugs without a legitimate medical need. According to FDA, when the Internet is used for an illegal sale of prescription drugs, the agency may need to work with the Department of Justice to establish grounds for a case, develop charges, and take action, just as it would if another sales medium, such as a store or magazine, had been used. Investigations can be more difficult when they involve foreign-based Internet sites, whose operators are outside of U.S. boundaries and may be in countries that have different drug approval and marketing approaches than the United States has. 
For example, according to DEA officials, drug laws and regulations regarding controlled substances vary widely by country. DEA officials told us their enforcement efforts with regard to imported controlled substances are hampered by the different drug laws in foreign countries. Internet pharmacy sites can be based in countries where the marketing and distribution of certain controlled substances are legal. For example, steroids sold over the Internet may be legal in the foreign country in which the online pharmacy is located. Federal agencies can face challenges when working with foreign governments to share information or develop mechanisms for cooperative law enforcement. For example, FDA officials have testified that they possess limited investigatory jurisdiction over sellers in foreign countries and have had difficulty enforcing the law prohibiting prescription drug importation when foreign sellers are involved. A DEA official told us that the agency introduced a resolution at the March 2004 International Narcotics Control Board conference in Vienna, Austria, to encourage member states to work cooperatively on Internet pharmacy issues. However, the DEA official told us that it was difficult to convince some foreign governments that the illegal sale of prescription drugs over the Internet is a global problem and not restricted to the United States. FDA and DEA officials told us that they work with commercial firms, including express carriers, credit card organizations, Internet providers, and online businesses to obtain information to investigate foreign pharmacies, but these investigations are complicated by legal and practical considerations. FDA and DEA officials said that the companies have been willing to work with government agencies to stop transactions involving prescription drugs prohibited from import, and some have alerted federal officials when suspicious activity is detected. 
However, officials also identified current legal and practical considerations that complicated obtaining information from organizations, such as credit card organizations. These considerations included privacy laws; federal law enforcement agencies’ respective subpoena authority, priorities, and jurisdictions; and the ease with which merchants engaged in illegal activity can enter into a new contract with a different bank to use the same payment system. For example, privacy laws sometimes limit the extent to which companies (e.g., credit card organizations) will provide information to federal agencies about parties to a transaction. According to FDA, DEA, and ICE officials, credit card organizations and banks and other financial institutions that issue credit cards will not provide to the agencies information about the parties involved in the transaction without a subpoena. Representatives from the credit card companies we contacted explained that these issues generally are resolved if the agency issues a properly authorized subpoena for the desired information. (See app. III for information on federal enforcement agencies’ work with credit card organizations to enforce prohibitions on prescription drug importation.) FDA headquarters officials said that packages that contain prescription drugs for personal use that appear to be prohibited from import pose a challenge to their enforcement efforts because these packages cannot be automatically refused. Before any imported item is refused, the current law requires FDA to notify the owner or consignee that the item has been held because it appears to be prohibited and give the product’s owner or consignee an opportunity to submit evidence of admissibility. If the recipient does not respond or does not present enough evidence to overcome the appearance of inadmissibility, then the item can be returned to the sender, or in some cases destroyed. 
FDA officials told us that this requirement applies to all drug imports that are held under section 801(a) of the Federal Food, Drug, and Cosmetic Act. Nonetheless, they said that they believe this notification process is time-consuming because each package must be itemized and entered into a database; a letter must be written to each addressee; and the product must be stored. The process can take up to 30 days per import and can hinder their ability to quickly process packages containing prescription drugs prohibited from import. According to FDA investigators, in most instances, the addressee does not present evidence to support the drugs’ admissibility, and the drugs are ultimately provided to CBP or the U.S. Postal Service for return to sender. FDA headquarters officials told us that the Standard Operating Procedures, introduced in August 2004 and discussed earlier in this report, were an attempt to help FDA address the burden associated with the notification process because they were designed to focus resources on packages containing drugs considered to be among the highest risk. FDA concerns about the notification process are not new. In testimony before Congress, FDA and the Secretary of HHS raised concerns about the notification process, noting that it is time-consuming and resource intensive. However, FDA’s testimony did not propose any legislative changes to address the concerns it identified. In May 2001, FDA’s Acting Principal Deputy Commissioner wrote a memorandum to the Secretary of HHS expressing concern about the growing number of drugs imported for personal use and the dangers they posed to public health. The memorandum explained that because of the notice and opportunity to respond requirements, detaining and refusing entry of mail parcels was resource intensive. 
The Acting Principal Deputy Commissioner proposed, among other things, the removal of the requirement that FDA issue a notice before it could refuse and return personal use quantities of FDA-regulated products that appear violative of the Food, Drug, and Cosmetic Act. He noted that removal of the notification requirement would likely require legislation, but without this change, FDA could not effectively prohibit mail importation for personal use. As of July 2005, according to FDA officials and an HHS official, the Secretary had not responded with a specific legislative proposal to change FDA’s notification requirement. FDA officials said that there are some complicating issues associated with eliminating the notification requirement. For example, they said that one of the arguments against eliminating the notification requirement is the importance of providing due process, which gives individuals the opportunity to present their case as to why they should be entitled to receive the property, in this case prescription drugs that they ordered from a foreign source. Another is to what extent the law should be changed to cover all imported prescription drugs and other products. In addition, USPS indicated that any discussion of options to expedite the processing and disposition of prescription drugs must consider international postal obligations, specifically the requirements of the Universal Postal Union (UPU). FDA officials said that currently, the notification requirement also applies to large commercial quantities of prescription drugs and other nonpharmaceutical products, for which the requirement is not a problem. They said it has become a burden only because FDA and CBP are overwhelmed with a large volume of small packages. FDA officials said that they have considered other options for dealing with this issue, such as summarily returning each package to the sender without going through the process. 
However, they said that the law would likely need to be changed to allow this, and, as with the current process, packages that are returned to the sender could, in turn, be sent back by the original sender to go through the process again. They said that another option might be destruction, but they were uncertain whether they had the authority to destroy drugs FDA intercepts; they indicated that the authority might more likely lie with CBP. Regardless, FDA officials said that whatever approach was adopted, FDA might continue to encounter a resource issue because field personnel would still need to open and examine packages to ascertain whether they contained unapproved prescription drugs. Federal agencies have been taking steps to address Internet sales of prescription drugs since 1999, but these efforts have not positioned them to successfully prevent the influx of prescription drugs that are being imported through foreign pharmacies. CBP has recently organized a task force to coordinate federal efforts related to prescription drugs imported for personal use. This task force appears to be a step in the right direction. However, its efforts could be further enhanced if the task force established a strategic framework to promote accountability and guide resource and policy decisions. In January 2004, CBP organized an interagency task force to address various issues associated with unapproved prescription drugs entering the United States from foreign countries. Although CBP, FDA, ONDCP, DEA, and ICE appear to be working together to address these very complex issues, their efforts could be enhanced by a strategic framework that guides resource and policy decisions and promotes accountability. 
Such a framework, establishing measurable, quantifiable goals and strategies for achieving them, including a determination of the resources needed, would enhance the ability of agency officials and congressional decision makers to ensure accountability and consistent and focused attention to enforcing the prohibitions on personal importation. Congress enacted the Government Performance and Results Act of 1993 to have agencies focus on the performance and results of programs, rather than on program resources and activities. The principles of the act include (1) establishing measurable goals and related measures, (2) developing strategies for achieving results, and (3) identifying the resources that will be required to achieve the goals. The act does not require agencies to use these principles for individual programs, but our work related to the act and the experience of leading organizations have shown that a strategic approach or framework is a starting point and basic underpinning for performance-based management—a means to strengthen program performance. A strategic framework can serve as a basis for guiding operations and help policy makers, including congressional decision makers and agency officials, make decisions about programs and activities. Our work has also shown that a strategic framework can be useful in providing accountability and guiding resource and policy decisions, particularly in relation to issues that are national in scope and cross agency jurisdictions, such as prescription drug importation. When multiple agencies are working to address aspects of the same problem, there is a risk that overlap and fragmentation among programs can waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. Use of a strategic framework may help mitigate this risk. 
Since 1999, federal law enforcement and regulatory agencies have organized various task forces and working groups to address issues associated with purchasing prescription drugs over the Internet; however, recent efforts have begun to focus particular attention on imported prescription drugs. For example, according to an FDA official, many of FDA’s efforts, started in 1999, focused on Internet pharmaceutical sales by illicit domestic pharmacies and the risks associated with purchasing those drugs, rather than drugs that are being imported from foreign countries. This official said that although FDA had established working groups and advanced media campaigns to address problems associated with drugs purchased over the Internet from domestic sources, imported drugs have added a new dimension that was only incidentally recognized during efforts begun in 1999. He said that the plans developed by FDA in 1999 are still viable as far as domestic sales are concerned, but they have not been refocused to reflect concerns about imported prescriptions and did not position federal law enforcement agencies to anticipate the increased volume of drugs that are imported by individuals. More recent efforts have focused on prescription drugs entering international mail and express carrier facilities. In January 2004, the CBP Commissioner initiated an interagency task force on pharmaceuticals, composed of representatives from CBP, FDA, DEA, ICE, and ONDCP as well as legal counsel from the Department of Justice. According to the Commissioner, the proposal to create the task force was prompted by “intense public debate and congressional scrutiny, which has resulted in increasing pressure being applied to regulatory and law enforcement agencies to develop consistent, fair policies” to address illegal pharmaceuticals entering the United States. 
The Commissioner proposed that the task force achieve five specific goals, and according to a CBP official, five working groups were established to achieve these goals. Figure 6 shows the task force goals, the five working groups, and the goals of each working group. A CBP official told us that the task force is designed to foster cooperation among the agencies responsible for enforcing the laws governing prescription drugs imported for personal use. The task force was created to go beyond interdiction at the mail and carrier facilities. The official also said that the task force was fashioned to deal with supply and demand issues, thereby reducing the volume of drugs entering these facilities. For example, on the demand side, the public awareness working group is responsible for conveying information about the health and safety risks of imported prescription drugs, and on the supply side, the working group on working cooperatively with industry is responsible for, among other things, developing ways of identifying rogue Internet sites. CBP officials and other members of the task force provided examples of activities being carried out or planned by task force working groups, which are discussed below. The working group on mail and express consignment operator facilities procedures has carried out special operations at five international mail and three express carrier facilities to examine parcels suspected of containing prohibited prescription drugs over specific periods of time, such as 2 or 3 days. While similar operations have occurred since 2000, a CBP official told us that those conducted under the task force are multiagency efforts. Among other things, task force members gather data about the source, type, and recipients of the drugs and test the contents of the parcels to determine whether they are counterfeit or otherwise prohibited. These operations are expected to continue during the remainder of 2005 at all of the remaining mail facilities and some of the carrier facilities. 
The working group on targeting/data research is analyzing data retrieved during the special operations to determine how these data can be used to guide future operations and enforcement efforts. Also, ICE was working with CBP and the government of an Asian country to identify and track controlled substances destined for the United States. ICE plans to use this approach to identify and take possible law enforcement action against illegal enterprises. The working group on increasing public awareness has been developing and disseminating public service announcements on the risks associated with purchasing drugs over the Internet. The working group has placed public service announcements on the FDA and CBP Web sites and is coordinating with FDA on its efforts, ongoing since 1999, to disseminate similar material in magazines, online, and in pharmacies. Also, the working group has entered into an agreement with a major Internet service provider and others to have a public service announcement link on screen when someone tries to access online pharmacy sites. The working group on working cooperatively with industry has met with Internet businesses, such as Internet service providers and companies that operate search engines, to discuss how task force members can work with Internet businesses to stem the flow of imported drugs coming into the country, including discussing standards for identifying legitimate Web sites. It has also met with representatives of express carriers and plans to meet with representatives of credit card organizations in late summer 2005. In addition, task force members are working with ONDCP to address the importation of controlled substances through international mail and carrier facilities. In October 2004, ONDCP issued a plan for addressing demand and trafficking issues associated with certain man-made controlled substances—such as pain relievers, tranquilizers, and sedatives. 
Among other things, ONDCP recommended that DEA, CBP, ICE, State Department, National Drug Intelligence Center, and FDA work with USPS and private express mail delivery services to target illegal mail-order sales of chemical precursors, synthetic drugs, and pharmaceuticals, both domestically and internationally. ONDCP officials said that a multiagency working group is meeting to discuss what can be done to confiscate these controlled substances before they enter the country. An ONDCP official said that participants at these meetings included officials from CBP, USPS, and DEA. Finally, USPS is exploring what additional steps it can take to further help the task force. Although USPS has participated in task force activities, USPS officials said USPS is concerned about a conflict between its mission to keep the mail moving and whether it is positioned to determine the admissibility of mail. USPS officials said that they proposed, during a July 2004 hearing, the possibility of cross-designating U.S. Postal Inspectors with Customs’ authority so that Postal Inspectors can conduct warrantless searches at the border of incoming parcels or letters suspected of containing illegal drugs. According to USPS officials, such authority would facilitate interagency investigations. They said that their proposal has yet to be finalized with CBP. In addition, internationally, USPS has drafted proposed changes to the U.S. listing in the Universal Postal Union List of Prohibited Articles. A U.S. Postal Service official told us that USPS is awaiting a response to a letter it sent to FDA last year requesting FDA’s views on the proposed changes. The official said that, without FDA input, USPS does not have the expertise to determine whether the proposed changes are accurate. In August 2005, FDA officials said that after receiving the letter last year, they met with USPS officials regarding drug importation, including this proposal. 
However, according to FDA officials, USPS had not subsequently engaged FDA on this particular issue, and FDA did not believe USPS was awaiting a formal written response. FDA officials stated that if USPS would like to discuss this matter further, they would be happy to work with USPS. Although the task force has taken positive steps toward addressing issues associated with enforcing the laws on personal imports, it has not fully developed a strategic framework that would allow the task force to address many of the challenges we identify in this report. Carrying out enforcement efforts that involve multiple agencies with varying jurisdictions is not an easy task, especially since agencies have limited resources and often conflicting priorities. The challenges identified in this report could be more effectively addressed by using a strategic framework that more clearly defines the scope of the problem by estimating the volume of drugs entering international mail and carrier facilities, establishes milestones and performance measures, determines resources and investments needed to address the flow of imported drugs entering the facilities and where those resources and investments should be targeted, and evaluates progress. Our review showed that the task force has already begun to establish some elements of a strategic framework, but not others. For example: In light of the Commissioner’s January 2004 memo discussed earlier, the task force has a clear picture about its purpose and why it was created. However, it has not defined the scope of the problem it is trying to address because, as discussed earlier, CBP and FDA have yet to develop a way to estimate the volume of imported prescription drugs entering specific international mail and carrier facilities. Without doing so, it is difficult to assess what resources are necessary to effectively inspect parcels and interdict those that contain unapproved drugs. 
Whereas the task force and individual working groups have goals that state what they are trying to achieve, the task force has not established milestones and performance measures to gauge results. A CBP official said that the goals are intended to be guidelines rather than goals to be measured; he would expect progress or results to be measured within the context of strategic plans prepared by individual agencies. However, without task force-specific milestones and performance measures, it is difficult to measure improvement over time and ensure accountability, particularly if the goals and measures of individual task force members do not directly address, or are not in harmony with, the goals of the task force. The task force has not addressed the issue of what its efforts will cost so that it can target resources and investments, balancing risk reduction with costs and considering task force members’ other law enforcement priorities. Instead, according to a CBP official, working group projects are done on an ad hoc basis wherein resources are designated for specific operations. Nonetheless, the absence of cost and resource assessments makes effective implementation harder to achieve because over time, alternative agency priorities and resource constraints may hinder the ability of the task force to meet its goals. We acknowledge that such a strategic framework needs to be flexible to allow for changing conditions, but it could be helpful to organize it in a logical flow, from conception to implementation. Specifically, the strategy’s purpose leads to definition of the problems and risks it intends to address, which in turn leads to specific actions for tackling those problems and risks, allocating and managing appropriate resources, identifying different organizations’ roles and responsibilities, and finally integrating action among all the relevant parties and implementing the strategy. 
Advancing a strategic framework could establish a mechanism for accountability and oversight and set the stage for defining specific activities needed to achieve results and specific performance measures for monitoring and reporting on progress. In so doing, task force officials could measure progress over time, identify new and emerging barriers or obstacles to carrying out goals and objectives, develop strategies to overcome them, and inform decision makers about the implications of taking or not taking specific actions. For example, CBP, FDA, and the other agencies could work jointly to develop statistically valid estimates of the number of parcels suspected of containing imported prescription drugs entering particular facilities and begin to develop realistic risk-based estimates of the number of CBP and FDA staff needed to interdict parcels at mail facilities. Task force members could also take steps to explore how they can work more collaboratively and strategically with private organizations, such as credit card organizations and express carriers. In doing so, task force members and representatives of these organizations could examine what can be done within the context of current law and establish strategies and goals for overcoming any practical considerations that act as barriers to enforcing the prohibition on imported pharmaceuticals, including controlled substances. They could also identify any legislative barriers they face in aggressively enforcing the prohibition and work together to develop legislative proposals aimed at stemming the flow of imported prescription drugs into the country. In addition, agencies could work collaboratively among themselves to examine the resources and investments needed to address particular strategies. 
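To illustrate the kind of "statistically valid estimate" described above, the sketch below projects annual parcel volume at a single facility from counts taken on a sample of inspection days, with a normal-approximation confidence interval. This is a minimal illustration only: the daily counts, the 365-day operating assumption, and the facility itself are invented for the example, not figures from this report.

```python
# Hypothetical sketch: project annual suspect-parcel volume at one mail
# facility from counts taken on a random sample of inspection ("blitz")
# days. All figures below are invented for illustration.
from statistics import mean, stdev
from math import sqrt

def estimate_annual_volume(daily_counts, operating_days=365):
    """Point estimate and ~95% confidence interval for annual volume,
    using a normal approximation on the sampled daily counts."""
    n = len(daily_counts)
    m = mean(daily_counts)
    se = stdev(daily_counts) / sqrt(n)       # standard error of the mean
    point = m * operating_days
    margin = 1.96 * se * operating_days      # ~95% normal-approx margin
    return point, (point - margin, point + margin)

# Invented counts of suspect parcels set aside on 8 sampled blitz days:
sample = [120, 95, 140, 110, 130, 105, 125, 115]
point, (lo, hi) = estimate_annual_volume(sample)
print(f"Estimated annual volume: {point:,.0f} (95% CI {lo:,.0f}-{hi:,.0f})")
```

An estimate of this form could then feed a risk-based staffing calculation, since the interval width conveys how much the projected workload could plausibly vary.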
Any effort to implement task force objectives would require sustained high-level leadership and commitment to ensure that resources are available to carry out task force goals, commensurate with the goals and priorities of the individual agencies involved with the task force. According to a CBP official involved in the task force, agencies have made a high-level commitment to supporting the task force. Nonetheless, in the absence of a strategic framework and, in particular, measurable goals and milestones, there is little assurance that this commitment will continue as the goals and priorities of individual agencies change. A strategic framework could also enable the task force to adjust to changing conditions. As mentioned earlier, FDA had developed plans and initiated steps in 1999 to deal with Internet sales of prescription drugs, but most of those efforts focused on domestic sales. However, plans to address Internet sales had not been refocused to reflect prescriptions imported from foreign countries for personal use, partly because FDA and other agencies did not anticipate that the volume of imported drugs would overwhelm available resources. A strategic framework, with ongoing problem definition and risk assessment, might help task force members, including FDA, to identify the impact of this emerging threat and give them the opportunity to adjust their enforcement strategies to address the threat on a proactive, rather than a reactive, basis. It also might help them consider interrelationships between the enforcement strategies and priorities of the task force and their own strategies and priorities. Furthermore, a strategic framework could help agencies adjust to potential changes in the law governing the importation of prescription drugs for personal use. During recent sessions of Congress, members introduced a number of bills that could have changed how personal prescription drug imports were treated under the law. 
Some proposals would have allowed importation of selected prescription drugs under certain conditions, for example, allowing importation from certain countries, such as Canada. Another proposal would have maintained the current prohibitions, but would have allowed for expedited disposal of illegally imported prescription drugs, such as controlled substances available by prescription. Those bills that would have allowed some personal importation also included provisions for expediting the process of disposing of those drugs that still may not be imported for personal use. Although none of these changes were adopted, continued congressional interest could prompt changes in the future. If that occurred, a strategic framework could better position agencies to adjust to any changes; identify any new threats or vulnerabilities; and redefine strategies, roles, and responsibilities. Enforcing the laws governing prescription drug imports for personal use is a complex undertaking that involves multiple agencies with various jurisdictions and differing priorities. We acknowledge these complexities, but current inspection and interdiction efforts at the international mail branches and express carrier facilities have not stemmed the reportedly substantial and growing volume of prescription drugs illegally imported from foreign Internet pharmacies into the United States. CBP and other agencies have taken a step in the right direction by establishing a task force designed to address many of the challenges discussed in this report. Although agencies responsible for enforcing these laws have a mechanism in place to jointly address the threat posed by prohibited and sometimes addictive drugs entering the country via the international mail and express carriers, many packages that may contain these drugs enter the United States daily. Furthermore, according to officials, resources are strained as the volume of prescription drugs entering the country is large and increasing. 
Our past work has shown how a strategic framework can be useful in promoting accountability and guiding policy and resource decisions. In the case of the task force, a strategic framework that facilitates comprehensive enforcement of prescription drug importation laws and measures results would provide it an opportunity to better focus agency efforts to stem the flow of prohibited prescription drugs entering the United States. The task force could become more effective as it becomes more accountable. An assessment of the scope of the problem would help the task force prioritize activities and help ensure that resources are focused on the areas of greatest need. With milestones and performance measures, it would be better able to monitor progress and assess efforts to enforce the laws. An analysis of resources and investments is critical because of current resource constraints, a point highlighted by the Secretary of Health and Human Services’ report under the Medicare Modernization Act. Moreover, without these elements culminating in concrete plans for implementation, it will be difficult for the task force to maximize effectiveness in reducing the flow of prohibited imported prescription drugs into the United States. In addition to the broader issues being addressed by the task force, FDA has said it faces a significant challenge handling the substantial volume of prescription drugs imported for personal use entering international mail facilities. Specifically, in recent years, FDA has expressed continuing concern to Congress that it encounters serious resource constraints enforcing the law at mail facilities because packages containing personal drug imports cannot automatically be refused. Instead, under current law, FDA is to notify recipients that it is holding packages containing drugs that appear to be prohibited from import and give them the opportunity to provide evidence of admissibility. 
FDA has stated that it cannot effectively enforce the law unless the requirement to notify recipients is changed. FDA has suggested that the HHS Secretary consider proposing changes to this requirement, but the HHS Secretary has not yet responded with a legislative proposal. Although there may be complex issues associated with changing the requirement to notify, including an individual’s due process right to provide evidence of admissibility and consideration of Universal Postal Union requirements, assessing the ramifications of such a proposal would help decision makers as they consider how best to address FDA’s resource constraints and responsibility to enforce the law and protect the health and safety of the American public. To help ensure that the government maximizes its ability to enforce laws governing the personal importation of prescription drugs, we recommend that the CBP Commissioner, in concert with ICE, FDA, DEA, ONDCP, and USPS, develop and implement a strategic framework for the task force that would promote accountability and guide resource and policy decisions. At a minimum, this strategic framework should include establishment of an approach for estimating the scope of the problem, such as the volume of drugs entering the country through mail and carrier facilities; establishment of objectives, milestones, and performance measures and a methodology to gauge results; determination of the resources and investments needed to address the flow of prescription drugs illegally imported for personal use and where resources and investments should be targeted; and an evaluation component to assess progress, identify barriers to achieving goals, and suggest modifications. 
In view of the FDA’s continuing concern about the statutory notification requirement and its impact on enforcement, we also recommend that the Secretary of HHS assess the ramifications of removing or modifying the requirement, report on the results of this assessment, and, if appropriate, recommend changes to Congress. We requested comments on a draft of this report from the Secretary of Homeland Security, Attorney General, Director of the Office of National Drug Control Policy, Secretary of Health and Human Services, and Postmaster General. DHS, DEA, ONDCP, HHS, and USPS provided written comments, which are summarized below and included in their entirety in appendixes IV through VIII. DHS generally agreed with the contents of our report. Since our recommendation that the CBP-led task force develop and implement a strategic framework to address prescription drug importation issues affects other agencies, DHS said that CBP would convene a task force meeting to discuss our report and recommendation and is to provide us with additional information after the meeting. Responding for DOJ, DEA generally agreed with our recommendation that the CBP task force develop and implement a strategic framework. Specifically, DEA agreed that a strategic framework can be useful in promoting accountability and guiding policy and resource decisions, but it said that the interagency task force is a cooperative initiative and DEA must balance priorities in accordance with agency mandates. DEA also said that its strategic plan clearly establishes a framework to articulate agency priorities and assess its performance. Noting that our report acknowledges that such a framework needs to be flexible to allow for changing conditions, DEA stated that, in concert with other task force agencies, it will support the CBP Commissioner’s strategic framework for the interagency task force. 
ONDCP generally concurred with our recommendation that the CBP-led task force develop and implement a strategic framework. ONDCP also “strongly” suggested that the ONDCP-led Synthetic Drug Interagency Working Group play a significant role in integrating prescription drug considerations with all of the other synthetic drug concerns that potentially inflict harm on our society. ONDCP noted that our report documented well the problems associated with effectively policing Internet purchases and identified the significant role that credit card use plays in facilitating the problem. In addition, ONDCP stated that it encouraged law enforcement proposals that may curtail some of these dangerous practices and concurred with our identification of the cumbersome nature of currently required enforcement practices dealing with the use of the mails to transfer illicit narcotics. HHS generally concurred with both recommendations. With regard to the strategic framework, HHS said that it would work with its federal partners to discuss the development of a more formalized approach for addressing the issues associated with the importation of unapproved drugs. However, HHS questioned whether the framework should include an approach for developing more reliable volume estimates, because HHS believes the volume estimates already provided in HHS’s December 2004 report on drug importation are valid. HHS said that volume may depend on the incentive for the public to import unapproved drugs, as well as other external factors, and said that, short of opening and counting each package as it enters the United States, the reliability of estimates would always be in question given the fluid nature of unapproved prescription drug imports and the number of mail and courier facilities involved. 
HHS also stated that volume estimates would not alter the resource calculations articulated in HHS’s December 2004 report, which, according to HHS, were derived from special operations, called blitzes, by CBP and FDA at various international mail facilities. According to HHS, these calculations were based on personnel time and salaries needed to process each package. HHS further noted that our statement that the task force agencies could develop statistically valid volume estimates and realistic risk-based estimates of the number of staff needed to interdict parcels at mail facilities did not recognize that FDA is not always able to process the current number of packages set aside by CBP. In addition, HHS said that FDA must always be cognizant of competing priorities regardless of fluctuations in the volume of illegally imported prescription drugs. We recognize that any number of factors can influence the volume of unapproved drugs entering the country at any point in time or location. However, HHS’s current estimates are based on estimates of drugs imported from Canada during 2003 and, in part, on extrapolations from FDA’s limited observations during special operations at international mail branch facilities. We believe a more reliable and systematic approach might begin by using information already being collected by CBP and FDA at the various field locations, including the number of packages deemed abandoned by CBP and the number of imported packages FDA handles. With regard to resource calculations, as more reliable estimates are developed, FDA and other task force agencies would be better positioned to define the scope of the problem so that the task force and other decision makers can make informed choices about resources devoted to this problem, especially in light of competing priorities. Regarding our recommendation that the HHS Secretary assess FDA’s statutorily required notification process, HHS said that it intends to pursue an updated assessment. 
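The personnel-time-and-salary calculation described above reduces to simple arithmetic: projected package volume times handling time per package yields total workload hours, which divide into full-time-equivalent staff. The sketch below illustrates that logic; every input (volume, minutes per package, productive hours per FTE, salary) is an assumption chosen for the example, not a figure from the HHS report.

```python
# Hedged sketch of a per-package resource calculation. All inputs are
# illustrative assumptions, not figures from the HHS report.
def staff_needed(annual_packages, minutes_per_package, hours_per_fte=1700):
    """Full-time-equivalent (FTE) staff needed to process a projected
    annual package volume at a given average handling time per package."""
    total_hours = annual_packages * minutes_per_package / 60
    return total_hours / hours_per_fte

def annual_cost(ftes, salary_per_fte=60_000):
    """Rough salary cost of the required staffing level."""
    return ftes * salary_per_fte

# Assumed: 40,000 suspect packages a year, 15 minutes of combined
# CBP/FDA handling each, ~1,700 productive hours per FTE per year.
ftes = staff_needed(40_000, 15)
print(f"Approximate FTEs required: {ftes:.1f}")
print(f"Approximate annual salary cost: ${annual_cost(ftes):,.0f}")
```

Because the staffing figure scales linearly with volume, a more reliable volume estimate directly sharpens the resource estimate, which is why the two elements of the framework are linked.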
HHS observed that, given the increased volume of illegally imported prescription drugs since its initial request for modification of FDA’s notification process, other actions might be needed, and HHS would work with its federal partners to determine the actions required. HHS also provided technical comments that have been included, as appropriate. USPS did not state whether it agreed or disagreed with our recommendations but expressed a concern about possible procedural and legislative changes to the current notification requirements governing the processing and disposition of imported pharmaceuticals. Specifically, USPS requested that the report acknowledge the United States’ international postal obligations and stated that any discussion of options to expedite the processing and disposition of prescription drugs should consider these obligations. USPS further noted that recognizing these obligations is particularly important with respect to registered or insured mail for which the Postal Service can be held financially responsible if it is not delivered or returned. We acknowledge USPS’s concerns and have added language to the report accordingly. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of the Department of Homeland Security, the Secretary of Health and Human Services, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IX. 
This report addresses the following questions: (1) What do available data show about the volume and safety of prescription drugs imported into the United States for personal use through the international mail and private carriers? (2) What procedures and practices are used at selected facilities to inspect and interdict prescription drugs unapproved for import? (3) What factors affect federal agency efforts to enforce the prohibition on prescription drug importation for personal use through international mail and carrier facilities? (4) What efforts have federal agencies undertaken to coordinate the enforcement of the prohibitions on personal importation of prescription drugs? We performed our work at the Department of Homeland Security’s U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE), the Department of Health and Human Services’ (HHS) Food and Drug Administration (FDA), the Department of Justice’s Drug Enforcement Administration (DEA), the U.S. Postal Service (USPS), and the Office of National Drug Control Policy (ONDCP). We also carried out work at 3 of the 14 international mail facilities—New York, Los Angeles, and Chicago—and 2 of the 29 carrier facilities—Cincinnati (DHL Corporation) and Memphis (FedEx Corporation). We selected the New York and Los Angeles mail facilities because they (1) were among the facilities processing the highest overall number of packages, representing 27 percent of the total number of estimated packages going through international mail facilities in 2002, and (2) also received prescription drugs. The Chicago facility was selected because it received prescription drugs and provided geographic dispersion. The 2 carrier facilities were selected because they (1) were operated by different companies; (2) handled the highest overall number of packages, according to data provided by CBP; and (3) were located near each other. 
At each of these locations, we collected and reviewed available relevant importation and interdiction data from FDA and CBP; observed inspection and interdiction practices; met with CBP and FDA management, inspectors, and investigators to discuss issues related to inspection and pharmaceutical importation volume; and reviewed relevant documents on inspection and interdiction procedures. At the international mail facilities, we also met with USPS officials to discuss mail handling and processing procedures. The information from our site visits is limited to the 3 international mail facilities and 2 carrier facilities and is not generalizable to the remaining 10 international mail facilities and 27 carrier facilities. To determine what the available data show about the volume and safety of imported prescription drugs, we interviewed CBP, FDA, DEA, ICE, and USPS headquarters officials and CBP and FDA officials at the 3 international mail facilities and 2 carrier facilities. We obtained and analyzed available data on the volume and safety of imported prescription drugs (1) collected from the facilities we visited and (2) gathered through multiagency special operations at selected mail facilities and provided to us by CBP headquarters. The available CBP and FDA information on the volume and safety of prescription drugs imported through the mail and carrier facilities we visited was primarily based on estimates and limited to observations at these locations. To obtain additional views on the overall volume or safety of imported prescription drugs, we reviewed ONDCP and HHS reports and testimony from the American Medical Association. We discussed with FDA officials the methodology used to develop the volume estimates presented in the 2004 HHS report on prescription drug importation and we reviewed the methodology to determine any limitations. 
In addition, we interviewed an official and reviewed documents from the National Association of Boards of Pharmacy to obtain the association’s findings on the safety of prescription drugs imported from foreign-based Internet pharmacies. We also relied on existing GAO work on the safety of prescription drugs imported from some foreign-based Internet pharmacies. To understand procedures and practices, we reviewed current federal law and CBP and FDA policies, procedures, and guidance regarding or applicable to prescription drug and controlled substance importation. We interviewed officials at CBP, FDA, DEA, ICE, and USPS headquarters. To understand inspection procedures and practices, we carried out site visits at each of the 3 international mail facilities and 2 carrier facilities, observing the inspection process and interviewing CBP and FDA officials. At the selected international mail facilities, we also interviewed USPS officials to obtain information about their procedures and practices. In addition, when FDA and CBP implemented new procedures at the international mail facilities and carrier facilities, we carried out additional interviews at FDA and CBP headquarters, pursued telephone interviews with CBP and FDA officials at the facilities we had visited, and revisited 2 of the mail facilities to determine how the new procedures were being implemented, how they were working in practice, and how they were being monitored and evaluated. We also obtained from FDA fiscal year data on the number of mail packages containing prescription drugs it processed. From CBP we obtained data on the number of packages interdicted using its new procedures for processing schedule III through V controlled substances. Because these data were used for contextual purposes, we did not assess their reliability. However, we discussed the scope of the FDA and CBP data with the respective agency officials and have noted the limitations in the report. 
To determine what factors affect federal agency efforts to enforce the prohibitions on prescription drug importation for personal use through international mail and carrier facilities, we interviewed CBP, FDA, DEA, ICE, and USPS officials. We asked these officials to identify any factors that affected their respective agency’s efforts to process or interdict prescription drugs imported through the mail and carriers. The information presented in this report is limited to the views expressed by the officials interviewed. In addition, we met with representatives from MasterCard International and Visa U.S.A., Inc., the two credit card associations identified by DEA as the organizations used by the majority of Internet drug sites. These associations also testified in July 2004 at congressional hearings on matters related to the illegal importation of prescription drugs. We discussed with them each association’s efforts to assist federal enforcement of the prohibitions on prescription drug importation. To determine what efforts federal agencies have undertaken to coordinate the enforcement of the prohibitions on personal importation of prescription drugs, we interviewed CBP, USPS, FDA, DEA, ICE, and ONDCP headquarters officials. We obtained and reviewed documents describing these initiatives, their status, and any studies or data describing the results of the initiatives. These documents included agency guidelines and memorandums, indicating changes to agency policies and procedures; congressional hearings; and selected legislative proposals. We obtained these documents from agency officials; agency Web sites, as directed by agency officials; and congressional Web sites. We also interviewed CBP and FDA field officials at the selected international mail facilities and private carrier facilities to ascertain the status of the implementation of these initiatives. We analyzed and synthesized the information gathered from the interviews and documents. 
In addition, in appendix III of this report, we used data from FDA on the number of open and closed investigations it had undertaken related to Internet drug sales and imported prescription drugs. We also used data from DEA on the number of arrests related to the illegal diversion of pharmaceuticals. Because these data were used for contextual purposes, we did not assess their reliability. We conducted our review between April 2004 and August 2005 in accordance with generally accepted government auditing standards. The drugs and drug products that come under the Controlled Substances Act are divided into five schedules. A general description and examples of the substances in each schedule are outlined below. During congressional hearings in July 2004, representatives from MasterCard International and Visa U.S.A., Inc., testified on issues concerning the use of credit cards to purchase prescription drugs for importation from Internet pharmacies, including discussions with federal law enforcement agencies to address these issues. Accordingly, we met with Drug Enforcement Administration (DEA), Food and Drug Administration (FDA), and Immigration and Customs Enforcement (ICE) officials, as well as representatives from MasterCard International and Visa U.S.A., Inc. to more fully understand how these organizations are working together to address prohibitions on prescription drug importation. The agency officials and credit card association representatives described their working relationship as cooperative, but complicated by legal and practical considerations. The following section summarizes our discussions. According to FDA, DEA, and ICE officials, their agencies have worked with credit card organizations to obtain information to investigate the importation of prescription drugs purchased with a credit card from Internet pharmacies, but these investigations were complicated by legal and practical considerations. 
Such considerations included privacy laws; federal law enforcement agencies’ respective subpoena authority, priorities, and jurisdictions; and the ease with which merchants engaged in illegal activity can enter into a new contract with a different bank to use the same payment system. In addition, according to the two credit card associations we contacted, their respective associations have also undertaken searches of the Internet for Web sites that appeared to be selling problematic materials and accepting their respective payment cards, but these investigations can also be complicated by legal considerations. Privacy laws can sometimes limit the extent to which companies, including credit card organizations, will provide information to federal law enforcement agencies about parties to a transaction. FDA and DEA officials told us that credit card organizations and/or banks and other financial institutions, when they have the direct contractual relationship with the merchants, have provided to the agencies information regarding transactions involving prescription drugs prohibited from import and have alerted federal officials when suspicious activity was detected. However, they said that the companies do not provide information about the parties involved in the transaction without a subpoena. Representatives from the two associations with whom we met explained that law enforcement usually needs to issue a subpoena because of company concerns about possible legal action by the subject of the investigation (for example, if the subject asserted that information was provided by the association or bank to law enforcement in violation of federal privacy laws). They further noted, however, that their respective associations would provide law enforcement information without a subpoena, when properly requested under certain circumstances, including matters of national security or when a human life was in immediate jeopardy. 
DEA, ICE, and FDA officials confirmed that they are able to obtain information from credit card companies and/or banks and other financial institutions through subpoenas, although the agencies have different subpoena authority with regard to entities, such as banks and credit card companies. DEA and ICE have the authority to subpoena information directly from such entities, but FDA must ask a U.S. Attorney to obtain a grand jury subpoena requesting the information. DEA and ICE may also use grand jury subpoenas. For example, DEA officials told us that usually they are able to obtain needed information using administrative subpoenas; however, they may use a grand jury subpoena if a company will not provide the requested information or a U.S. Attorney prefers that approach. DEA, FDA, and ICE could not readily provide data on the number of subpoenas served because (1) data on DEA and ICE administrative subpoenas were maintained at the field office requesting the subpoena and were not organized according to payment method and (2) none of the agencies could share grand jury information. Agencies’ priorities also affect their ability to conduct investigations of credit card purchases of prescription drugs for importation. According to FDA, DEA, and ICE officials, their investigations, including those involving imported prescription drugs, focused on commercial quantities, rather than quantities to be consumed for personal use. DEA officials also said that DEA seeks to dismantle major drug supply and money laundering organizations; therefore, its investigations of prescription drug violations focused on the suppliers of Internet pharmacies, not individual consumers. DEA reported no active cases on individuals who were illegally importing controlled substance pharmaceuticals over the Internet for personal consumption. FDA, DEA, and ICE officials said that investigations involving smaller quantities may be handled by state and local law enforcement. 
In addition to the quantity of drugs being imported, federal enforcement agencies consider jurisdiction when determining whether to pursue an investigation, including investigations of Internet pharmacies using credit card payment systems that cross U.S. borders. For a federal enforcement agency to determine whether it has jurisdiction to investigate potential illegal activity outside the United States, it generally needs to consider whether (1) the federal statute or statutes violated apply to activity outside the country and (2) there is sufficient evidence of an intent to produce effects in the United States or some other connection to the United States, such as a U.S. distributor. Pursuit of investigations of Internet pharmacies using credit card payment systems presents both jurisdictional and practical limitations, when some or all of the operations (e.g., pharmacies, Web sites, and bank accounts) are located in foreign countries and there is no U.S. distributor. According to FDA officials, in cases that FDA does not have jurisdiction to pursue, it may ask its foreign counterparts for assistance. ICE officials told us that they focused on transporters of commercial quantities across U.S. borders from a foreign country into the United States. By contrast, DEA enforces a statute that specifically applies to manufacturers or distributors of certain prescription drugs who are located in foreign countries. Specifically, DEA has jurisdiction over a manufacturer or distributor of schedule II controlled substances in a foreign country who knows or intends that such substances will be unlawfully imported into the United States. However, the relevant statute does not apply to prescription drugs that are schedules III through V controlled substances. Therefore, according to a DEA official, to pursue such investigations, DEA has to devise other ways to reach those operating outside the United States. 
A DEA official said that another practical consideration affecting investigations of credit card purchases of imported prescription drugs was the ease with which merchants engaged in illegal activities were able to open new merchant credit card accounts. Credit card association representatives confirmed that the reappearance of the same violators using a different name or bank, or even disguising the illegal activity as a different and legal activity, can be a problem. They said that unlike law enforcement, credit card organizations do not have the authority to arrest the violators, and some of the merchants engaged in such illegal activities are skilled at moving from bank to bank and masking their illegal activities. In addition to investigations by federal law enforcement agencies, each of the credit card associations we contacted had also undertaken searches of the Internet for Web sites that appeared to be selling problematic materials and using its payment cards. One association used a vendor to carry out the searches and then provided the information to its member banks regarding their merchants who appear to have been involved in selling controlled substances. The other association’s security personnel conducted the Internet search, identified the sites, and then attempted to contact the member bank that had contracted with the merchant. Representatives of the latter association told us that as a result of this effort, at the association’s request, contracts with approximately 500 merchants had been terminated by the member banks that had authorized the particular merchants to accept the association’s credit card. Representatives from both associations agreed that federal law enforcement agencies were in the best position to enforce the prohibition on prescription drug importation, because they have arrest authority and can remove the violators. 
However, these representatives had differing opinions concerning the desirability of their taking any additional enforcement steps in this area. Representatives of one association told us they did not want the authority to make purchases to confirm that illegal transactions were occurring. They said once their investigators identified a site willing to sell drugs, they contacted the bank that authorized the merchant’s account so that the bank could take appropriate action. Further, they told us that the association was not set up to make such purchases safely and its mail room was not structured to take delivery. Representatives of the other association told us that their association would like the authority to make such purchases, noting that their investigations were complicated by the inability of the association’s security personnel to purchase controlled substances. However, these representatives told us that, if they were allowed to make such transactions, they would expect to turn over the controlled substances to federal law enforcement immediately upon receipt. A DEA official told us that currently credit card organizations are not exempt from the general prohibition against possessing controlled substances, and therefore it is illegal for them to purchase controlled substances from an Internet pharmacy to show that the pharmacy is acting illegally. He also said that even if the law were changed to allow such transactions, executing them could be unmanageable, because the companies would have to comply with federal regulations for handling and storing controlled substances. For example, federal regulations require that controlled substances be stored in a safe, vault, steel cabinet, or cage. The regulations also specify the methods and materials to be used to construct the storage facility, as well as the type of security system (alarms, locks, and anti-radiation devices) required to prevent entry. 
Even if a credit card company planned to turn over purchased controlled substances to federal law enforcement upon receipt, it would need to have a facility as prescribed by federal regulations to hold and store the substances until a DEA agent could take possession of them. Federal enforcement agencies and credit card organizations have had periodic discussions about credit card enforcement issues involving purchases of prescription drugs for importation from Internet pharmacies. In addition, the associations told us that they had provided information about this issue to banks and other financial institutions. According to FDA and DEA officials and representatives of the two credit card associations we contacted, meetings have been held periodically, either between individual agencies (e.g., DEA and FDA) and the associations or as part of the Customs and Border Protection (CBP) Interagency Task Force (discussed earlier in this report), with representatives of one or more companies present. Association representatives told us that they believed that the meetings, which began in late 2003, have provided an educational opportunity for both the credit card companies and the federal law enforcement agencies. For example, the representatives of one association said that during the meetings they had described how the association’s payment system operated, explaining (1) the relationship among the association, the banks and other financial institutions, merchants, and cardholders, and (2) which entities maintained the transactional information needed by law enforcement for investigations of Internet pharmacies. They said that DEA and FDA had explained federal laws related to the importation of prescription drugs, both controlled and noncontrolled substances. Representatives of the other association said that the meetings helped to educate its officials about issues, concerns, and risks related to the illegal importation of prescription drugs. 
In addition, agency officials and association representatives said that they had discussed the role credit card organizations can play with regard to illegal importation. No minutes of these meetings are maintained. According to association representatives, information obtained at these meetings was disseminated to the banks and other financial institutions through bulletins. Through association bulletins, both credit card associations provided to banks and other financial institutions information concerning the illegal importation of prescription drugs. The bulletins reminded the recipients of their obligation to ensure that the credit card system was not to be used for illegal activity, alerted them to the risk of illegal activity involving transactions for prescription medications purchased over the Internet, and underscored the need for due diligence to ensure that merchants were not engaged in illegal activities. One association also issued a press release that, according to the association’s representatives, was to communicate to the public information similar to that which had been sent to the banks. FDA and DEA officials and association representatives said that the dialogue was continuing and described the relationship between the agencies and associations as good. A meeting between credit card organizations and the CBP task force is to be held in late summer 2005. Moreover, they noted that informal contacts between the agencies and the credit card organizations occurred, as needed, on specific matters related to prescription drug importation. However, agency officials confirmed that they had no plan or written strategy for dealing with credit card organizations related to the illegal importation of prescription drugs purchased with a credit card. In addition to the above, John F. Mortin, Assistant Director; Leo M. Barbour; Frances A. Cook; Katherine M. Davis; Michele C. Fejfar; Yelena T. Harden; James R. Russell; and Barbara A. 
Stolz made key contributions to this report.
Consumers can be violating the law and possibly risking their health by purchasing imported prescription drugs over the Internet. U.S. Customs and Border Protection (CBP), in the Department of Homeland Security (DHS), and the Food and Drug Administration (FDA), in the Department of Health and Human Services (HHS), work with other federal agencies at international mail and express carrier facilities to inspect for and interdict prescription drugs illegally imported for personal use. This report addresses (1) available data about the volume and safety of personal prescription drug imports, (2) the procedures and practices used to inspect and interdict prescription drugs unapproved for import, (3) factors affecting federal efforts to enforce the laws governing prescription drugs imported for personal use, and (4) efforts federal agencies have taken to coordinate enforcement efforts. The information currently available on the safety of illegally imported prescription drugs is very limited, and neither CBP nor FDA systematically collects data on the volume of these imports. Nevertheless, on the basis of their own observations and limited information they collected at some mail and carrier facilities, both CBP and FDA officials said that the volume of prescription drugs imported into the United States is substantial and increasing. FDA officials said that they cannot assure the public of the safety of drugs purchased from foreign sources outside the U.S. regulatory system. FDA has issued new procedures to standardize practices for selecting packages for inspection and making admissibility determinations. While these procedures may encourage uniform practices across mail facilities, packages containing prescription drugs continue to be released to the addressees. CBP has also implemented new procedures to interdict and destroy certain imported controlled substances, such as Valium. 
CBP officials said the new process is designed to improve their ability to quickly handle packages containing these drugs, but they did not know if the policy had affected overall volume because packages may not always be detected. We identified three factors that have complicated federal enforcement of laws prohibiting the personal importation of prescription drugs. First, the volume of imported drugs has strained limited federal resources at the mail facilities. Second, Internet pharmacies can operate outside the U.S. regulatory system and evade federal law enforcement actions. Third, current law requires FDA to give addressees of packages containing unapproved imported drugs notice and the opportunity to provide evidence of the admissibility of their imported items. FDA and HHS have testified before Congress that this process placed a burden on limited resources. In May 2001, FDA proposed to the HHS Secretary that this legal requirement be eliminated, but according to FDA and HHS officials, as of July 2005, the Secretary had not responded to the proposal. FDA officials stated that any legislative change might require consideration of such issues as whether to forgo an individual's opportunity to provide evidence of the admissibility of the drug ordered. Prior federal task forces and working groups had taken steps to deal with Internet sales of prescription drugs since 1999, but these efforts did not position federal agencies to successfully address the influx of these drugs imported from foreign sources. Recently, CBP has organized a task force to coordinate federal agencies' activities to enforce the laws prohibiting the personal importation of prescription drugs. The task force's efforts appear to be steps in the right direction, but they could be enhanced by establishing a strategic framework to define the scope of the problem at mail and carrier facilities, determine resource needs, establish performance measures, and evaluate progress. 
Absent this framework, it will be difficult to oversee task force efforts; hold agencies accountable; and ensure ongoing, focused attention to the enforcement of the relevant laws.
To trace the information flow and document key data systems used to procure, control, and pay for JSLIST, we reviewed and analyzed procedures and system documentation. Further, we discussed business processes with managers and observed processing at key DOD organizations, including the JSLIST Program Office, DLA, DFAS-Columbus, and the Defense Contract Management Agency. We discussed and observed JSLIST production with managers at the Southeastern Kentucky Rehabilitation Industries and discussed JSLIST inventory and issue to the warfighter at selected military units. To trace the information flow and identify key data systems related to a computer bought using the government purchase card, we reviewed established procedures and discussed processes with managers of key organizations, including DOD’s Purchase Card Program Office, DFAS-Columbus, and two selected military service units. To compare certain aspects of DOD’s JSLIST inventory management and its business processes related to a computer bought using the government purchase card with private sector practices, we discussed best business practices with two leading retailers, Sears and Wal-Mart. We selected Sears and Wal-Mart based on our review of the study Achieving World-Class Supply Chain Alignment: Benefits, Barriers, and Bridges, by the Center for Advanced Purchasing Studies, Tempe, Arizona: 2001. We discussed and observed the best practices these companies use to manage their supply chains and compared those practices with DOD’s business processes to identify opportunities to improve DOD’s business processes. We briefed DOD managers, including officials from DOD’s JSLIST Program Office, DLA, and DFAS, on the details of our review, including our objectives, scope, and methodology and our findings and conclusions. DOD officials generally agreed with our findings and conclusions. 
We relied upon our past work and that of the DOD Inspector General with regard to the accuracy and reliability of the information systems DOD uses to support JSLIST processing. Further, we did not audit the financial data provided by DOD or contained in its inventory systems. Details on where we performed our audit work are included in appendix I. We conducted our audit work from July 2001 through June 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with the standards prescribed by the President’s Council on Integrity and Efficiency. We found that DOD’s processes for procuring, controlling, and paying for JSLIST rely on manual data transmissions and entry into as many as 13 nonintegrated data systems. Much of the data required to procure and field JSLIST are transmitted using e-mails, faxes, telephones, and hard-copy documents that must be read and manually entered into automated systems. This reliance on manual data handling results in slow, error-prone business processes. In addition to these inefficiencies, the use of manual, stovepiped, and nonintegrated processes and systems has limited DOD’s ability to know how many JSLIST it has and where they are located. This lack of visibility was due to several factors. First, not all military units maintained the same JSLIST data. For example, some military units tracked key data such as manufacturer, manufacture date, and production lot number, while other units maintained little or no data. Second, military units maintained inventory data in nonstandard, stovepiped systems that did not share data with other DOD systems. The methods used to control and maintain visibility over JSLIST ranged from stand-alone automated systems, to spreadsheet applications, to pen and paper. One military unit we visited did not have any inventory system for tracking JSLIST. 
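The point about nonstandard data can be illustrated with a minimal sketch of what a standardized inventory record might contain, drawing on the data elements the report notes some units tracked (manufacturer, manufacture date, and production lot number). All field names and values are hypothetical, not taken from any DOD system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a standardized JSLIST inventory record; field
# names and all values below are illustrative, not drawn from any DOD system.
@dataclass
class JslistRecord:
    national_stock_number: str
    size: str
    manufacturer: str
    manufacture_date: date   # supports checking the 5-year warranty window
    production_lot: str      # supports recalling a defective lot
    quantity_on_hand: int
    location: str            # unit or depot holding the stock

record = JslistRecord(
    national_stock_number="8415-01-444-1234",  # illustrative, not a real NSN
    size="MEDIUM-REGULAR",
    manufacturer="Example Apparel Co.",
    manufacture_date=date(1997, 6, 1),
    production_lot="LOT-0042",
    quantity_on_hand=120,
    location="Example supply depot",
)
print(record.production_lot)
```

If every unit captured the same fields in a shared system, the lot-level recall and cross-unit visibility problems the report describes would be far easier to address.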
DOD’s inability to quickly identify and locate JSLIST has contributed to some military units declaring them excess to their immediate needs, even as DOD was attempting to expedite the issuance of JSLIST to military units in response to the events of September 11, 2001. Discussions with two leading private sector companies identified innovative best practices that offer opportunities for DOD to improve its business processes. Unlike DOD, Sears and Wal-Mart have highly automated inventory management processes and use standard data and systems and electronic data transmission and entry. From the corporate level, these two entities maintain continuous visibility over inventory from their suppliers to the store shelf. During Operation Desert Shield/Desert Storm, DOD noted that its chemical and biological equipment (1) could cause unacceptable heat stress to the wearer, (2) could limit freedom of movement and impair job performance, (3) was bulky, and (4) was not fully interoperable across the services. Furthermore, most of the existing suits were no longer manufactured, and those still in service would expire by 2007, given their 14-year expected life. To address these issues, DOD developed new, lightweight individual protective equipment such as the JSLIST, which DOD began procuring in 1997. An improved, multipurpose overboot is in procurement, and new protective gloves are under development to improve manual dexterity and/or reduce heat stress on the wearer. Similarly, because the existing masks may cause some breathing difficulty, DOD is developing a new mask but does not expect to begin procurement until fiscal year 2006. JSLIST is a universal, lightweight, two-piece garment—coat and trousers—designed to provide maximum protection against chemical and biological contaminants. Figure 1 shows the entire ensemble, which in addition to the coat and trousers includes footwear, gloves, protective mask, and breathing device. 
Our study did not include these other components. Together, the components of the ensemble are designed to provide maximum protection to the warfighter against chemical and biological contaminants without negatively affecting the ability to perform mission tasks. The focus of our review was to map the flow of data associated with the procurement, inventory control, and payment processes for JSLIST. According to DOD, it pays approximately $204 for each JSLIST coat and trousers set. DOD began procuring JSLIST in fiscal year 1997 and expects to purchase about 4.4 million garments at a cost of about $1 billion over a 14-year period ending in fiscal year 2011. According to DOD, this amount includes the JSLIST procurement cost and a DLA surcharge for services, such as clarifying requirements, developing contract specifications and negotiating production levels with the contractors, developing and maintaining delivery schedules, and storing JSLIST until issued to the military services. According to the JSLIST Program Office, by the end of fiscal year 2001, DOD had procured approximately 1.6 million JSLIST, and about 1.2 million had been issued to the military services. According to the Joint Service Set-Aside Project office, the JSLIST are expected to last about 14 years. The Joint Service Set-Aside Project office is responsible for testing JSLIST after 5 years in inventory, which represents the manufacturer’s warranty period. Officials indicated that they have started to test JSLIST that were procured in 1997 and, to date, none have failed. Figure 2 shows the private and public sector organizations involved in the production of JSLIST and the relationship among the various entities. These organizations include 8 private manufacturing companies, 1 private testing and technical support firm, and 11 DOD organizations. Of the 8 private sector companies, 5 actually manufacture the JSLIST garments and the other 3 provide the component parts—the outer shell, carbon spheres, and protective liner. 
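The cost figures above can be roughly reconciled. This is a back-of-the-envelope sketch under one assumption not stated in the report: that the 4.4 million planned garments are counted as coat-and-trousers sets priced at the approximately $204 set cost, with the remainder of the roughly $1 billion total attributable to the DLA surcharge and other costs the report describes.

```python
# Rough consistency check of the JSLIST cost figures; the set-count
# interpretation of the 4.4 million garments is an assumption.
unit_cost = 204               # approximate dollars per coat-and-trousers set
planned_sets = 4_400_000      # planned buy through fiscal year 2011

garment_cost = unit_cost * planned_sets
print(f"Garment cost alone: ${garment_cost:,}")  # $897,600,000

# Gap between garment cost and the ~$1 billion program total, consistent
# with the DLA surcharge for contracting, scheduling, and storage services.
implied_surcharge = 1_000_000_000 - garment_cost
print(f"Implied surcharge and other costs: ${implied_surcharge:,}")  # $102,400,000
```

Under that reading, garment costs alone come to roughly $0.9 billion, leaving on the order of $100 million for the surcharge, which is broadly consistent with the report's "about $1 billion" total.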
All these organizations play a role in JSLIST production, ranging from requirements development to issuance of JSLIST to the warfighter. At this Subcommittee’s June 2000 hearing on individual chemical and biological protective equipment, the DOD Inspector General testified that DLA had weak inventory controls over the Battle Dress Overgarment (BDO)—the JSLIST predecessor. DLA had major problems identifying and removing from inventory defective BDO protective suits. As a result, some of the defective suits had been shipped to U.S. forces in high-threat areas. The DOD Inspector General also pointed out that DLA had “materially misstated” the number of protective suits being stored. According to DLA, misplacement of items in the wrong storage areas and incorrect counts when the material was received contributed to the inventory inaccuracy. Our analysis of the data flows for the different JSLIST processes documented 128 steps. Of these, 100 steps—78 percent—were manual, meaning that much of the data are transmitted using e-mails, faxes, telephones, and paper documents that must be read, interpreted, and entered into the 13 nonintegrated systems. The remaining 28 steps—22 percent—were automated. Appendix II provides a brief description of each system and identifies the function performed and the DOD system owner. With so many manual processes, substantial data entry is required. We also found that even data transmitted electronically are manually verified before being entered into another data system. Such practices are highly inefficient and prone to error. DOD has acknowledged that in today’s environment, current processes are slow and susceptible to errors. The following three sections highlight the data flows for the procurement, inventory control, and payment processes. They provide a simplified representation of the actual processes and data flows, and the methods used for data transmission. 
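The step tallies above are internally consistent, as a quick check confirms:

```python
# Check of the process-step tallies: 128 total steps, 100 manual,
# with the remainder automated.
total_steps = 128
manual_steps = 100
automated_steps = total_steps - manual_steps

print(automated_steps)                                   # 28
print(f"{manual_steps / total_steps:.0%} manual")        # 78% manual
print(f"{automated_steps / total_steps:.0%} automated")  # 22% automated
```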
In mapping the data flow for JSLIST, we found the procurement process to be the least automated. Figure 3 demonstrates the extensive use of manual processes between the JSLIST Program Office, the Defense Supply Center-Philadelphia, the contractors, and the Defense Distribution Center. Figure 3 does not include all of the processes that are associated with the procurement process. As shown, most of the data transmissions are manual—e-mail, fax, and regular mail. For example, JSLIST garment requirements data—which show the number and specific sizes that are to be manufactured—are e-mailed from the JSLIST Program Office to DLA’s Defense Supply Center-Philadelphia, which is responsible for negotiating the terms of the contract with the five manufacturers. The contractor—via fax—notifies the Defense Supply Center-Philadelphia that the JSLIST garments have been produced and shipped to the Defense Distribution Center for storage. The contractors also send shipping documents, including the Material Inspection and Receiving Report (DD Form 250), with the JSLIST shipment to the Defense Distribution Center. The inventory control process is slightly more automated than the procurement process. This is due to DLA’s use of the Distribution Standard System (DSS) and the Standard Automated Material Management System (SAMMS). However, as shown in figure 4, the military service units still rely on extensive manual data entry in their efforts to control the JSLIST garments that have been distributed to them. According to DLA personnel, DSS contains data on the number of JSLIST procured, the number in DLA’s warehouse facilities, and the number of JSLIST that have been distributed to the military services. The data must be manually entered into DSS from the shipping documents that are received from the contractors. Once entered into DSS, shipping receipt data are electronically passed from DSS to SAMMS at the Defense Supply Center-Philadelphia. 
DLA also pointed out, however, that once JSLIST are distributed to the military services, DSS does not maintain any inventory control. At this point, JSLIST data are removed from DSS and DLA loses visibility of JSLIST. As shown in figure 4, the military services use various methods to maintain inventory control. Of the three Army units that we visited, one used an automated system, the Standard Army Retail Supply System (SARSS); one used a spreadsheet application; and one used paper and pen. Of the two Navy units visited, one used a dry erase board with handwritten notes and one did not maintain an inventory of JSLIST. Both of the Air Force units visited used the Mobility Inventory Control Accountability System (MICAS) to control their JSLIST inventory. Since MICAS is a stand-alone system that operates independently at each location, data cannot be shared between the various locations, nor does it have the capability to provide data to higher command levels. The payment process is the most automated. DFAS—the central organization in the payment process—uses more automated processes than any other organization visited. As shown in figure 5, electronic exchange of data was used more often in the payment process than in the procurement and inventory control processes. As shown in figure 5, once the invoice is received from the contractor—via the mail—DFAS electronically obtains shipping data from SAMMS and contract data from the Mechanization of Contract Administration System (MOCAS). Invoice, contracting, and shipping data are all needed for DFAS to process the payment to the contractor by electronic funds transfer through the Standard Accounting and Budgeting Reporting System (SABRS). Once the data enter DFAS, the payment process is automated and each DFAS division involved in the payment process has the ability to use the same data. For example, payment data are transmitted to the JSLIST Program Office via SABRS. 
However, DFAS still relies on some manual processing. In DFAS’ Entitlement Division, individuals manually check to ensure that required invoice data are in the Electronic Document Management system, and then manually enter these data into the MOCAS system. This system helps support the contract administration aspects of the JSLIST program. We have previously reported on long-standing problems in contract pay through MOCAS. For example, for fiscal year 1999, DFAS data showed that almost $1 of every $3 in contract payment transactions was for adjustments to previously recorded payments—$51 billion of adjustments out of $157 billion in transactions. We have also reported that the manual entry of data into systems is prone to keypunch errors, errors caused when data entry personnel are required to interpret sometimes illegible documents, and inconsistencies among data in the systems. DOD has acknowledged that the systems used to support its business operations do not provide relevant, reliable, and timely information. As discussed in our June 4 testimony, the department has begun efforts to develop an enterprise architecture that should detail the target or “to be” environment for DOD’s business operation systems and show how these systems will interact. Managed properly, an integrated system development effort can clarify and thus help to optimize the interdependencies and interrelationships among an organization’s business operations and the underlying data systems supporting these operations. DOD and the military services lack asset visibility and control over JSLIST. There is no DOD-wide system that contains the data needed—number of JSLIST, manufacturer, manufacture date, and production lot number—to locate specific JSLIST garments that are in the possession of the military services. 
As a result, if the JSLIST garments had to be recalled for any reason, there is no assurance that DOD could readily or accurately locate the 1.2 million JSLIST that have been issued to the military services. In essence, DOD faces the same predicament today as it did in June 2000, when hearings by this Subcommittee chronicled DOD’s inability to identify the location of the BDOs—the predecessor of JSLIST. BDOs needed to be recalled and removed from the inventory because they were found to be defective, but even after a data call DOD was unable to retrieve all of the BDOs. Our September 2001 report noted that as of April 2001, DOD had not found about 250,000 of the defective BDO suits. DOD was not certain if the suits had been used, were still in supply, or had been sent to disposal. That report also pointed out that DOD could not (1) monitor the status of its protective equipment inventory because the military services and DLA used at least nine different nonintegrated data systems, (2) determine whether all of its older chemical suits would adequately protect service members because some of the inventory systems did not contain essential data needed to determine usability of inventoried chemical suits, and (3) easily identify, track, and locate defective suits because inventory records did not always include contract and lot numbers. These shortcomings are consistent with the long-term problems in DOD’s inventory management that we have identified as a high-risk area due to a variety of problems, including ineffective and wasteful management systems and procedures. To improve DOD’s control and accountability over chemical and biological equipment, we made several recommendations, one of which was to implement a fully integrated inventory management system. During our visits to DLA’s Defense Distribution Center, Albany, GA, and selected military service units, we found that these weaknesses remain today. 
DOD does not have reliable asset visibility for JSLIST throughout the department. This problem can be attributed to several factors. First, according to the DOD Inspector General in testimony before this Subcommittee in June 2000, DSS—a relatively new and modern system—is “chronically inaccurate.” The DOD Inspector General pointed out that its physical count of chemical protective suits disclosed that 420,000 suits were not on hand as recorded in the inventory balance in DSS. Even if DSS were accurate, it only provides visibility and control over JSLIST located in DLA’s warehouse facilities. DSS does not contain any data that can be used from a departmentwide perspective to identify the location of the 1.2 million JSLIST garments that have been distributed to the military services. Second, once JSLIST are issued to the military service units, the lack of standard data and nonintegrated systems hinders asset visibility. Our visits to Army, Navy, and Air Force military units disclosed that not all units maintained key data such as manufacturer, manufacture date, and production lot number. These data would be essential if JSLIST had to be recalled. Without these data, DOD would have to initiate a worldwide data call, with no assurance of the accuracy of the result. Of the three Army units visited, only one maintained these data, while neither of the two Navy units maintained these key data. Both Air Force units maintained the manufacturer, manufacture date, and production lot number. In addition, the units we visited used stovepiped, nonintegrated systems to track their JSLIST. As shown in figure 4, the method used varied from an automated system to no tracking of any kind. Of the Army units, one unit used the Standard Army Retail Supply System, another unit used a stand-alone spreadsheet application, and the third unit used paper and pen to control its JSLIST inventories. 
At the two Navy units visited, one used a marker and dry erase board and the other Navy unit did not maintain a JSLIST inventory—manual or automated. Both Air Force units used MICAS to control JSLIST. According to Air Force personnel, this is a standard system used to maintain comprehensive control of assets from receipt to disposal. Information must be entered manually into MICAS. Air Force personnel also stated that they are able to identify and locate service personnel who have JSLIST in their possession by using MICAS. The Air Force personnel noted that MICAS was designed for use at the unit level, but the Air Force plans to upgrade the system to provide more visibility over JSLIST to higher command levels. Personnel at the Army and Navy units were interested in the potential for using MICAS. We provided these personnel with a point of contact in the MICAS program office. As of May 2002, one Army unit had decided to try MICAS in a stand-alone mode to test its suitability, and one Navy unit had decided not to consider the use of MICAS because it used JSLIST only for training and therefore determined that a system was not needed. The other Army and Navy units are considering the use of MICAS. Because of DOD’s weaknesses in locating and recalling defective BDOs, we asked the Defense Threat Reduction Agency—responsible for funding the JSLIST program—whether it had the means to locate all JSLIST departmentwide if a similar situation were to occur. A program official stated that the agency could account for the JSLIST up to the point they are distributed to the military services. As noted previously, once suits are distributed, accountability becomes more difficult because each service has a separate logistics, supply, and maintenance management system for tracking items. Further, the official noted that these systems are not connected. 
The program official also stated that the requirement to track the location, manufacturer, manufacture date, and production lot number of each JSLIST would be the responsibility of DLA’s Business System Modernization (BSM) program. BSM is an 8-year (fiscal year 2000 through fiscal year 2007), four-phased program that is intended to modernize DLA’s business functions, such as materiel management, distribution, and cataloguing, by replacing obsolete, nonintegrated data systems with a web/network-based logistics system using commercial off-the-shelf products. The project is estimated to cost nearly $900 million. As discussed in our June 2001 report, BSM is intended to transform DLA’s current materiel management business function from being a mere provider and manager of physical inventory to becoming primarily a manager of supply chains—linking customers with appropriate suppliers and tracking physical and financial assets. However, we believe reliance on BSM to provide adequate visibility over JSLIST is ill-advised for several reasons. First, as pointed out in our June 2001 report, BSM was being implemented without the benefit of a DLA architecture or a DOD-wide logistics management architecture. Further, we noted that DLA did not have the management controls in place to develop, implement, and maintain an architecture. As discussed in our June 4 testimony, without an architecture to guide and constrain information technology investments, DOD runs the serious risk that its system efforts will perpetuate the existing system environment, which suffers from system duplication, limited interoperability, and unnecessarily costly operation and maintenance. Second, even if DLA successfully implements the inventory control phase of BSM by March 2005, the majority of JSLIST may have already been procured and issued to the military services without asset visibility, including a record of critical tracking data, such as manufacturer, manufacture date, and production lot number. 
As of the end of fiscal year 2001, about 1.6 million JSLIST had been purchased and about 1.2 million garments had already been issued to the military services. At the expected procurement rate of 330,000 to 350,000 JSLIST annually, DOD will have purchased about 3 million of the 4.4 million JSLIST by fiscal year 2005. DOD’s lack of asset visibility over the JSLIST has resulted in poor inventory control. While DOD expedited the issue of the JSLIST garments to the military services in response to the events of September 11, 2001, Army, Navy, and Air Force units have sent JSLIST to the Defense Reutilization and Marketing Office (DRMO) as being excess to their immediate needs. From January 2001 through June 2002, 1,934 JSLIST coats and trousers valued at about $207,000 were turned in to DRMO. Of the 1,934 coats and trousers declared excess, 1,813 were turned in after September 11, 2001. Table 1 shows the disposition of the 1,934 coats and trousers. As shown in table 1, 275 of the coats and trousers were reissued to other government entities. One of the purposes of DRMO is to reallocate inventory that is excess to one organization’s needs to an organization that has insufficient inventory to meet its needs. We do not have any information on why 917 coats and trousers were scrapped; 313 are considered pending, which means they are eligible for reutilization. According to DLA, the 429 coats and trousers that were sent to a DOD contractor, Government Liquidation, were reportedly sold at internet auction for approximately $1,100—or less than $3 each. As of June 18, 2002, none of the JSLIST reportedly sold by Government Liquidation had been released and all remained at the company’s warehouse in Kapolei, Hawaii. We met with personnel at Hickam Air Force Base, Hawaii, and the Navy Explosive Ordnance Disposal Unit, Barbers Point, Hawaii, to determine why the JSLIST were excessed and sent to DRMO. 
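The procurement projection and the table 1 disposition figures above can be verified arithmetically. This is a simple cross-check of the numbers already in the text, not new data:

```python
# Projection of JSLIST procured by fiscal year 2005 at the stated annual rate.
procured_fy2001 = 1_600_000
annual_rate = (330_000, 350_000)    # expected annual procurement range
years = 4                           # fiscal years 2002 through 2005
low = procured_fy2001 + annual_rate[0] * years    # 2,920,000
high = procured_fy2001 + annual_rate[1] * years   # 3,000,000 -> "about 3 million"

# Disposition of the 1,934 coats and trousers turned in to DRMO (table 1).
disposition = {"reissued": 275, "scrapped": 917, "pending": 313, "sold": 429}
assert sum(disposition.values()) == 1_934

# The 429 reportedly sold at internet auction for about $1,100 works out
# to under $3 per garment, as the text states.
price_each = 1_100 / disposition["sold"]
print(f"{low:,} to {high:,} procured by FY 2005; about ${price_each:.2f} each at auction")
```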
Officials from the Air Force unit stated that the JSLIST were sent to DRMO because (1) they did not belong to their unit and had been in their warehouse for at least 3 years, (2) the boxes containing the JSLIST were marked “training only,” and (3) although still in vacuum-sealed packages, they thought the JSLIST had exceeded their expiration date. They also indicated that prior to turning the JSLIST in to DRMO, they checked with the Base Supply Office and were informed that no one else on the base needed JSLIST. The Navy unit stated that its JSLIST were sent to DRMO because it had more than the 32 required to meet its immediate needs. Prior to turning the JSLIST in to DRMO, the Navy unit did not consult with the Supply Office to determine if they could be used elsewhere. Unit personnel indicated that they thought this was a DRMO responsibility. Believing that the garments were in excellent condition, they coded them “E” upon turning them in to DRMO. However, an item code of “E” signifies that the goods are damaged. Our physical inspection of the JSLIST garments in the Government Liquidation warehouse found that all but 30 were marked “training only.” These 30 were turned in by the Navy unit and appeared to be in good condition. The “training only” JSLIST should not be used in a combat environment because they are considered to be defective for that purpose. However, since they were still in vacuum-sealed packages, they appeared suitable for training purposes. When JSLIST are issued to warfighters, they generally receive a number of sets—coat and trousers—based upon their assignment. For example, at one of the Air Force units we visited, each member is to have five JSLIST sets—four for operations and one for training. Without a “training only” JSLIST, one that would have otherwise been available for operations must be used for training. On June 19, 2002, we told the JSLIST Program Manager about this situation. 
He stated that he was not aware that JSLIST garments were being excessed and sold and acknowledged that DOD does not have visibility over the JSLIST garments. He also stated that military service units were “clamoring” for JSLIST garments for training purposes. Further, he stated that none of these garments should have been turned in to DRMO. We suggested that he take action to terminate the sale of these garments. He indicated that he would initiate immediate action to do so. Private sector companies, driven by today’s globally competitive business environment, have developed innovative best business practices to cut costs and meet customer needs by streamlining their logistics operations. Best business practices refer to the processes, practices, and systems identified in public and private organizations that perform exceptionally well and are widely recognized as improving an organization’s performance and efficiency in specific areas. Some of the most successful improvement efforts include a combination of practices that are focused on improving the entire logistics pipeline—an approach known as supply chain management. DOD has acknowledged that best business practices of private industry offer opportunities for making significant improvements to its business operations. As evidenced by the information presented today, implementation of fundamental private sector supply chain management practices by DOD would substantially improve its efficiency and effectiveness. Our discussions with two leading-edge retail companies—Sears and Wal-Mart—identified business practices that are vastly different from those employed by DOD. Unlike DOD, which has a proliferation of nonintegrated systems, nonstandard data, extensive use of manual processes, and limited visibility over inventory, Sears and Wal-Mart are at the other end of the spectrum. Sears and Wal-Mart are highly automated, use standard data, and make extensive use of electronic data interchange (EDI). 
Further, each entity is able to maintain visibility of its inventory throughout the various levels of its organization. Sears, a leading retailer of apparel, home and automotive products, and services, reported annual revenue of over $41 billion and net income of approximately $735 million for its fiscal year 2001. Sears operates 867 mall-based retail stores, most with co-located Sears Auto Centers, and an additional 1,318 specialty stores, including hardware, outlet, and tire and battery stores, as well as independently owned stores, primarily in smaller and rural markets. Wal-Mart Stores, Inc., is the world’s largest retailer, with reported annual net revenue of over $193 billion and net income of almost $6.3 billion for its fiscal year 2001. The company operates 4,189 retail stores in all 50 states and 9 foreign countries. Of these stores, 2,348 are regular stores, 1,294 are supercenters, 528 are Sam’s Clubs, and 19 are neighborhood markets. As previously discussed, the processes DOD uses to procure, control, and pay for the JSLIST garments are characterized by numerous manual interventions with support from as many as 13 nonintegrated automated information systems. With 78 percent of the data used to support the JSLIST program involving some form of manual entry, DOD’s logistics processes are slow and susceptible to error. As a result, DOD’s business processes do not provide relevant, reliable, and timely financial and logistical information. In contrast, Sears and Wal-Mart have systems that provide relevant, reliable, and timely information. As noted in our June 4 testimony before this Subcommittee, systems have proliferated within DOD. At the time of the hearing, DOD acknowledged that it used at least 1,127 systems in the processing of financial information. For the most part, these systems are not integrated with each other. 
In the past, DOD’s system development efforts have been stovepiped within the department’s organizational entities, with system development money spread across DOD and no central control. In addition, standard data were not always used across organizational boundaries. These limitations preclude DOD and the Congress from receiving the relevant information that is needed in the decision-making process. This is clearly demonstrated by the use of 13 nonintegrated systems associated with JSLIST. In our discussions, Wal-Mart officials noted that the company does not permit its subsidiaries or components to develop their own system solutions. System funding and development is viewed from a corporate perspective. Therefore, the stovepiped efforts that exist in DOD would not occur within Wal-Mart. Wal-Mart also noted that when an acquisition is made, the new entity is required to convert to the Wal-Mart system—this brings about the standardization of data. Standardization of data and integration of systems are important because they aid in financial accounting and inventory management, including asset visibility. In dealing with suppliers, both Sears and Wal-Mart make extensive use of EDI—which means that data are received from and transmitted to suppliers electronically. In essence, using EDI virtually eliminates the need for human intervention and thereby helps to reduce the risks of errors being made. Sears and Wal-Mart representatives stated that the more manual intervention in the process, the less likely the information will be relevant, reliable, and timely. Sears’ personnel pointed out that over 99 percent of vendors’ purchase orders are processed using EDI. According to Sears’ representatives, if a supplier does not have EDI capability, it is required to contract with a third party to submit the data to Sears electronically. Similar to Sears, Wal-Mart also makes extensive use of EDI. 
According to Wal-Mart representatives, about 85 percent of their suppliers use EDI. As previously discussed, DOD cannot readily determine the location of the 1.2 million JSLIST that have been issued to the military services because of nonstandard systems and the lack of standard data across DOD—manufacturer, manufacture date, and production lot number—that would be needed to quickly locate and remove JSLIST from inventory, if recalled. These data should also be maintained to locate the JSLIST and, if necessary, move them where needed in the event of a chemical or biological attack. Unlike DOD, Sears and Wal-Mart have integrated systems with standard data across their organizations and, as a result, have visibility over inventory regardless of location. For example, at our request, Wal-Mart headquarters staff in Bentonville, Arkansas, immediately identified for us the number of 6.4-ounce tubes of a brand-name toothpaste on the shelf at one of their retail stores in Fairfax, Virginia. In addition to identifying 25 tubes of this toothpaste at Fairfax, Virginia, at approximately 1:15 p.m. on June 12, 2002, Wal-Mart’s system showed daily and weekly product sales and the date of the last shipment and the quantity received. Figure 6 compares Wal-Mart’s and DOD’s visibility over their respective inventories. According to Wal-Mart representatives, the level of visibility they have over inventory items, as shown in figure 6, is critical to quickly removing any recalled items from the shelf. Wal-Mart views the efficient and effective removal of recalled items as essential to maintaining credibility with its customers. Wal-Mart also demonstrated control and visibility over its inventory at the Bentonville, Arkansas, Distribution Center. The information in the system showed the specific location and number of a certain brand of 27-inch televisions in the warehouse. We selected 4 of the 202 televisions listed and verified that all 4 were at the specific location indicated in the system. 
In addition to using technology to streamline their inventory processes, Sears and Wal-Mart personnel identified several other keys to their success. For example, they stated that there needs to be an understanding throughout the organization of what it is trying to achieve. Clearly, all must understand the goals and objectives, and it is imperative that all parties work in a cooperative manner. At DOD, as discussed in our June 4 testimony, this has not always been the case. Cultural resistance to change and military service parochialism have played a significant role in impeding past attempts to implement broad-based management reforms at DOD. If the barriers to change are not removed, DOD will continue to be faced with a business-as-usual mentality, and its current endeavors to bring about substantive change to the department’s flawed business operations will be unsuccessful. If this occurs, as it has in the past, billions of dollars will have been spent without any marked improvement in departmental operations. Wal-Mart officials also noted that another key element in their success has been the use of individual performance metrics and incentives throughout the organization. Whether it is the manager of a given store or someone working in the warehouse, performance metrics have been established and each person is evaluated against those metrics on a routine basis. If a person’s performance exceeds the metrics, he or she is rewarded. For example, hourly workers can receive wage increases for exceeding corporate productivity and inventory accuracy goals. Store managers have metrics such as store profitability and inventory shrinkage and receive bonuses for achieving the metrics. For DOD, we previously identified the lack of incentives as one of the major underlying causes for the failure of past reform efforts within the department. 
Using computers acquired by government purchase cards as a case study, we found that inefficient billing procedures at DFAS-Columbus have increased the costs being incurred by some DOD customers for the payment of monthly purchase card statements. For certain transactions processed through DFAS-Columbus, monthly credit card statements are mailed or faxed and each purchase is manually re-entered because (1) the Navy has chosen not to electronically submit its purchase card statements, (2) the payment system is not capable of accepting electronic purchase card statements from CitiBank, the purchase card contractor, and (3) defense agencies have not implemented electronic purchase card processing. DFAS-Columbus charges customers over $17 per line if the data are manually entered and about $7 per line if the data are transmitted electronically. According to the DFAS-Columbus Commercial Pay Services Business Manager, across all DFAS Centers purchase card statements are processed electronically for about 90 percent of the Air Force’s statements, about 80 percent of the Army’s statements, and about 50 percent of the Navy’s statements. The purchase card is a governmentwide commercial credit card issued under a government contract to federal agency employees to more efficiently purchase needed goods and services directly from vendors. The purchase card can be used for both micropurchases and payment of other purchases. The Federal Acquisition Regulation, Part 13, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. In addition, the Department of the Treasury, DOD, and the military services have issued regulations, policy, and guidelines governing the use of the purchase card. Prior to DOD’s implementation of the purchase card program in 1994, buying goods and services was a labor- and paper-intensive process—requisitions were prepared and sent to procurement offices. 
The procurement offices issued purchase orders, goods and services were delivered, receiving reports were prepared, and payments were then made. The purchase card program was designed to simplify the purchase process by eliminating the need to process purchase requests through procurement offices and avoiding the administrative and documentation requirements of the traditional contracting processes. In mapping the flow of data for the use of the purchase card to procure, control, and pay for a computer item, we identified 19 systems. Appendix III provides a brief description of each system identified, the function performed by the system, and the system owner. When scanning the purchase card to obtain authorization through the bank network, merchants are to verify the validity of the transactions using a point-of-sale scanning device. This device can perform up to 50 authorization checks, such as verifying the expiration date and account number, ensuring the card has not been reported lost or stolen, and determining that the purchase amount is within the prescribed dollar limits. In fiscal year 2001, DOD reported that it used the purchase card in procuring goods and services valued at over $6.1 billion. Although we support a well-controlled purchase card program to streamline the government’s acquisition process, significant breakdowns in internal controls have contributed to fraudulent, improper, and abusive purchases and theft and misuse of government property. Our March 13, 2002, testimony highlighted the vulnerability of two Navy units to fraudulent, improper, and abusive use of government purchase cards. Currently, we have additional efforts ongoing to review internal controls over purchase card processes used by selected Army, Air Force, and Navy units. At DFAS-Columbus, we observed that much of the purchase card payment process is manual. 
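The kinds of authorization checks described above can be illustrated with a simplified sketch. The data model, field names, and dollar limit below are hypothetical illustrations of three of the up to 50 checks, not the actual bank-network interface or DOD card controls:

```python
# Illustrative sketch of a few point-of-sale authorization checks of the kind
# described above. The card record and limit are hypothetical examples.
from datetime import date

def authorize(card, purchase_amount, today=None):
    """Return True only if the sample checks all pass (illustration only)."""
    today = today or date.today()
    if card["expiration"] < today:              # expiration-date check
        return False
    if card["reported_lost_or_stolen"]:         # lost/stolen check
        return False
    if purchase_amount > card["dollar_limit"]:  # prescribed dollar limit
        return False
    return True

card = {
    "expiration": date(2003, 12, 31),
    "reported_lost_or_stolen": False,
    "dollar_limit": 2_500,   # illustrative per-purchase limit
}
print(authorize(card, 1_200, today=date(2002, 6, 12)))  # True
print(authorize(card, 5_000, today=date(2002, 6, 12)))  # False (over limit)
```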
Certified monthly purchase card statements are manually received from Navy working capital fund activities and defense agencies. Upon receipt of the monthly statements, DFAS-Columbus accounting technicians manually enter line-by-line transaction data into the Computerized Accounts Payable System (CAPS) for payment. The data entered include information such as document number, year, activity and funding code, cost code, and dollar amount for each individual transaction. The manual entry of the data is the result of CAPS not being capable of accepting purchase card statements electronically from CitiBank—the government contractor providing purchase card services to the Navy. Further, DFAS-Columbus personnel informed us that even if CAPS had the capability, Navy working capital fund purchase card transactions would have to be entered manually because the Navy has decided not to electronically submit purchase card statements. According to DFAS-Columbus officials, DFAS charges $17.13 for each line on the monthly statement that must be manually entered into the payment system. However, the processing fee is reduced to $6.96 per document line if the monthly statement is electronically processed. Since DFAS is a working capital fund activity, the fee charged should represent the actual cost being incurred in providing the service. We did not audit these fees to determine if they represented actual costs. As noted previously, in our discussions with Sears and Wal-Mart, we were informed that the use of EDI is critical. For example, at Sears, over 99 percent of the purchase orders are transmitted via EDI, which greatly reduces the amount of manual entry that is needed and also reduces the risk that errors will be made in the re-entry of data. The following examples show the cost of manual entry of purchase card transactions. 
On February 13, 2002, DFAS-Columbus received a certified purchase card monthly statement detailing 271 purchases totaling nearly $24 million from the Defense Commissary Agency, Fort Lee, Virginia. At $17.13 per document line, the DFAS fee for manually processing this invoice was over $4,600. If the Defense Commissary Agency could have submitted the invoice electronically, the DFAS fee would have been about $1,890, or less than half the charge for manual processing. On January 29, 2002, DFAS-Columbus received a certified purchase card monthly statement detailing 228 lines for purchases costing nearly $957,000 from the Navy Fleet Material Support Office in Mechanicsburg, Pennsylvania. Since the 228 lines had to be manually entered into CAPS, the Navy incurred a processing fee of $3,900. However, if the monthly statement had been electronically processed, the Navy would have paid DFAS approximately $1,590. As shown in table 2, we found instances in which the amount of the purchase was less than the amount charged for processing the one line from the monthly statement. DFAS-Columbus officials informed us that purchase card statements from Navy working capital fund activities that are paid by DFAS-Columbus are manually processed for three reasons. First, the Navy has chosen not to electronically send purchase card statements paid from Navy working capital fund activities. Second, DFAS-Columbus has not yet made the necessary enhancements to the payment system to receive electronic invoices from the Citidirect system—the system used by the contractor providing the Navy purchase cards. Third, defense agencies have not implemented electronic purchase card processing. According to DFAS-Columbus personnel, monthly statements they receive from defense agencies, including the Defense Contract Management Agency, Defense Commissary Agency, and the Defense Information Systems Agency, are to be received electronically beginning this summer.
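The fee arithmetic in the examples above can be checked with a short sketch (the $17.13 and $6.96 per-line fees are the DFAS-Columbus figures cited above; the function name is ours):

```python
MANUAL_FEE = 17.13      # DFAS charge per manually keyed statement line
ELECTRONIC_FEE = 6.96   # DFAS charge per electronically processed line

def processing_fees(line_count):
    """Return (manual, electronic) DFAS processing fees, in dollars,
    for a monthly statement with the given number of line items."""
    return (round(line_count * MANUAL_FEE, 2),
            round(line_count * ELECTRONIC_FEE, 2))

# Defense Commissary Agency statement: 271 lines
print(processing_fees(271))   # (4642.23, 1886.16) -- "over $4,600" vs. "about $1,890"

# Navy Fleet Material Support Office statement: 228 lines
print(processing_fees(228))   # (3905.64, 1586.88) -- "$3,900" vs. "approximately $1,590"
```

As table 2 illustrates, for very small purchases the $17.13 manual fee can exceed the amount of the purchase itself.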
Further, our November 2001 report discussed concerns we had with the failure to record accountable items in the property records. Accountable property includes easily pilferable or sensitive items such as computers and related equipment, digital cameras, and cell phones. Our report pointed out that at two Navy activities we identified instances where computer monitors and laptop computers were not recorded in their property records and could not be found. Recording these items in the property records is an important step to ensure accountability and financial control over these assets. In addition, our report also expressed concern about the use of the government purchase card to procure computers that could be more economically and efficiently procured through bulk purchases. We made recommendations to the Commander of the Naval Supply Systems Command aimed at correcting both of these problems. The JSLIST and purchase card case studies clearly demonstrate that DOD’s current business operations are inefficient and ineffective. Specifically, these case studies are real-time examples of the high cost of nonintegrated systems that require substantial manual intervention in nearly every step of the process. In addition, mission performance is also affected by these processes as shown by DOD’s lack of visibility over the JSLIST. These case studies are small examples of the broader financial and inventory management and systems modernization challenges facing DOD that were highlighted in our June 2, 2002, testimony before this Subcommittee. The integrated, automated processes used by Wal-Mart and Sears offer a glimpse of the cost savings and improved mission performance that DOD could achieve with successful reform. Unlike at DOD, market forces and a strong system of accountability drive Sears and Wal-Mart to operate as efficiently and effectively as possible.
As we have previously stated, for DOD to succeed in its reform efforts, strong leadership from the Secretary will be necessary to develop a system of accountability and incentives and to cut through the deeply embedded cultural resistance to change and service parochialism. In addition, continued congressional oversight such as the hearing today will be critical to successfully reforming DOD’s business operations. The Secretary has recognized the importance of reform and estimated that DOD could save 5 percent of its budget—or about $15 billion to $18 billion annually—by successfully transforming DOD’s business processes. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov, or David R. Warren at (202) 512-8412 or warrend@gao.gov. Other key contributors to this testimony include Lon Chin, Francine Delvecchio, Ted Hu, Richard Newbold, Sanford Reigle, John Ryan, Darby Smith, and Earl Woodard. In mapping the information flow for the procurement, inventory control, and payment of the Joint Service Lightweight Integrated Suit Technology (JSLIST), we visited the following locations. JSLIST Program Office, Quantico, Virginia. Defense Supply Center, Philadelphia, Pennsylvania. Defense Finance and Accounting Service, Columbus, Ohio. Defense Contract Management Agency, Cincinnati, Ohio. Defense Distribution Center, Albany, Georgia. Joint Set-Aside Project, Marine Corps Logistics Base, Albany, Georgia. Air Force’s 919th Special Operations Wing, Eglin Air Force Base, Florida. Air Force’s 16th Special Operations Wing/Logistics, Hurlburt Field, Florida. Air Force’s Chemical Training Unit, Hickam Air Force Base, Hawaii. Army’s 101st Airborne Division, Fort Campbell, Kentucky. Army’s 5th Special Operations Forces, Fort Campbell, Kentucky.
Army’s 160th Special Operations Aviation Regiment, Fort Campbell, Kentucky. Navy’s Disaster Preparedness, Naval Air Station, Pensacola, Florida. Navy’s Explosive Ordnance Disposal Unit, Eglin Air Force Base, Florida. Navy’s Explosive Ordnance Disposal Unit, Barbers Point, Hawaii. In mapping the information flow for the procurement, inventory control, and payment of a computer item using a government purchase card, we visited the following locations. DOD Purchase Card Program Office, Falls Church, Virginia. Defense Finance and Accounting Service, Columbus, Ohio. Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio. Army Soldier Biological and Chemical Command, Natick, Massachusetts. In order to compare DOD business processes with those of leaders in the retail industry, we visited: Sears, Roebuck, and Company, Hoffman Estates, Illinois. Wal-Mart Incorporated, Bentonville, Arkansas.

This section includes general information describing each of the 13 information systems used to support the procurement, inventory control, and payment processes for the JSLIST protective chemical/biological equipment purchased through contracts.

Distribution Standard System (DSS), inventory control. Supports management of all business processes of the department’s warehouse operations, including the processing of material requisition orders, reporting shipping information to customers, and providing visibility of asset quantity, condition, and location.

Standard Automated Materiel Management System (SAMMS). Supports wholesale consumable item inventory management processes at defense supply centers, including processing requisitions, forecasting requirements, generating purchase requests, and maintaining stock levels, technical data, item identification, and asset visibility.
Electronic Document Access (EDA). Stores documents such as contracts, contract modifications, government bills of lading, and payment vouchers as electronic images and provides personnel from multiple DOD communities access to these documents.

Converts and stores paper documents such as contracts, invoices, and receiving reports as electronic images, providing document imaging, electronic folders, and workflow processing to DFAS personnel at a single location.

Program Budget Accounting System – Funds Distribution (PBAS-FD). Records and controls obligation and expenditure authority for all organizational levels except the allotment holder, allowing DOD financial managers to electronically receive and issue funds for the Office of the Secretary of Defense, Army, and Navy.

Standardizes all Marine Corps transactions and provides a transaction-driven general ledger in compliance with the U.S. Standard General Ledger Chart of Accounts.

Supports the administration and payment of supply and service contracts by contract administration offices, payment offices, procurement offices, funding stations, and consignees.

An inventory control system that provides comprehensive asset control and shelf life management, including receiving, accounting, controlling, tracking, issuing, deploying, and reporting of chemical and biological equipment.

Standard Army Retail Supply System (SARSS-O), inventory control. Supports retail supply operations and maintains the accountable record of material received, stored, and issued.

Standard Property Book System – Redesign (SPBS-R), inventory control. Automates overall property accountability and asset visibility functions, including the creation of master hand receipts and the passing of asset data on item shortages and overages to other Army systems.
Shipboard Non-Tactical Automated Data Processing System (SNAP), inventory control. Provides numerous applications for shipboard use, including processing of material requirements, requisitions, and receipts; tracking inventory stock location, balances, demand, and usage; providing individual custody records; and reconciling requirements, requisition, inventory, and financial data.

Standard Automated Logistics Tool Set (SALTS), inventory control. Provides a means to move logistics and administrative data from a single point of entry to databases and data services worldwide, including DLA’s SAMMS, Army’s Total Asset Visibility system, and Air Force’s Air Force Logistics Information File.

An inventory control application that maintains total asset visibility over chemical and biological protection equipment held for future testing and tracks testing results using a spreadsheet application.

This section includes general information describing each of the 19 information systems used to support the procurement, inventory control, and payment processes for computer equipment purchased using the government purchase card.

Provides DFAS and Air Force financial service offices with on-line access to current status information of procurement programs, allotments, initiations, commitments, obligations, and disbursements for central procurement appropriations.

Provides a standard installation center-level vendor pay system using a personal computer-based application with interfaces to DOD standard procurement, disbursing, and accounting systems.

Defense Business Management System (DBMS). Supports the major accounting functions of general ledger accounting, budgetary accounting and funds control, job order and cost accounting, accounts receivable and payable, and accounting and managerial reporting for DFAS, DLA depots and supply centers, the Defense Contract Audit Agency, and the Defense Commissary Agency.
Defense Industrial Financial Management System (DIFMS). Provides about 17 Navy, Marine Corps, and Air Force field-level and headquarters-level activities with transaction-driven funds control, accounting for budget execution, and management information, including cash, labor, other cost, material, cost summary, job order and customer order, billing, general ledger accounts, fixed asset accounting, and cost competition data.

Converts and stores paper documents such as contracts, invoices, and receiving reports as electronic images, providing document imaging, electronic folders, and workflow processing to DFAS personnel at a single location.

Provides rapid and timely vendor payments to Air Force vendors by processing commitment transactions electronically to the GAFS; compares invoice, receiving report, and contract data to create payment vouchers; and concurrently passes electronic funds transfer data to both disbursing and accounting systems.

Program Budget Accounting System – Funds Distribution (PBAS-FD). Records and controls obligation and expenditure authority for all organizational levels except the allotment holder, allowing DOD financial managers to electronically receive and issue funds for the Office of the Secretary of Defense, Army, and Navy.

Standardizes all Marine Corps transactions and provides a transaction-driven general ledger in compliance with the U.S. Standard General Ledger Chart of Accounts.

Incorporates military pay, travel, accounts payable, accounting, and disbursing functions into an on-line, interactive, menu-driven system for DFAS to produce cash payments, vouchers, and reports.
This testimony reviews two case studies that clearly demonstrate the need for the Department of Defense (DOD) to reform its business operations. These two case studies are microcosms of the broad management challenges facing DOD that were highlighted in GAO's June 2002 testimony (See GAO-02-784T). GAO provided views on the underlying or root causes of DOD's long-standing inability to successfully reform its business operations, including a lack of sustained top-level leadership, cultural resistance to change, and military service parochialism. In addition, GAO found seven key elements necessary for successful reform, including approaching DOD's broad array of management challenges using an integrated, enterprisewide approach.
Treasury’s Office of Homeownership Preservation within the Office of Financial Stability, which administers Treasury’s TARP-related efforts, is tasked with finding ways to help prevent avoidable foreclosures and preserve homeownership. The $27.8 billion in TARP funds that Treasury has obligated for MHA is to be used to encourage the modification of eligible mortgages and to provide other relief to distressed borrowers. Only loans that were originated on or before January 1, 2009, and that meet other requirements are eligible for assistance under the MHA program. In December 2015, Congress mandated that the MHA program be terminated on December 31, 2016, with an exemption for HAMP loan modification applications made before that date. Congress also provided Treasury with the authority to extend its authority under EESA with respect to the Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund) to December 31, 2017, for current program participants and to obligate up to $2 billion of TARP funds to that program. Treasury uses contracts with its servicers to establish the amount of funds that each servicer may receive under MHA for incentives or other payments. Treasury initiated HAMP and the other TARP housing programs using its authority under EESA and authorized Fannie Mae to act as a financial agent. At Treasury’s request, Fannie Mae signed contracts with banks and other mortgage servicers. As prescribed by EESA, the contracts took the form of agreements to purchase financial instruments from the servicers. For the MHA program, in these contracts, known as servicer participation agreements, Treasury, through Fannie Mae, committed to pay servicers for completing modifications of mortgage loans according to the terms of HAMP and other MHA programs. Each of these contracts establishes a maximum amount that Treasury, through Fannie Mae, is obligated to pay the servicer.
Each of the contracts established a maximum amount that Treasury would have to pay, and Treasury recorded these amounts for each contract as obligations, for a total of approximately $27.8 billion. All of the contracts were signed and the corresponding funds obligated in fiscal years 2009 and 2010. Treasury has not obligated any new funds for MHA since the end of 2010 but has made many adjustments to the amounts originally set out in the contracts, pursuant to provisions set forth in the contracts. The contracts do not require upfront payments of the full maximum amounts; Treasury expends funds as servicers enroll borrowers in modifications and complete other activities. As of October 2015, $12.6 billion had been expended for all the MHA programs, leaving $17.2 billion in obligated but unexpended funds. Of this $17.2 billion, according to Treasury’s estimate, a maximum of $9.5 billion could be expended through future payments to servicers for HAMP loan modifications completed before October 2015 and for other activities that servicers have already initiated. The remaining $7.7 billion in obligations represents the amounts potentially available to servicers for future HAMP modifications and other MHA transactions, as established in the original contracts. Due to restrictions imposed by the Dodd-Frank Act, Treasury may obligate TARP funds only for programs that were initiated prior to June 25, 2010. MHA consists of several programs designed to help struggling homeowners and prevent avoidable foreclosures. HAMP first-lien modifications. The largest component of MHA is the first-lien modification program. The program was intended to help eligible borrowers stay in their homes and avoid potential foreclosures by reducing the amount of their monthly payments to affordable levels. Modifications are available for single-family properties (one to four units) with mortgages no greater than $729,750 for a one-unit property. 
Borrowers are eligible only if companies servicing their mortgages have signed program participation agreements. Participating loan servicers use a standardized net present value (NPV) model to compare a modified loan’s expected cash flows to the cash flows that would be expected from the same loan with no modifications, using certain assumptions. If the expected cash flow with a modification is positive (i.e., more than the estimated cash flow of the unmodified loan), the participating loan servicer is required to offer the loan modification. HAMP provides both one-time and ongoing incentives to mortgage investors, loan servicers, and borrowers for up to 6 years after a loan is modified. These incentives take into consideration the servicers’ and investors’ costs for making the modifications and are designed to increase the likelihood that the program will produce successful modifications over the long term. They include principal balance reductions for borrowers who make payments on time and incentives for servicers tied to the amount by which a modification reduces the borrower’s monthly payment. The HAMP first-lien modification program has three components—the original HAMP (Tier 1), an additional first-lien modification known as HAMP Tier 2, and a modification with reduced borrower documentation requirements known as Streamline HAMP. Announced in March 2009, HAMP Tier 1 is generally available to qualified borrowers who occupy their properties as their primary residence and whose first-lien mortgage payments are more than 31 percent of their monthly gross income, as calculated using the front-end debt-to-income (DTI) ratio. HAMP Tier 2, which was announced in January 2012, is available for both owner-occupied and rental properties, and borrowers’ monthly mortgage payments prior to modification may be less than 31 percent DTI.
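The standardized NPV test described above can be illustrated with a simplified sketch. The discount rate, cash flows, and function names below are our own illustrative assumptions, not the actual Treasury base NPV model:

```python
def npv(monthly_cash_flows, annual_rate):
    """Present value of a stream of monthly cash flows."""
    r = annual_rate / 12
    return sum(cf / (1 + r) ** t
               for t, cf in enumerate(monthly_cash_flows, start=1))

def must_offer_modification(unmodified_flows, modified_flows, annual_rate=0.05):
    """HAMP-style test: if the expected cash flow with a modification has a
    higher present value than without it, the servicer must offer the
    modification."""
    return npv(modified_flows, annual_rate) > npv(unmodified_flows, annual_rate)

# Illustrative only: a loan expected to stop paying after month 6 if left
# unmodified, versus a modified loan paying a lower amount all 12 months.
unmodified = [1500] * 6 + [0] * 6
modified = [1000] * 12
print(must_offer_modification(unmodified, modified))  # True
```

The point of the test is that a modification can be worth more to the investor than an unmodified loan that is likely to redefault, even though each modified payment is smaller.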
Streamline HAMP, which was announced in July 2015 and requires servicers to have an implementation policy in place as of January 2016, offers modification terms and eligibility criteria similar to those of HAMP Tier 2 but does not require documentation (or verification) of borrower income. As part of the HAMP Tier 1 modification, servicers reduce a borrower’s interest rate until the DTI is 31 percent or the interest rate reaches 2 percent. The new interest rate is fixed for the first 5 years of the modification. It then gradually increases by increments of no more than 1 percent per year until it reaches the cap, which is the Freddie Mac Primary Mortgage Market Survey rate at the time of the modification agreement. The interest rate is then fixed at that rate for the remaining loan term. In contrast, under HAMP Tier 2 and Streamline HAMP, the interest rate is adjusted to a rate that remains fixed for the life of the loan. The fixed rate is set using the weekly Freddie Mac Primary Mortgage Market Survey Rate at the time of the modification agreement. For HAMP Tier 1, HAMP Tier 2, and Streamline HAMP, borrowers must demonstrate their ability to pay the modified amount by successfully completing a trial period of 3 months or more before a loan is permanently modified and any government payments are made. When a borrower defaults—misses three consecutive monthly mortgage payments—after the loan has been permanently modified, Treasury stops paying financial incentives to the borrower, servicer, and investor for that modification. Borrowers who have received a HAMP Tier 1 modification may be eligible for a HAMP Tier 2 or Streamline HAMP modification under certain conditions. These include having undergone a change in circumstances, having entered into a permanent HAMP Tier 1 loan modification at least 12 months earlier, or having defaulted on the HAMP Tier 1 modification (referred to as redefault). 
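A minimal sketch of the Tier 1 rate mechanics described above, assuming the 2 percent floor has been reached and using an illustrative 5.5 percent survey-rate cap (the function name and horizon are ours):

```python
def tier1_rate_schedule(modified_rate, cap_rate, years):
    """Yearly interest rate under a HAMP Tier 1 modification: the reduced
    rate is fixed for the first 5 years, then rises by no more than
    1 percentage point per year until it reaches the cap (the Freddie Mac
    survey rate at modification), where it stays for the remaining term."""
    rates = []
    rate = modified_rate
    for year in range(1, years + 1):
        if year > 5:
            rate = min(rate + 1.0, cap_rate)
        rates.append(rate)
    return rates

print(tier1_rate_schedule(2.0, 5.5, years=10))
# [2.0, 2.0, 2.0, 2.0, 2.0, 3.0, 4.0, 5.0, 5.5, 5.5]
```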
In all cases, borrowers must otherwise meet HAMP eligibility criteria, such as having a financial hardship. The Second Lien Modification Program (2MP). 2MP is designed to work in tandem with HAMP modifications to provide a comprehensive solution to help borrowers afford their mortgage payments. Under 2MP, when a borrower’s first lien is modified under HAMP and the servicer of the second lien is a 2MP participant, that servicer must offer a modification and/or full or partial extinguishment of the second lien. Treasury provides incentive payments to second lien mortgage holders in the form of a percentage of each dollar in principal reduction on the second lien. Treasury has doubled the incentive payments offered to second lien mortgage holders for 2MP permanent modifications that include principal reduction and have an effective date on or after June 1, 2012. Principal Reduction Alternative (PRA). In October 2010, PRA took effect as a component of HAMP to give servicers more flexibility in offering relief to borrowers whose homes were worth significantly less than their mortgage balance. Under PRA, Treasury provides mortgage holders/investors with incentive payments in the form of a percentage of each dollar in principal reduction. Treasury has tripled the PRA incentive amounts offered to mortgage holders/investors for permanent modifications with trial periods effective on or after March 1, 2012. Participating servicers of loans not owned by the housing enterprises (Fannie Mae or Freddie Mac) must evaluate the benefit of principal reduction for mortgages with a loan-to-value (LTV) ratio that is greater than 115 percent when evaluating a homeowner for a HAMP first-lien modification. 
Servicers must adopt and follow PRA policies that treat all similarly situated loans in a consistent manner but are not required to offer principal reductions, even when the NPV calculations show that the expected value of the loan’s cash flows would be higher with a principal reduction than without it. When servicers include principal reduction in modifications under PRA, the reduction is initially treated as noninterest-bearing principal forbearance. If the borrower is in good standing on the first, second, and third anniversaries of the effective date of the modification’s trial period, one-third of the principal reduction amount is forgiven on each anniversary. Home Affordable Foreclosure Alternatives (HAFA) Program. Under this program, servicers offer foreclosure alternatives (short sales and deeds-in-lieu of foreclosure) to borrowers who meet the basic eligibility requirements for HAMP and do not qualify for a HAMP trial modification, do not successfully complete a HAMP trial modification, default on a modification (miss three or more consecutive payments), or request a short sale or deed-in-lieu. The program provides incentive payments to investors, servicers, and borrowers for completing these foreclosure alternatives. Home Affordable Unemployment Program. This program offers assistance to borrowers who are suffering financial hardship due to unemployment. Borrowers are eligible for a 12-month forbearance period during which monthly mortgage payments are reduced or suspended. Servicers can extend the forbearance period at their discretion if the borrower is still unemployed after the 12-month period. Borrowers who later find employment or whose forbearance period expires should be considered for a HAMP loan modification or a foreclosure alternative, such as the HAFA program. No TARP funds are provided to servicers under this program. 
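The PRA forbearance-to-forgiveness mechanics described earlier can be sketched as follows (the function and variable names are ours; the one-third forgiveness on each of the first three anniversaries, contingent on good standing, is from the program description):

```python
def pra_forbearance_remaining(principal_reduction, good_standing_anniversaries):
    """Principal reduction under PRA starts as non-interest-bearing
    forbearance; one-third of it is forgiven on each of the first three
    anniversaries on which the borrower is in good standing."""
    forgiven_thirds = min(good_standing_anniversaries, 3)
    return principal_reduction * (3 - forgiven_thirds) / 3

# Illustrative $30,000 principal reduction:
print(pra_forbearance_remaining(30_000, 0))  # 30000.0 (all still forbearance)
print(pra_forbearance_remaining(30_000, 2))  # 10000.0
print(pra_forbearance_remaining(30_000, 3))  # 0.0 (fully forgiven)
```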
Federal Housing Administration (FHA) and Rural Housing Service (RHS) modification programs (FHA-HAMP and Rural Development, or RD-HAMP, respectively). These programs are similar to HAMP Tier 1 and cover FHA-insured and RHS-guaranteed mortgage loans. If a modified FHA-insured or RHS-guaranteed mortgage loan meets Treasury’s eligibility criteria, the borrower and servicer can receive TARP-funded incentive payments from Treasury. In 2009, Treasury entered into agreements with Fannie Mae and Freddie Mac to act as financial agents for MHA. Fannie Mae serves as the MHA program administrator and is responsible for developing and administering program operations, including registering, executing participation agreements with, and collecting data from servicers and providing ongoing servicer training and support. Freddie Mac serves as Treasury’s compliance agent and has a designated independent division, Making Home Affordable Compliance, which is responsible for assessing servicers’ compliance with program guidelines, including conducting on-site and remote servicer loan file reviews and audits. Several indicators of distress among homeowners with mortgages have shown improvements since the height of the housing crisis, and evidence suggests that recent loans are less risky than those originated before the crisis. As shown in figure 1, the percentage of mortgages in default—delinquent 90 days or more—is lower than it was when HAMP was introduced in 2009, according to data published by the Mortgage Bankers Association. The percentage of mortgages that are seriously delinquent (those in default or foreclosure) has declined from a peak in 2009 but remains elevated relative to the period from 2000 to 2007. In most parts of the country, a smaller proportion of homeowners owed 95 percent or more of their home’s value on a mortgage in 2015 than in 2008.
According to published data from CoreLogic, the percentage of properties with mortgages that are in negative equity or near negative equity has declined in most states since 2008, but several states have seen little improvement (see fig. 2). In two states, Nevada and Florida, 20 percent or more of homes that have mortgages fell into the negative equity or near negative equity category as of the second quarter of 2015. In other states—Rhode Island, Maryland, Illinois, New Jersey, Connecticut, and New Mexico—the percentage of homes with mortgages that fall into this category was not substantially lower in 2015 than in 2008. According to the Housing Credit Availability Index (HCAI) developed by the Urban Institute, the expected default risk of mortgages at origination has declined since 2006. The HCAI is based on the historical default rates of loans originated in selected years, for categories of loans defined by borrower characteristics (such as credit scores and debt-to-income ratios) and loan characteristics (such as the presence or absence of prepayment penalties or adjustable interest rates). The historical default rates, combined with data about loan terms and borrower characteristics at origination, are used to generate a measure of expected default risk at origination. The HCAI presents this measure in terms of the percentage of loans originated in a given quarter that will probably default, that is, become 90 or more days delinquent. This overall percentage is the sum of product risk (risk due to characteristics of loans) and borrower risk (risk due to characteristics of borrowers). When few loans with risky characteristics are originated, the product risk will be low. As shown in figure 3, the overall index remained between 10 and 17 percent for almost a decade (from 1998 through the 3rd quarter of 2007) and was below 6 percent as of the 2nd quarter of 2015. 
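The HCAI construction described above can be sketched as a weighted sum of historical default rates. The category shares, rates, and risk labels below are made-up illustrations, not Urban Institute data:

```python
def expected_default_risk(categories):
    """HCAI-style measure: weight each loan category's historical default
    rate by its share of current originations. The total is split into
    product risk (risky loan features) and borrower risk (everything else)."""
    total = sum(share * rate for share, rate, _ in categories)
    product = sum(share * rate for share, rate, kind in categories
                  if kind == "product")
    return total, product, total - product

# (share of originations, historical default rate, dominant risk type)
categories = [
    (0.70, 0.04, "borrower"),  # plain fixed-rate loans to prime borrowers
    (0.20, 0.10, "borrower"),  # lower-credit-score borrowers
    (0.10, 0.20, "product"),   # risky features, e.g. adjustable rates
]
total, product, borrower = expected_default_risk(categories)
print(round(total, 3), round(product, 3), round(borrower, 3))  # 0.068 0.02 0.048
```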
By the 2nd quarter of 2015, the index had increased from a low in 2013, but by less than 1 percentage point. This change over time in the HCAI may also suggest that mortgage credit is not as easily available as it was before the financial crisis. According to HOPE NOW, nearly 1.8 million mortgage loan modifications were completed in 2010, and this number has steadily decreased since that time, to a total of about 330,000 modifications in the first 3 quarters of 2015. The HOPE NOW estimate includes both HAMP and non-HAMP modifications and is based on data from Treasury as well as data from mortgage servicers, extrapolated to produce an estimate of the entire U.S. mortgage market. According to the HOPE NOW data, in 2009, the first year in which HAMP modifications were available, HAMP Tier 1 permanent modifications accounted for about 5 percent of the total number of modifications completed that year. However, HAMP was not in place for the full year, and servicers did not report the first permanent HAMP modifications until the latter part of 2009. According to the HOPE NOW data, HAMP permanent modifications (Tier 1 and Tier 2 combined) have accounted for between 22 and 34 percent of the total number of modifications completed each year since 2010. The same data indicate that HAMP’s percentage of the total number of modifications decreased in 2012 and 2013 over the previous years, increased in 2014, and remained above the 2012 level through the first 3 quarters of 2015. Regarding the vintage of loans being modified, OCC data suggest that pre-2009 loans represented the majority of modifications completed in the first half of 2015 and in earlier years. OCC data, which are based on loans serviced by national banks that report to OCC for its quarterly Mortgage Metrics report, show that loans originated before 2009 represent the vast majority of all modifications performed since HAMP was introduced in 2009, but the share represented by pre-2009 loans has been decreasing. 
In the first 3 quarters of 2015, modifications of pre-2009 loans represented 68 percent of the total number of modifications in OCC’s Mortgage Metrics Report portfolio. According to the OCC Mortgage Metrics data, HAMP modifications have resulted in greater payment reductions than non-HAMP modifications. As described above, HAMP modifications are designed to help borrowers stay in their homes by reducing monthly payments. Figure 4 compares modifications under one of the HAMP programs—including FHA-HAMP and enterprise HAMP modifications—with modifications performed outside of HAMP. In both cases, only modifications of mortgages originated before 2009 are included. As shown in figure 4, modifications performed outside of HAMP in 2009 resulted in an approximately 10 percent median payment reduction, while modifications performed in one of the HAMP programs resulted in an approximately 39 percent median payment reduction. These two numbers have gradually converged over time, and, by the third quarter of 2015, non-HAMP modified mortgages received a median 22 percent payment reduction, while HAMP-modified mortgages received a median 29 percent payment reduction. Treasury did not update its original estimate of borrower participation in the Making Home Affordable program between 2009 and 2015. Our prior work has concluded that conducting reviews of unexpended balances can help agencies identify funds that are not likely to be used. Treasury officials previously indicated that they cannot reliably estimate future borrower participation and likely program expenditures due to inherent limitations of the available data. While no estimate of future participation and expenditures can be made with complete certainty, our own analysis of data from Treasury and a private vendor resulted in estimates of borrower participation and cost projections that ranged from Treasury using all available MHA funds to an estimated surplus of $2.5 billion.
By assessing likely future program participation and related expenditures, Treasury could create opportunities for it and Congress to identify and use any likely unexpended funds for other priorities. In providing technical comments to this report, Treasury officials provided us with analysis of expected future program participation and related expenditures for the MHA program as a whole—the first such analysis since 2009. Prior GAO work has concluded that conducting reviews of unexpended balances can help agencies identify opportunities to achieve budgetary benefits. This work identified four key questions to consider when evaluating unexpended balances:
1. What mission and goals is the account or program supporting?
2. What are the sources and fiscal characteristics of the funding?
3. What factors affect the size and composition of the unexpended balance?
4. How does the agency estimate and manage unexpended balances?
This last question has particular relevance for the MHA program given the approaching deadline of December 31, 2016, for entry into the program. Understanding an agency’s processes for estimating and managing carryover balances provides information that can be assessed to determine how effectively the agency anticipates program needs and helps ensure the most efficient use of resources. In our September 2013 report on evaluating balances in federal accounts, we identified several things to consider when attempting to understand how an agency estimates and manages carryover balances, as the following examples illustrate:
1. What assumptions or factors did the agency incorporate into its estimate of the account’s carryover balance (e.g., historical experience, demand models)?
2. Does the agency have a routine mechanism for reviewing its obligations and determining whether there are opportunities to deobligate funds (e.g., written procedures or ad hoc processes)?
3. What is the agency’s timeline for obligating and expending funds in the account?
4. What is the spendout rate after funds have been obligated?
We also found in our 2013 report that if an agency does not have a robust strategy in place to manage carryover balances or is unable to adequately explain or support the reported carryover balance, balances may either fall too low to efficiently manage operations or rise to unnecessarily high levels. In the latter case, there are potential opportunities for those funds to be used more efficiently elsewhere. For example, if Treasury were to identify and deobligate any MHA funds that are not likely to be expended, these funds may then be available for Congress to permanently rescind and use elsewhere for other priorities. In 2009, Treasury announced that as many as 3 million to 4 million borrowers who were at risk of default and foreclosure could be offered a loan modification under HAMP. In our July 2009 report, which reviewed these estimates, we found that Treasury’s estimate may have been overstated, reflecting uncertainty resulting from data gaps and the numerous assumptions that had to be made. In addition, we noted that documentation of the many assumptions and calculations necessary for the analysis was incomplete and that Treasury had not specified its plans for systematically updating key assumptions and calculations. We concluded that to improve the validity of the projection, the process would need to be supported by detailed information and complete documentation, and the key assumptions and calculations would need to be regularly reviewed and updated. Based on those findings, we recommended that Treasury institute a system to routinely review and update key assumptions and projections about the housing market and the behavior of mortgage holders, borrowers, and servicers, revising the projections as necessary to assess the program’s effectiveness and structure. 
To address our recommendation, Treasury began obtaining information from the Mortgage Bankers Association to update its estimate of the number of HAMP-eligible borrowers. In August 2009 Treasury began publicly reporting monthly data on the estimated eligible loans and in January 2010 began publicly reporting data on the estimated eligible borrowers for the HAMP Tier 1 program. However, Treasury subsequently discontinued that practice after its February 2014 MHA Program Performance Report, moving instead to quarterly reporting. Instead of producing updated estimates of future program participation and related expenditures, Treasury historically had assumed that all funds obligated for MHA would be spent. Officials said that they focus on monitoring the housing market and the behavior of its participating loan servicers. For example, Treasury has been using a monthly report based on servicer-reported data of individual transactions to monitor expenditures in the aggregate and at the individual servicer level across all MHA programs. In addition to Treasury’s monitoring reports, Fannie Mae, in its role as financial agent, provides Treasury with a consolidated estimate of potential future HAMP participation based on survey data that it receives from MHA servicers. Additionally, Fannie Mae continues to provide Treasury with an internal estimate of potentially eligible HAMP Tier 1 borrowers based on a combination of industry data and information received from MHA servicers (known as a “waterfall”). Treasury officials have indicated that they have historically assumed that all funds obligated for MHA would be spent under that program, and, therefore, had not been analyzing likely unexpended or excess MHA funds that could potentially be deobligated. Additionally, Treasury officials previously indicated that they had not found the servicer survey data to be reliable predictors of future participation. 
Instead, Treasury uses the servicer surveys to look for trends in the HAMP modification activity data and as a vehicle for discussions with servicers on their approaches to the MHA program. Treasury officials also questioned the utility of the waterfall, given several limitations. First, it may not be possible to acquire full knowledge from available industry data sources of the factors that would make a borrower eligible, such as current income, the current occupancy/use of the property, any financial hardship, the borrower’s ability to meet applicable underwriting criteria, and the modified loan’s net present value status. Second, estimating the potentially eligible population for HAMP Tier 2, which the waterfall does not attempt to do, is difficult because (1) all borrowers are first considered for HAMP Tier 1, raising the possibility of double counting; (2) non-owner-occupied units are only eligible if they are used for rental purposes; (3) each servicer determines its own DTI range within Treasury’s established parameters; and (4) servicers have limited historical data for HAMP Tier 2 on which to base estimates. Third, Treasury officials noted that Fannie Mae’s waterfall is only a point-in-time estimate and does not account for borrowers who might become eligible in the future (a number that depends on a variety of changing economic and market factors). Instead, as we found in our July 2015 report, Treasury has focused on identifying ways to increase the reach and effectiveness of MHA programs by making program changes and modifications. For example, Treasury’s internal Action Memorandums for senior management that describe program changes and modifications, including its Streamline HAMP modification process, indicate that Treasury has assumed that all funds obligated for MHA would be spent. 
Over the course of the MHA program, Treasury has extended program deadlines and introduced new features designed to increase program participation and program expenditures. However, as previously noted, in December 2015 Congress mandated that the MHA program be terminated on December 31, 2016. Treasury has stated that any program expansions or modifications resulting in additional expenditures would remain within the amount obligated to the MHA program. Treasury’s Action Memorandum that discussed Streamline HAMP also explained that the amount that would ultimately be expended in connection with the program change was difficult to estimate and would depend on a number of factors that Treasury could not predict at that time (e.g., national mortgage delinquency rates and other economic conditions, borrower application rates, and the performance of modified loans over time). Treasury officials previously told us that Treasury’s mandate is to help as many struggling homeowners as possible. Treasury officials also told us that the MHA Program Administrator will be conducting program readiness assessments for Streamline HAMP starting in January 2016 and will request the servicers’ Streamline HAMP policies at that time. However, Treasury officials indicated that they would not require servicers to report estimates of the population eligible for Streamline HAMP as they do for HAMP Tier 1 and Tier 2. Given the common eligibility criteria, Treasury expects that the potentially eligible population for Streamline HAMP will significantly overlap with that of HAMP Tier 1 and HAMP Tier 2. Estimating the potentially eligible population for Streamline HAMP is challenging, in part because servicers can tailor the eligibility criteria to their unique loan portfolios. 
Because Treasury has not routinely estimated expenditures associated with likely future MHA program participation, it may not have identified in a timely manner whether it is retaining funds that may not be needed. Estimating likely future participation and associated expenditures would provide Treasury greater assurance that the funds it has obligated are necessary. As previously stated, as of October 2015, Treasury had $7.7 billion in unexpended obligations representing the amounts potentially available to servicers for future MHA transactions, including, but not limited to, HAMP modifications. The President’s fiscal year 2017 budget submission indicates that Treasury is now estimating a $4.7 billion reduction in total outlays for the MHA program. This estimate is based on analysis Treasury prepared assuming that future activity will be similar to recent activity. Treasury deobligated $2 billion of this $4.7 billion on February 25, 2016. Treasury officials told us that deobligating all MHA funds in excess of the current cost estimate would unduly increase the risk of insufficient funding for future program expenditures. Additionally, Treasury has indicated it plans to evaluate whether to deobligate additional funds after the complete universe of MHA transactions (i.e., modifications, short sales, and deeds-in-lieu of foreclosure) is known, sometime after entry into the MHA program is complete in late 2017. In our 2012 assessment of federal grant programs, we found that deobligating excess funds helps ensure that federal agency resources are not improperly spent and helps agencies maintain accurate accounting of their budgetary resources. Further, by preparing such estimates on a periodic basis, Treasury can achieve greater certainty over time and provide Congress with the opportunity to use those funds more efficiently elsewhere. 
Because prior to the President’s 2017 budget submission Treasury had not projected expenditures associated with likely future MHA participation or the likely resulting unexpended balances, we performed our own analysis to illustrate the potential range of estimated future participation in the various HAMP programs and generate cost projections from those estimates. These estimates are derived from data provided by a vendor and are based on the vendor’s two datasets, for which it had September 30, 2015, loan-level information, and an extrapolation to the remaining universe of loans for which the vendor had no data. We limited the analysis to loans that were determined by the vendor to be originated prior to January 2009 and that were not owned by the housing enterprises or insured or guaranteed by the government. These are just two of the criteria for determining whether a loan meets basic eligibility criteria for the MHA program. We also limited the analysis to reflect other program requirements, as discussed further below. Based upon these estimates of potential eligibility and program data on the typical costs of modifications, we produced estimates of potential future costs. We compared these estimates of future cost with available, but unused, funds as of October 16, 2015, to produce estimates of potential excess funds. For a complete description of our methodology, see appendix II. Our results provide a range of potential future costs and excess funds using various assumptions for three important factors—the extent to which potentially eligible loans are serviced by HAMP-approved servicers, whether the loan had been previously modified, and whether the loan was 60 or more days delinquent or otherwise met certain measures of risk that might indicate that the loan was in imminent danger of default. Specifics of each of the three factors follow:
1. We prepared higher and lower estimates of costs and excess funds, assuming for the higher estimate that 62 percent of loans were serviced by HAMP-approved servicers and for the lower estimate that 57 percent of loans were serviced by HAMP-approved servicers.
2. Within the higher and lower estimates, we projected estimated future program costs for (a) loans that were 60 or more days delinquent that met the basic HAMP eligibility criteria and (b) loans that were 60 or more days delinquent combined with loans that were in imminent danger of default. Loans that are 60 or more days delinquent are potentially eligible for all three HAMP loan modification categories (HAMP Tier 1, Tier 2, and Streamline HAMP). Loans in imminent danger of default are potentially eligible for HAMP Tier 1 and Tier 2 modifications.
3. Also within the higher and lower estimates, we estimated future costs based on two different assumptions—that 25 percent of the at-risk loans were currently in a loan modification (HAMP or non-HAMP) and that none of the at-risk loans were currently in a loan modification.
Our analysis found that the most inclusive assumptions—that more loans are serviced by HAMP-approved servicers, that both loans that are 60 or more days delinquent and loans that might be at risk of default would be eligible, and that no loans are already in modification—result in an estimate of Treasury using all available funds. Conversely, the least inclusive assumptions—that fewer loans are serviced by HAMP-approved servicers, that only loans that are currently 60 or more days delinquent would be eligible, and that 25 percent of loans are already in modification—result in an estimated surplus of $4.8 billion, not considering other non-HAMP MHA costs. (See table 1.) 
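The excess-funds estimates in table 1 reduce to simple subtraction against the roughly $7.7 billion in unexpended obligations. The following sketch reproduces that arithmetic using dollar amounts (in billions) reported in this section; note that the low-scenario HAMP cost of about $2.9 billion is not stated directly in the report and is inferred here from the reported $4.8 billion surplus.

```python
# Back-of-the-envelope check of the excess-funds figures (billions of dollars).
# All inputs come from the report except the low-scenario HAMP cost, which is
# inferred as available funds ($7.7B) minus the reported $4.8B surplus.
AVAILABLE = 7.7  # unexpended MHA obligations as of October 2015

def excess(estimated_cost):
    """Available funds minus estimated future cost; negative means a shortfall."""
    return round(AVAILABLE - estimated_cost, 1)

print(excess(8.6))        # most inclusive HAMP assumptions: -0.9 (all funds used)
print(excess(2.9))        # least inclusive HAMP assumptions (inferred cost): 4.8
print(excess(5.3))        # 20 percent imminent-default mix: 2.4
print(excess(2.9 + 2.3))  # lowest HAMP cost plus extrapolated non-HAMP MHA: 2.5
```

Rounding to one decimal place simply mirrors the billions-level precision used in the report.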
Our high and low estimates of potential unused or excess funds are based upon assumptions that generally would result in higher expected eligibility and participation, higher overall costs, and, therefore, lower unused or excess funds than might actually be realized. In particular, for all scenarios, we assumed a borrower participation rate of 100 percent. That is, we assumed that all borrowers that are eligible for modification are offered and accept the modification. We also assumed that all borrowers offered a trial modification would successfully complete their trial and convert the modification into a permanent one, which is not likely to be the case. In some scenarios we also established measures of mortgage risk to approximate a definition of loans that are not 60 or more days delinquent, but in imminent danger of default—a potential eligibility qualification for modification under the Tier 1 and Tier 2 programs. Based upon our analysis of at-risk loans, we assumed an approximately equal split between the modifications for the 60 days or more delinquent and the imminent default loans. According to Treasury data, approximately 20 percent of HAMP loan modifications went to borrowers that servicers determined were in imminent danger of default. If instead one assumed that loans in imminent danger of default would comprise 20 percent of the loans receiving a modification, then even our most inclusive estimate of future cost would be $5.3 billion (versus $8.6 billion) and the estimate of unused or excess funds would be much higher—$2.4 billion (versus a deficit of $0.9 billion). Not shown in table 1 are estimates for program costs related to the non-HAMP programs of MHA. 
Treasury officials told us that these non-HAMP MHA programs are important because servicers can use the funds obligated under the participation agreement for MHA programs other than HAMP. Therefore, funds that are not used for HAMP loan modifications could be used for non-HAMP MHA programs, such as the Home Affordable Foreclosure Alternative and the FHA/RD HAMP programs. According to Treasury, it expended approximately $467 million for the non-HAMP programs during fiscal year 2015. Extrapolating that amount over an additional 6 years (entry into the MHA programs ends on December 31, 2016, and incentive payments can extend up to 5 years after entry, as in the case of the 2MP program) would result in an additional $2.3 billion in MHA expenditures. Together with our lowest estimate of HAMP program costs, total MHA program costs could therefore be $5.2 billion, which would leave an estimated $2.5 billion in potentially excess funds. In contrast, our high estimate resulted in all available MHA program funds being spent. Also, as previously noted, our analysis does not include estimates for HAMP modification of loans owned by the housing enterprises or insured or guaranteed by the government. CBO has also conducted analysis that illustrates the uncertainty about whether or not Treasury will likely spend all the funds allocated to the MHA programs. CBO’s most recent analysis, published in March 2015, projected a $9 billion surplus (with Treasury estimating full use of $37 billion in funds and CBO estimating use of $28 billion) over the amount that Treasury has estimated, because CBO anticipated that fewer households would participate in housing programs. CBO had increased its estimate of likely expenditures of TARP-funded housing programs by $2 billion from its previous year’s estimate, primarily because of Treasury’s announcement in November 2014 of an additional $5,000 in principal reduction for participants in the sixth year of a mortgage modification. 
CBO’s projections were made before Treasury’s announcement of its Streamline HAMP program, which, if taken into account, would likely decrease CBO’s estimated surplus. Treasury has not consistently estimated expenditures related to likely future program participation or the likely resulting funding balances for the MHA programs because of concerns about the inherent limitations of the available data. In addition, Treasury assumed that it would use all funds obligated for MHA. However, conducting reviews of unexpended balances, including those that have been obligated, can help agencies redirect resources to other priorities or identify opportunities to achieve budgetary benefits. Additionally, if Treasury were to deobligate MHA funds that it determines are not likely to be expended, this may provide Congress with the opportunity to use those funds for other priorities. Our eligibility estimates, cost projections, and estimated unexpended balance figures represent a wide range of possible future outcomes, including a potential surplus in some cases, even when including potential future costs of non-HAMP housing programs. Treasury, with the assistance of its program administrators and servicers, is in a better position to conduct its own estimates of the number of eligible borrowers, potential costs of the program, and any balances that remain unexpended. We recognize that no estimate of future participation and expenditures can be made with complete certainty. However, Treasury has historically assumed that all MHA program funds will be spent and has instead focused on ways to expend the existing balance by making program changes and modifications. Congress recently enacted legislation that effectively terminates entry into the MHA programs after December 31, 2016, and authorized Treasury to move up to $2 billion in TARP funds to the Hardest Hit Fund. 
In February 2016, Treasury deobligated $2 billion and announced plans to move these funds to the Hardest Hit Fund. By taking action to estimate likely MHA expenditures and potential excess funds, Treasury could identify additional opportunities to deobligate those funds. To better ensure that taxpayer funds are being used effectively, Congress should consider permanently rescinding any Treasury-deobligated excess MHA balances that Treasury does not move into the Hardest Hit Fund. To provide Congress and others with accurate assessments of the funding that has been and will likely be used to help troubled borrowers and to identify any potential obligations not likely to be used, the Secretary of the Treasury should (1) review potential unexpended balances by estimating future expenditures of the MHA program; and (2) deobligate funds that its review shows will likely not be expended and obligate up to $2 billion of such funds to the TARP-funded Hardest Hit Fund, as authorized by the Consolidated Appropriations Act, 2016. We provided a draft of this report to Treasury, OCC, FHFA, Fannie Mae, and Freddie Mac for review and comment. OCC, FHFA, and Fannie Mae had no technical comments and did not provide written comments. Treasury provided written comments, which are presented in appendix III. In addition, Treasury and Freddie Mac provided technical comments that we incorporated as appropriate throughout the report. Additionally, Treasury provided information on recent analyses of MHA obligations and actions to deobligate funds, which are discussed below. In its comment letter, Treasury agreed with our recommendations and stated that it had updated its cost estimates for MHA and planned to deobligate $2 billion from the program, which it did on February 25, 2016. 
Treasury stated in its comment letter that it agreed with the statement in the draft report that the recent congressional action to terminate MHA on December 31, 2016, provided Treasury with greater certainty and opportunity with respect to estimating and reprogramming excess MHA fund balances. In addition, Treasury provided information related to actions to deobligate certain funds, which we have incorporated as appropriate. Specifically, Treasury noted that its updated cost estimates had identified an additional $2.7 billion in potential excess MHA funds but that deobligating all MHA funds in excess of the current cost estimate would unduly increase the risk of insufficient funding for future program expenditures. Instead, Treasury stated that it will evaluate whether to deobligate additional funds after the complete universe of MHA transactions is known in late 2017. According to Treasury, once servicers have reported all final transactions to the MHA system of record, it plans to calculate the maximum potential expenditures under MHA and deobligate excess funds, as appropriate. Given the uncertainties in estimating future participation and the associated expenditures, in particular the impact of the Streamline HAMP program, it will be important for Treasury to update its cost estimates as additional information becomes available and to take timely action to deobligate likely excess funds. We have updated the relevant sections of the report to reflect these new developments and added language reflecting Treasury’s planned actions. We are sending copies of this report to the appropriate congressional committees. This report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. In response to a provision in the Emergency Economic Stabilization Act of 2008 that requires GAO to issue a report on TARP every 60 days, this report examines to what extent the Department of the Treasury (Treasury) is reviewing unexpended balances and cost projections for the Troubled Asset Relief Program (TARP)-funded Making Home Affordable (MHA) program. To assess changes in mortgage performance since 2009 and the state of the loan modification market, we analyzed summary data from (1) the Mortgage Bankers Association, CoreLogic, Inc., and the Urban Institute on mortgage delinquencies, negative equity, and credit availability, and (2) the HOPE NOW Alliance and the Office of the Comptroller of the Currency (OCC) on mortgage loan modifications completed between January 1, 2009, and September 30, 2015, by servicers that report data to OCC for its Mortgage Metrics Report. At our request, OCC provided us with data summaries not published in the Mortgage Metrics reports, such as analyses of the Mortgage Metrics portfolio by date of origination of the modified loans. We also used data on negative equity on homes published by CoreLogic, Inc. and data on a measure of housing credit availability published by the Urban Institute. While we did not independently confirm the accuracy of these summary data, we took steps to ensure the data were sufficiently reliable for our purposes, such as reviewing the data with officials familiar with generating the data and reviewing related documentation. We found that the data were sufficiently reliable for our purposes. To assess the extent to which Treasury is reviewing unexpended balances and cost projections for the MHA program, we collected and reviewed internal Treasury memorandums on the purpose and justification of program changes made in 2014 and 2015. 
We reviewed Fannie Mae servicer survey results as well as Fannie Mae projections of eligible borrowers and loans to understand the factors that might affect program participation. We reviewed internal Treasury estimates of the average cost of modifications, and of obligations, future expenditures, and remaining funds for the MHA programs. We also reviewed a prior GAO report on best practices concerning reviews of unexpended balances and cost projections, which we used as criteria to evaluate the extent to which Treasury is reviewing unexpended balances and cost projections for the MHA programs. In addition, we conducted our own analysis of potential future program participation and the likely associated costs to illustrate the potential for unexpended balances. To do so, we used analyses as of September 30, 2015, that we directed and that were prepared by a private vendor of mortgage data—Black Knight Data & Analytics, LLC (Black Knight)—as detailed in appendix II. We took a number of steps to help ensure the reliability of the data and analyses we purchased from Black Knight. For example, we reviewed related documentation, such as Black Knight’s technical quote in response to our solicitation. We discussed with Black Knight officials Black Knight’s internal procedures for ensuring data reliability and the process by which they completed the work we requested. We reviewed information provided by Black Knight describing its quality control process. We also conducted reasonableness checks on certain data elements comparing the Black Knight data to that of other industry data sources, such as the Mortgage Bankers Association and CoreLogic, Inc. We determined the data were sufficiently reliable for our purposes. Further, we analyzed the Congressional Budget Office’s (CBO) most recent published analysis of projected TARP spending. We had previously spoken to CBO officials about their cost estimates for the MHA program. 
We confirmed that they had not changed how they calculated their cost estimates. We also conducted interviews and reviewed past records of interviews with Treasury officials about the status of the programs, including any future program changes, and their projections for completing expenditure of TARP-housing funds. We conducted this performance audit from August 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives. We performed our own analysis to illustrate the potential range of estimated future participation in HAMP and generate cost projections. These estimates are derived from September 30, 2015, summary data provided by a vendor and are based on the vendor’s two datasets and an extrapolation to the remaining universe of loans for which the vendor had no data. The first of the datasets, which includes only data reported by HAMP-approved servicers, includes loans for which loss mitigation actions are known (loss mitigation known). The second dataset includes loans for which loss mitigation actions are not identified (loss mitigation unknown). These data were provided to the vendor by a variety of loan servicers, the majority of which were approved for HAMP, according to the vendor. Based on known factors such as loan type and vintage, the vendor extrapolated to the remaining universe of loans for which it had no loan-specific data. The third part of our analysis estimated future HAMP expenditures and potential unexpended balances for the MHA program using estimated costs for the various HAMP loan modification types and various exclusion scenarios. 
To perform this analysis, we first assumed that modifications of potentially eligible loans that were 60 or more days delinquent would be split between HAMP Tier 1 and HAMP Tier 2/Streamline HAMP modifications at a ratio of 24.5 percent to 75.5 percent. This assumption was based on the expectation that Streamline HAMP would have similar participation rates, given current trends reported by the enterprises and the increases in HAMP Tier 2 relative to HAMP Tier 1 observed during 2015. We further assumed that modifications of potentially eligible loans that were not 60 days delinquent and had two or more risk factors would be split between HAMP Tier 1 and HAMP Tier 2 at a ratio of 50 percent to 50 percent (loans that are not 90 or more days delinquent are not eligible for Streamline HAMP), based on the split between new HAMP modifications made during calendar year 2015. We then reduced the estimate of potentially at-risk eligible loans to account for various exclusions, or reasons that a loan modification may not be offered. Depending on the particular modification program, these exclusions can include such things as an unemployed borrower, a vacant property, a debt-to-income ratio of less than 31 percent, negative net present value test results, and investor restrictions. Overall, based on servicer survey data provided by Treasury, we assumed that the combined exclusions would be about 50 percent for HAMP Tier 1 and 42 percent for both HAMP Tier 2 and Streamline HAMP. The estimate further accounted for two scenarios for loans that would not be offered further modification: one in which 25 percent of the at-risk loans were assumed to currently be in a modification and one in which no at-risk loans were assumed to currently be in a modification. Finally, we applied the average expected cost of a HAMP Tier 1 modification to HAMP Tier 1 modifications, and the average expected cost of a HAMP Tier 2 modification to both HAMP Tier 2 and Streamline HAMP modifications. 
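The estimation steps described above can be sketched as a short calculation. In the sketch below, the tier-split ratios, exclusion rates, and the 25 percent already-in-modification assumption come from the text; the loan counts and the average per-modification costs are hypothetical placeholders, since the report does not publish those inputs.

```python
# Hedged sketch of the HAMP cost-estimation steps described in this appendix.
# Split ratios, exclusion rates, and the already-in-modification share come
# from the report; loan counts and per-modification costs are hypothetical.

def estimate_hamp_cost(n_delq60, n_imminent, cost_tier1, cost_tier2,
                       share_in_mod=0.25):
    """Return an estimated future HAMP cost for one scenario."""
    # Drop loans assumed to already be in a modification (25% or 0% scenarios).
    delq = n_delq60 * (1 - share_in_mod)
    imminent = n_imminent * (1 - share_in_mod)

    # Tier splits: 60+ days delinquent loans go 24.5% to Tier 1 and 75.5% to
    # Tier 2/Streamline; imminent-default loans split 50/50 between Tier 1
    # and Tier 2 (they are not eligible for Streamline HAMP).
    tier1 = 0.245 * delq + 0.50 * imminent
    tier2 = 0.755 * delq + 0.50 * imminent

    # Apply combined exclusion rates (e.g., negative NPV, investor limits):
    # about 50 percent for Tier 1 and 42 percent for Tier 2/Streamline.
    tier1 *= 1 - 0.50
    tier2 *= 1 - 0.42

    # The Tier 2 average cost is also applied to Streamline modifications.
    return tier1 * cost_tier1 + tier2 * cost_tier2

# Hypothetical inputs: 1.0 million 60+ days delinquent loans, 200,000
# imminent-default loans, and $60,000 / $30,000 average incentive costs.
cost = estimate_hamp_cost(1_000_000, 200_000, 60_000, 30_000)
```

Running this for both already-in-modification scenarios (share_in_mod=0.25 and share_in_mod=0.0), under the higher and lower servicer-share assumptions, yields the kind of cost range that underlies table 5.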
Our analyses resulted in a range of estimated unused or excess funds, from a surplus of $4.8 billion to Treasury using all available funds, depending on the share of HAMP-approved servicers represented in the data, the definition of at-risk borrowers, and the percentage of loans currently in modification (see table 5). It is important to recognize that these high and low estimates of potential unused or excess funds are based on assumptions that generally would result in higher expected eligibility and participation, higher overall costs, and therefore lower unexpended balances than might be realized. In particular, we assume a borrower participation rate of 100 percent. Furthermore, according to Treasury, approximately 20 percent of HAMP loan modifications are for borrowers in imminent default. In our calculations shown in table 5, the analysis assumed an approximately equal split between modifications for 60 days-plus delinquent loans and imminent default loans. If one instead assumed that loans in imminent danger of default would make up 20 percent of the at-risk loans, then the estimated future cost under the low participation estimate would be only $5.3 billion (versus $8.6 billion) and the estimate of unused or excess funds would be much higher: $2.4 billion (versus -$0.9 billion). Not shown in the table are estimates for program costs related to the non-HAMP programs of MHA. These non-HAMP MHA programs are important because servicers can use the funds obligated under the participation agreement for MHA programs other than HAMP, and therefore funds not needed for HAMP loan modifications could be used for non-HAMP MHA programs, such as the Home Affordable Foreclosure Alternatives and FHA/RD HAMP programs. According to Treasury, it expended approximately $467 million for the non-HAMP programs during fiscal year 2015.
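The imminent-default comparison above can be checked with back-of-the-envelope arithmetic, assuming the $7.7 billion available MHA balance cited earlier in this report.

```python
# Back-of-the-envelope check of the imminent-default scenario.
# All figures in billions of dollars; $7.7 billion is the available
# MHA balance as of October 16, 2015, cited earlier.
available = 7.7

cost_equal_split = 8.6     # ~equal split between 60+ delinquent and imminent default
cost_20pct_imminent = 5.3  # imminent default assumed at 20% of at-risk loans

excess_equal_split = available - cost_equal_split  # about -0.9 (a shortfall)
excess_20pct = available - cost_20pct_imminent     # about 2.4
```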
Extrapolating that amount over an additional 6 years (entry into the MHA programs ends on December 31, 2016, and incentive payments for certain non-HAMP programs can extend up to 5 years after entry) would result in an additional $2.3 billion in MHA expenditures. Together with our lowest estimate of HAMP program costs, total MHA program costs could therefore be $5.2 billion, which would leave an estimated $2.5 billion in potentially excess funds. In contrast, our high estimate resulted in all available MHA program funds being spent. Also, as previously noted, our analysis does not include estimates for HAMP modifications of loans owned by the housing enterprises or insured or guaranteed by the government. In addition to the contact named above, John A. Karikari and Harry Medina (Assistant Directors), Jon D. Menaster (Analyst-in-Charge), Theodore Alexander, Bethany M. Benitez, Emily R. Chalmers, William R. Chatlos, Lynda E. Downing, Carol M. Henn, Marc Molino, Oliver M. Richard, Estelle M. Tsay-Huang, James D. Vitarello, and William T. Woods made key contributions to this report.
Since 2009 Treasury has obligated $27.8 billion in TARP funds through its MHA program to help struggling homeowners avoid foreclosure. The Emergency Economic Stabilization Act of 2008 includes a provision for GAO to report every 60 days on TARP activities. This report examines the extent to which Treasury is reviewing unexpended balances and cost projections for the MHA programs. To do this work, GAO used 2015 mortgage and other data from a private vendor and Treasury to help illustrate potential future costs of MHA/HAMP, reviewed internal Treasury documents, and interviewed relevant federal agency officials. The U.S. Department of the Treasury (Treasury) monitors activity and aggregate expenditures under its Troubled Asset Relief Program (TARP)-funded Making Home Affordable (MHA) program, but it has not instituted a system to review the extent that it will use the full available program balance ($7.7 billion as of October 16, 2015). In a July 2009 report, GAO found that Treasury's estimates of program participation may have been overstated, reflecting uncertainty caused by data gaps and assumptions that had to be made, and recommended that Treasury periodically review and update its estimates. In response, Treasury started performing periodic estimates of the eligible HAMP population. Treasury officials previously told GAO that they could not reliably estimate future participation levels due to data limitations and that they assumed that all available MHA funds would be spent. GAO recognizes that no estimate of future participation and expenditures can be made with certainty. But prior GAO work has concluded that reviewing unexpended balances, including those that have been obligated, can help agencies identify possible budgetary savings. 
Moreover, Congress's recent action to limit entry into the MHA programs after December 31, 2016, and to allow Treasury to obligate up to $2 billion in TARP funds, including MHA funds, to the Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund), provides Treasury with greater certainty and opportunity with respect to estimating and reprogramming excess MHA fund balances. Since then, the President's 2017 Budget identified $4.7 billion in potential excess funds, and Treasury has announced its intention to transfer $2 billion of these funds to the Hardest Hit Fund. Officials said that deobligating additional amounts would present undue risk of having insufficient funds, and that further estimates of excess funds should await the completion of all new activity. GAO performed its own analysis of September 2015 mortgage data to estimate potential future HAMP participation and costs. This analysis resulted in estimates of MHA program balances as of October 16, 2015, that ranged from using all available funds to a surplus of $2.5 billion. In preparing these estimates, GAO attempted to provide a wide range of possible outcomes and generally used inclusive assumptions. Thus the actual number of eligible loans is likely to be lower and the unexpended balances higher than GAO's estimates. Taking action to estimate likely MHA expenditures allows Treasury to deobligate excess funds and, as appropriate, move funds to the Hardest Hit Fund. To the extent that additional funds may be deobligated, Congress may then have the opportunity to use those funds on other priorities. GAO is making two recommendations to Treasury and has one matter for congressional consideration. Treasury should (1) estimate future expenditures for the MHA program and any unexpended balances and (2) deobligate funds that its review shows will likely not be expended and move up to $2 billion of such funds to the TARP-funded Hardest Hit Fund as authorized. 
Congress should consider permanently rescinding any deobligated MHA funds that are not moved to the Hardest Hit Fund and make them available for other priorities. Treasury agreed with our recommendations and indicated that it has updated its cost estimates and subsequently deobligated $2 billion of MHA funds on February 25, 2016.
The LDA, as amended by HLOGA, requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and to file quarterly reports disclosing their lobbying activity. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point. Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. No specific statutory requirements exist for lobbyists to generate or maintain documentation in support of the information disclosed in the reports they file. However, guidance issued by the Secretary of the Senate and the Clerk of the House recommends that lobbyists retain copies of their filings and supporting documentation for at least 6 years after they file their reports. The LDA requires that the Secretary of the Senate and the Clerk of the House provide guidance and assistance on registration and reporting requirements and develop common standards, rules, and procedures for LDA compliance. The Secretary of the Senate and the Clerk of the House review the guidance semiannually; it was last reviewed on December 12, 2013, and last revised on February 15, 2013. The guidance provides definitions of LDA terms, elaborates on the registration and reporting requirements, includes specific examples of different scenarios, and explains why certain scenarios do or do not prompt disclosure under the LDA. The Secretary of the Senate and the Clerk of the House told us they continue to consider the information we report on lobbying disclosure compliance when they periodically update the guidance. In addition, they told us they send quarterly e-mails to registered lobbyists that address common compliance issues as well as reminders to file reports by the due dates.
The LDA defines a lobbyist as an individual who is employed or retained by a client for compensation, who has made more than one lobbying contact (a written or oral communication to a covered executive or legislative branch official made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who lobby on behalf of a client other than that person or entity. Organizations employing in-house lobbyists file only one registration. An organization is exempt from filing if its total expenses in connection with lobbying activities are not expected to exceed $12,500; amounts are adjusted for inflation and published in LDA guidance. Quarterly LD-2 reports must disclose reported income (or expenses, for organizations with in-house lobbyists) related to lobbying activities during the quarter, rounded to the nearest $10,000. The LDA also requires lobbyists to report certain political contributions semiannually in the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each lobbying firm registered to lobby and by each individual listed as a lobbyist on a firm’s lobbying report.
The lobbyists or lobbying firms must: list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which they made contributions equal to or exceeding $200 in the aggregate during the semiannual period; report contributions made to presidential library foundations and presidential inaugural committees; report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official or to pay the costs of a meeting or other event held by or in the name of a covered official; and certify that they have read and are familiar with the gift and travel rules of Congress and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules. The Secretary of the Senate, the Clerk of the House, and the Office are responsible for ensuring compliance with the LDA. The Secretary of the Senate and the Clerk of the House notify lobbyists or lobbying firms in writing that they are not complying with reporting requirements in the LDA and subsequently refer those lobbyists who fail to provide an appropriate response to the Office. The Office researches these referrals and sends additional noncompliance notices to the lobbyists or lobbying firms, requesting that they file reports or terminate their registration. If the Office does not receive a response after 60 days, it decides whether to pursue a civil or criminal case against each noncompliant lobbyist. A civil case could lead to penalties up to $200,000, while a criminal case—usually pursued if a lobbyist’s noncompliance is found to be knowing and corrupt—could lead to a maximum of 5 years in prison. 
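The statutory definition of a lobbyist described above has three prongs. A minimal sketch follows; the function and parameter names are illustrative, not statutory terms, and the thresholds are those stated in the text.

```python
# Minimal sketch of the LDA's three-part "lobbyist" test described above.
# Names are illustrative, not statutory terms.

def meets_lobbyist_definition(compensated: bool,
                              lobbying_contacts: int,
                              pct_time_on_lobbying: float) -> bool:
    """True only if all three prongs of the definition are met."""
    return (compensated
            and lobbying_contacts > 1          # more than one lobbying contact
            and pct_time_on_lobbying >= 20.0)  # at least 20% of time for the client
```

For example, an individual who is compensated and makes several contacts but spends only 10 percent of his or her time on lobbying for the client would not meet the definition.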
Of the 3,034 new registrants we identified for the time periods corresponding to our review, we matched 2,925 (96 percent) to disclosure reports filed for the quarter in which they first registered. These results are consistent with prior reviews. To determine whether new registrants were meeting the requirement to file, we matched newly filed registrations in the third and fourth quarters of 2012 and the first and second quarters of 2013 from the House lobbying disclosure database to their corresponding quarterly disclosure reports. We did this using an electronic matching algorithm that allows for misspellings and other minor inconsistencies between the registrations and reports. Figure 1 shows that most newly registered lobbyists filed their disclosure reports as required from 2010 through 2013. For selected elements of lobbyists’ LD-2 reports that can be generalized to the population of lobbying reports, our findings were consistent from year to year unless otherwise noted. We used tests that adjusted for multiple comparisons to assess the statistical significance of changes over time. Most lobbyists reporting $5,000 or more in income or expenses provided written documentation, to varying degrees, for the reporting elements in their disclosure reports. For this year’s review, lobbyists for an estimated 96 percent of LD-2 reports (98 of 102) provided written documentation for the income and expenses reported for the third and fourth quarters of 2012 and the first and second quarters of 2013. The most common forms of documentation provided were invoices for income and internal expense reports for expenses. Figure 2 shows that for most LD-2 reports sampled from 2010 through 2013, lobbyists provided documentation for income and expenses. Figure 3 shows that for some LD-2 reports, lobbyists rounded their income or expenses incorrectly; we identified rounding errors on 33 percent of reports.
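The nearest-$10,000 rounding convention can be illustrated with a short sketch. The LDA guidance does not specify how to break ties at exact $5,000 increments, so the round-half-up choice below is an assumption for illustration only.

```python
# Sketch of the LD-2 rounding convention: income and expenses are reported
# rounded to the nearest $10,000. Tie-breaking (half up) is an assumption.

def round_to_nearest_10k(amount: int) -> int:
    """Round a nonnegative dollar amount to the nearest $10,000, half up."""
    return (amount + 5_000) // 10_000 * 10_000
```

Under this convention, reporting the exact amount (for example, $184,632 instead of $180,000) is the kind of rounding error discussed here.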
On 13 percent of those reports, lobbyists reported the exact amount of lobbying income or expenses instead of rounding to the nearest $10,000 as required. Rounding difficulties have been a recurring issue from 2010 through 2013. The LDA requires lobbyists to disclose lobbying contacts made to executive branch agencies on behalf of the client for the reporting period. This year, 42 of the 102 LD-2 reports in our sample disclosed lobbying activities at executive branch agencies. Of those, lobbyists provided documentation for all lobbying activities at executive branch agencies for 30 LD-2 reports. Figures 4 through 7 show that lobbyists for most LD-2 reports were able to provide documentation for selected elements of their LD-2 reports from 2010 through 2013. Lobbyists for an estimated 92 percent of LD-2 reports filed year-end 2012 or midyear 2013 LD-203 reports for all lobbyists and lobbying firms listed on the report, as required. Figure 8 shows that lobbyists for most lobbying firms filed contribution reports for lobbyists and lobbying firms as required for LD-2 reports from 2010 through 2013. All individual lobbyists and lobbying firms reporting lobbying activity are required to file LD-203 reports semiannually, even if they have no contributions to report, because they must certify compliance with the gift and travel rules. The LDA requires a lobbyist to disclose previously held covered positions when first registering as a lobbyist for a new client. This can be done either on the LD-1 registration or on the LD-2 quarterly filing when the lobbyist is added as a new lobbyist. This year, we estimate that 17 percent of all LD-2 reports did not properly disclose one or more previously held covered positions as required. Figure 9 shows the extent to which lobbyists failed to properly disclose one or more covered positions from 2010 through 2013.
As of April 10, 2014, lobbyists had amended 18 of the 104 disclosure reports in our original sample to change previously reported information. One of the 18 reports was amended twice: once after we notified the lobbyists of our review and again after we met with them. An additional 7 of the 18 reports were amended after we notified the lobbyists of our review but before we met with them. Finally, the remaining 10 of the 18 reports were amended after we met with the lobbyists to review their documentation. We cannot be certain how lobbyists not in our sample would have behaved had we not contacted them. However, the notable number of amended LD-2 reports in our sample each year following notification of our review suggests that our contact sometimes spurs lobbyists to scrutinize their reports more closely than they would have without our review. Table 1 lists reasons lobbying firms in our sample amended their LD-1 or LD-2 reports. As part of our review, we compared contributions listed on lobbyists’ and lobbying firms’ LD-203 reports against political contributions reported in the FEC database to identify whether reportable contributions were omitted from the LD-203 reports in our sample. The sample of LD-203 reports we reviewed originally contained 80 reports with contributions and 80 reports without contributions. We estimate that overall, for 2013, lobbyists failed to disclose one or more reportable contributions on 4 percent of reports. Table 2 illustrates that most lobbyists disclosed FEC-reportable contributions on their LD-203 reports as required from 2010 through 2013. Our 2013 sample included 92 different lobbying firms. Consistent with prior reviews, most lobbying firms reported that they found it very easy or somewhat easy to comply with reporting requirements.
Of the 92 different lobbying firms in our sample, 20 reported that the disclosure requirements were “very easy,” 59 reported them “somewhat easy,” and 9 reported them “somewhat difficult” or “very difficult” (see figure 10). Most lobbyists we interviewed rated the terms associated with LD-2 reporting requirements as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. This is consistent with prior reviews. Figures 11 through 15 show how lobbyists rated the ease of understanding the terms associated with LD-2 reporting requirements from 2010 through 2013. The Office stated that it continues to have sufficient personnel resources and authority under the LDA to enforce LD-2 reporting requirements, including imposing civil or criminal penalties for noncompliance with LD-2 reporting. Noncompliance refers to a lobbyist’s or lobbying firm’s failure to comply with LDA requirements. According to the Office, it has one contract paralegal specialist assigned full time, as well as five civil attorneys and one criminal attorney assigned part time, for LDA compliance work. In addition, the Office stated that it participates in a government-wide program that provides temporary access to attorneys to assist with LDA compliance. The temporarily assigned attorneys work with the contract paralegal specialist to contact referred lobbyists or lobbying firms that do not comply with the LDA. According to the Office, it has sufficient authority to enforce LD-203 compliance with the LDA for lobbying firms and certain individual lobbyists. However, it has difficulty pursuing the hundreds of LD-203 referrals that arise when a lobbying firm does not maintain, or a departing lobbyist does not leave, forwarding contact information. The LD-203 report does not provide contact information; it provides only the name of the lobbyist and the lobbying firm.
As a result, the Office does not have contact information to find the referred lobbyist and bring him or her into compliance. Office officials reported that many firms have assisted them by providing contact information for lobbyists; only a few firms have been unwilling to provide contact information for noncompliant lobbyists. Additionally, the Office stated that because the LDA requires registered lobbyists to file their own LD-203 reports and does not require lobbying firms to ensure that their registered lobbyists have complied with LD-203 filing requirements, the Office has no authority to hold lobbying firms responsible for a registered lobbyist who fails to comply with LD-203 requirements. Accordingly, when the Office does not have contact information to find a lobbyist who left a firm and cannot hold the firm responsible for the lobbyist’s noncompliance with lobbying disclosure requirements, it has no recourse to pursue enforcement action. In a prior report, we recommended that the Office develop a structured approach for tracking and recording its enforcement actions. The Office developed the LDA database to track the status of referrals and the enforcement actions it takes to bring lobbyists and lobbying firms into compliance with the LDA. To enforce compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports as required. The letters request that lobbyists comply with the law by promptly filing the appropriate disclosure reports and inform them of potential civil and criminal penalties for not complying. In addition to sending letters, a contractor sends e-mails to and calls lobbyists to inform them of the need to comply with LDA reporting requirements.
Not all referred lobbyists receive noncompliance letters, e-mails, or phone calls, because some lobbyists have terminated their registrations or filed the required disclosure reports before the Office received the referral. Office officials stated that lobbyists resolve their noncompliance issues by filing the reports or terminating their registration. Resolving referrals can take anywhere from a few days to years, depending on the circumstances. During this time, the Office monitors and reviews all outstanding referrals and uses summary reports from the database to track the overall number of referrals that become compliant as a result of receiving an e-mail, phone call, or noncompliance letter. In addition, more referred lobbyists are being contacted by e-mail and phone, which has decreased the number of noncompliance letters the Office sends. Officials from the Office stated that the majority of these e-mails and calls result in the registrant becoming compliant without a letter being sent. In our last report, the Office told us that its system collects information on e-mail and phone contacts in the notes section of its database, but that the database does not automatically tabulate the number of e-mails and phone calls to lobbyists as it does for letters sent. In March 2013, as part of closing discussions with the Office about the findings of our last lobbying disclosure report, we urged the Office to develop a mechanism to track e-mail and telephone contacts to individual lobbyists as part of its enforcement efforts. Since then, the Office has started tracking the number of e-mail and telephone contacts associated with its enforcement efforts to bring lobbyists and lobbying firms into compliance. These contacts are now included in the number of enforcement actions taken to bring lobbyists and lobbying firms into compliance.
As of January 16, 2014, the Office has received 2,722 referrals from both the Secretary of the Senate and the Clerk of the House for failure to comply with LD-2 reporting requirements cumulatively for calendar years 2009 through 2013. Figure 16 shows the number and status of the referrals received and the number of enforcement actions taken by the Office in its effort to bring lobbying firms into compliance. Enforcement actions include the number of letters, e-mails, and calls made by the Office. About 66 percent (1,787 of 2,722) of the total referrals received are now compliant because lobbying firms either filed their reports or terminated their registrations. In addition, some of the referrals were found to be compliant when the Office received the referral, and therefore no action was taken. This may occur when lobbying firms respond to the contact letters from the Secretary of the Senate and Clerk of the House after the Office has received the referrals. About 34 percent (922 of 2,722) of referrals are pending action because the Office was unable to locate the lobbying firm, did not receive a response from the firm, or plans to conduct additional research to determine if it can locate the lobbying firm. According to the Office, resolving referrals can take anywhere from a few days to years depending on the circumstances. Referrals may remain in pending status and may be monitored by the Office until it determines whether to pursue legal action against the registrant or dismiss certain referrals or until the registrant files the disclosure report or terminates his or her registration. The remaining 13 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased. The Office suspends enforcement actions against lobbyists or lobbying firms that are repeatedly referred for not filing disclosure reports but that do not have any current lobbying activity. 
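The LD-2 referral figures above can be verified with simple arithmetic: the compliant, pending, and no-action counts sum to the total, and the stated percentages follow from the counts.

```python
# Check of the LD-2 referral statistics cited above (as of January 16, 2014).
total = 2_722
compliant = 1_787                # filed reports or terminated registrations
pending = 922                    # firm not located, no response, or more research planned
no_action_or_suspended = 13      # out of business or deceased

assert compliant + pending + no_action_or_suspended == total

pct_compliant = round(100 * compliant / total)  # about 66
pct_pending = round(100 * pending / total)      # about 34
```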
The suspended lobbying firms are periodically monitored to determine whether they actively lobby in the future. As part of this monitoring, the Office checks the lobbying disclosure databases maintained by the Secretary of the Senate and the Clerk of the House. LD-203 referrals consist of two types: LD-203(R) referrals represent lobbying firms that have failed to file LD-203 reports for the firm, and LD-203 referrals represent the lobbyists at the lobbying firm who have failed to file their individual LD-203 reports as required. As of January 16, 2014, the Office had received 1,350 LD-203(R) referrals and 3,042 LD-203 referrals from the Secretary of the Senate and Clerk of the House cumulatively for calendar years 2009 through 2013. For LD-203 referrals, the Office sends noncompliance letters for the lobbyists to the registered lobbying firms listed on the LD-203 report because the lobbyist’s personal contact information is not listed on the report. Figure 17 shows the status of LD-203(R) referrals received and the number of enforcement actions taken by the Office in its effort to bring lobbying firms into compliance. About 43 percent (581 of 1,350) of the lobbying firms referred by the Secretary of the Senate and Clerk of the House for noncompliance during the 2009 through 2013 reporting periods are now considered compliant because they either have filed their reports or have terminated their registrations. About 57 percent (768 of 1,350) of the referrals are pending action because the Office was unable to locate the lobbying firm, did not receive a response from the firm, or plans to conduct additional research to determine whether it can locate the firm. As of January 16, 2014, the Office had also received 3,042 LD-203 referrals from the Secretary and Clerk for lobbyists who failed to comply with LD-203 reporting requirements for calendar years 2009 through 2013.
Figure 18 shows the status of those referrals and the number of enforcement actions taken by the Office in its effort to bring lobbyists into compliance. About 55 percent (1,676 of 3,042) of the lobbyists either have come into compliance by filing their reports or are no longer registered as lobbyists. About 44 percent (1,352 of 3,042) of the referrals are pending action because the Office was unable to locate the lobbyists, did not receive a response from the lobbyists, or plans to conduct additional research to determine whether it can locate the lobbyists. The Office said that many of the pending LD-203 referrals represent lobbyists who no longer lobby for the lobbying firms affiliated with the referrals, even though these firms may be listed on the lobbyist’s LD-203 report. In addition, Office officials stated that they continue to face challenges in increasing LD-203 compliance because the Office has little leverage to bring certain individual lobbyists into compliance. Many of the LD-203 referrals remain open in an attempt to locate lobbyists who are no longer employed by the lobbying firm and did not leave a forwarding address. As a result, it may take years to resolve the referrals and bring the lobbyists into compliance. Since the 2012 reporting period, the Office has identified nine registrants on its chronic offenders list for failure to comply with reporting requirements. Of the nine registrants, five filed the outstanding reports or terminated their registration after being contacted by an Assistant U.S. Attorney. The Office reached settlement agreements with two of the registrants for $50,000 and $30,000, respectively, in civil penalties for repeatedly failing to file disclosure reports. In December 2013, the Office obtained a default judgment of $200,000 against a registrant for repeated failure to file his LDA reports as required. In March 2014, the Office filed a civil complaint in the U.S.
District Court for the District of Columbia for a registrant’s failure to comply with LDA reporting requirements. The Office continues to monitor and review chronic offenders to determine appropriate enforcement actions, which may include considering legal action or dismissing certain cases. Over the past several years of reporting on lobbying disclosure, we have found that lobbyists reported understanding the terms and requirements, but their disclosure filings demonstrated some compliance difficulties. For example, a number of lobbyists had rounding errors in their reports, failed to disclose covered positions, or did not accurately disclose their lobbying activity with the House, the Senate, or executive agencies. As a result, after being contacted by us, lobbyists amended their reports to address these types of compliance difficulties. In our first lobbying disclosure report in September 2008, we concluded that the lobbying community could benefit from creating an organization to share examples of best practices for the types of records maintained to support filings; use the information gathered over an initial period to formulate minimum standards for recordkeeping; provide training for the lobbying community on reporting and disclosure requirements intended to help the community comply with the LDA; and report annually to the Secretary of the Senate and the Clerk of the House on opportunities to clarify existing guidance and ways to minimize sources of potential confusion for the lobbying community. The continuing difficulties that some lobbyists have demonstrated in their disclosure reports, coupled with the sustained public and congressional attention on lobbyists and their interactions with government officials, underscore the importance of accurate and public disclosure of such activities.
In that regard, we continue to believe that creating the type of organization we described in our first report could still benefit the lobbying community and the public interest. The activities of such an organization could help enhance enforcement of and compliance with the LDA, as amended by HLOGA, and improve the accuracy and value of the information reported to Congress. We provided a draft of this report to the Attorney General for review and comment. The Assistant U.S. Attorney for the District of Columbia responded on behalf of the Attorney General that the Department of Justice had no comments. We are sending copies of this report to the Attorney General, the Secretary of the Senate, the Clerk of the House of Representatives, and interested congressional committees and members. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4749 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Consistent with the mandate in the Honest Leadership and Open Government Act (HLOGA), our objectives were to determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA), by providing documentation to support information contained on registrations and reports filed under the LDA; identify challenges and potential improvements to compliance, if any; and describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (the Office) and the efforts the Office has made to improve enforcement of the LDA. To respond to our mandate, we used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives (Clerk of the House).
To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and spoke to officials responsible for maintaining the data. Although registrations and reports are filed through a single web portal, each chamber subsequently receives copies of the data and follows different data cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases caused by the differences in data processing. For example, Senate staff told us during previous reviews that they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database. As a result, the Senate database would be slightly less current than the House database on any given day pending review and clearance. House staff told us during previous reviews that they rely heavily on automated processing. They added that while they manually review reports that do not perfectly match information on file for a given registrant or client, they will approve and upload such reports as originally filed by each lobbyist, even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reasons to believe that the content of the Senate and House systems would vary substantially. For this review, we determined that House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure (LD-2) reports and for assessing whether newly filed registrants also filed required reports. 
We used the House database for sampling LD-2 reports from the third and fourth quarters of 2012 and the first and second quarters of 2013, as well as for sampling year-end 2012 and midyear 2013 political contributions (LD-203) reports, and finally for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process. However, we did consult with officials from each office, and they provided us with general background information at our request. To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 104 LD-2 reports from the third and fourth quarters of 2012 and the first and second quarters of 2013. We excluded reports with no lobbying activity or with income less than $5,000 from our sampling frame. We drew our sample from 65,489 activity reports filed for the third and fourth quarters of 2012 and the first and second quarters of 2013 available in the public House database, as of our final download date for each quarter. One LD-2 report was removed from the sample because we could not contact the firm, and it appears the firm has gone out of business. We treated this report as a nonrespondent and adjusted our sampling weights accordingly for analysis. Another LD-2 report was excluded because the lobbyist amended the LD-2 to reflect no lobbying activity after being notified of the review. This report was treated as out of scope. We adjusted for three comparisons to account for the three pairwise tests for each item examined. The inability to detect statistically significant differences across years may also be related to the nature of our sample, which was relatively small and was designed only for cross-sectional analysis. Our sample is based on a stratified random selection and is only one of a large number of samples that we may have drawn.
Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This interval would contain the actual population value for 95 percent of the samples that we could have drawn. The percentage estimates for 2013 have a 95 percent confidence interval of plus or minus 10.1 percentage points or less of the estimate itself, unless otherwise noted. For 2010 through 2012, the percentage estimates have a 95 percent confidence interval with a maximum of 11 percentage points. In a web-based survey, we asked lobbyists whether they could provide documentation for key elements of their LD-2 reports: the amount of income reported for lobbying activities, the amount of expenses reported on lobbying activities, the names of those lobbyists listed in the report, the houses of Congress and federal agencies that they lobbied, and the issue codes listed to describe their lobbying activity. After reviewing the survey results for completeness, we conducted interviews with the lobbyists and lobbying firms to review documentation they reported as having on their online survey for selected elements of their LD-2 reports. Prior to each interview, we conducted an open source search to identify lobbyists on each report who may have held a covered official position. We reviewed the lobbyists’ previous work histories by searching lobbying firms’ websites, LinkedIn, Leadership Directories, Legistorm, and Google. Prior to 2008, lobbyists were only required to disclose covered official positions held within 2 years of registering as a lobbyist for the client. HLOGA amended that time frame to require disclosure of positions held up to 20 years before the date the lobbyists first lobbied on behalf of the client.
Lobbyists are required to disclose previously held covered official positions either on the client registration (LD-1) or on the first LD-2 report when the lobbyist is added as “new.” Consequently, those who held covered official positions may have disclosed the information on the LD-1 or an LD-2 report filed prior to the report we examined as part of our random sample. Therefore, where we found evidence that a lobbyist previously held a covered official position and it was not disclosed on the LD-2 report under review, we then conducted an additional review of the publicly available Secretary of the Senate or Clerk of the House database. This was done to determine whether the lobbyist properly disclosed the covered official position on a prior report or LD-1. Finally, if a lobbyist appeared to hold a covered position that was not disclosed, we asked for an explanation at the interview with the lobbying firm to ensure that our research was accurate. In previous reports, we reported the lower bound of a 90 percent confidence interval to provide a minimum estimate of omitted covered positions and omitted contributions with a 95 percent confidence level. We did so to account for the possibility that our searches may have failed to identify all possible omitted covered positions and contributions. As we have developed our methodology over time, we are more confident in the comprehensiveness of our searches for these items. Accordingly, this report presents the estimated percentages for omitted contributions and omitted covered positions rather than the minimum estimates. As a result, percentage estimates for these items will differ slightly from the minimum percentage estimates presented in prior reports. In addition to examining the content of the LD-2 reports, we confirmed whether year-end 2012 or midyear 2013 LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample.
Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. To determine if the LDA’s requirement for registrants to file a report in the quarter of registration was met for the third and fourth quarters of 2012 and the first and second quarters of 2013, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using an electronic matching algorithm that includes strict and loose text matching procedures, we identified matching disclosure reports for 2,925, or 96 percent, of the 3,034 newly filed registrations. We began by standardizing client and registrant names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as “company” and “CO”). We then matched reports and registrations using the House identification number (which is linked to a unique registrant-client pair), as well as the names of the registrant and client. For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and registrant name, allowing for variations in the names to accommodate minor misspellings or typos. For these cases, we used professional judgment to determine whether cases with typos were sufficiently similar to consider as matches. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed stratified random samples of LD-203 reports from the 31,482 total LD-203 reports. The first sample contains 80 of the 10,227 reports with political contributions. 
The second contains 80 of the 21,255 reports listing no contributions. Each sample contains 40 reports from the year-end 2012 filing period and 40 reports from the midyear 2013 filing period. The samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the reports without contributions to within a 95 percent confidence interval of plus or minus 9.5 percentage points or less. Although our sample of LD-203 reports was not designed to detect differences over time, we conducted tests of significance for changes from 2010 to 2013 and found no statistically significant differences after adjusting for multiple comparisons. While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and designed only for cross-sectional analysis. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission’s (FEC) political contribution database. We interviewed FEC staff responsible for administering the database and determined that the data were sufficiently reliable for confirming whether an FEC-reportable contribution listed in the FEC database had been reported on an LD-203 report. We compared the FEC-reportable contributions reported on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, and we used professional judgment when assessing whether an individual listed is the same individual filing an LD-203. For contributions reported in the FEC database and not on the LD-203 report, we asked the lobbyists or organizations to explain why the contributions were not listed on the LD-203 report or to provide documentation of those contributions.
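The name standardization and strict/loose text-matching procedures described above can be sketched in code. This is a minimal illustration rather than GAO's actual algorithm: the abbreviation table, the 0.9 similarity threshold, and the use of Python's `difflib.SequenceMatcher` for the loose pass are all assumptions made for the example.

```python
import re
from difflib import SequenceMatcher

# Hypothetical abbreviation table; the actual standardization list GAO used is not published.
ABBREVIATIONS = {"company": "co", "corporation": "corp", "incorporated": "inc"}

def standardize(name: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace, and normalize abbreviations."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    return " ".join(ABBREVIATIONS.get(word, word) for word in name.split())

def similarity(a: str, b: str) -> float:
    """Similarity ratio between two standardized names, from 0.0 to 1.0."""
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio()

def match(registration: dict, reports: list, threshold: float = 0.9):
    """Return the first report matching a registration, or None.

    Strict pass: House identification number (unique to a registrant-client
    pair) plus exact standardized names. Loose pass: fuzzy name comparison
    to accommodate minor misspellings or typos.
    """
    for report in reports:
        if (report["house_id"] == registration["house_id"]
                and standardize(report["registrant"]) == standardize(registration["registrant"])
                and standardize(report["client"]) == standardize(registration["client"])):
            return report
    for report in reports:
        if (similarity(report["registrant"], registration["registrant"]) >= threshold
                and similarity(report["client"], registration["client"]) >= threshold):
            return report  # in practice, such cases are reviewed with professional judgment
    return None
```

Under this sketch, a registration for client "Acme Company" matches a report filed for "Acme Co." in the strict pass, because both standardize to "acme co"; a report with a typo such as "Smith & Jnes LLC" in the registrant name falls through to the loose pass.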
As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist’s LD-203 report. We did not estimate the percentage of other non-FEC political contributions that were omitted because they tend to constitute a small minority of all listed contributions and cannot be verified against an external source. To identify challenges to compliance, we used a web-based survey to obtain the views of 92 different lobbying firms included in our sample on any challenges to compliance. The number of different lobbying firms totals 92, which is less than our sample of 102 reports, because some lobbying firms had more than 1 LD-2 report included in our sample. We calculated our responses based on the number of different lobbying firms that we contacted rather than the number of interviews. Prior to our calculations, we removed the duplicate lobbying firms based on the most recent date of their responses. For those cases with the same response date, we kept the cases with the smallest assigned case identification number. To obtain their views, we asked them to rate the ease of complying with the LD-2 disclosure requirements using a scale of “very easy,” “somewhat easy,” “somewhat difficult,” or “very difficult.” In addition, using the same scale, we asked them to rate the ease of understanding the terms associated with LD-2 reporting requirements. To describe the resources and authorities available to the Office and its efforts to improve its LDA enforcement, we interviewed officials from the Office and obtained updated information on the capabilities of the system they established to track and report compliance trends and referrals, and other practices established to focus resources on enforcement of the LDA.
The Office provided us with updated reports from the tracking system on the number and status of referrals and chronically noncompliant lobbyists and lobbying firms. The mandate does not include identifying lobbyists who failed to register and report in accordance with LDA requirements, or determining whether, for those lobbyists who did register and report, all lobbying activity or contributions were disclosed. We conducted this performance audit from June 2013 through May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The random sample of lobbying disclosure reports we selected was based on unique combinations of registrant lobbyists and client names (see table 3). See table 4 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions. See table 5 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. In addition to the contact named above, Bill Reinsberg, Assistant Director; Shirley Jones, Assistant General Counsel; Crystal Bernard; Stuart Kaufman; Lois Hanshaw; Sharon Miller; Anna Maria Ortiz; Anthony Patterson; Robert Robinson; Stewart Small; and Katherine Wulff made key contributions to this report. Assisting with lobbyist file reviews were Vida Awumey and Patricia Norris.
The LDA requires lobbyists to file quarterly lobbying disclosure reports and semiannual reports on certain political contributions. The LDA also requires that GAO annually (1) audit the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify challenges to compliance that lobbyists report, and (3) describe the resources and authorities available to the Office in its role in enforcing LDA compliance and the efforts the Office has made to improve enforcement. This is GAO's seventh report under the mandate. GAO reviewed a stratified random sample of 104 quarterly disclosure LD-2 reports filed for the third and fourth quarters of 2012 and the first and second quarters of calendar year 2013. GAO also reviewed two random samples totaling 160 LD-203 reports from year-end 2012 and midyear 2013. This methodology allowed GAO to generalize to the population of 65,489 disclosure reports with $5,000 or more in lobbying activity and 31,482 reports of federal political campaign contributions. GAO also met with officials from the Office to obtain updated statuses on the Office's efforts to focus resources on lobbyists who fail to comply. GAO provided a draft of this report to the Attorney General for review and comment. On behalf of the Attorney General, the Assistant U.S. Attorney for the District of Columbia responded that the Department of Justice had no comments. Most lobbyists provided documentation for key elements of their disclosure reports to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA). For lobbying disclosure (LD-2) reports and political contribution (LD-203) reports, GAO estimated the following: Ninety-six percent of newly registered lobbyists filed LD-2 reports as required. Lobbyists are required to file LD-2 reports for the quarter in which they first register. Ninety-six percent could provide documentation for income and expenses.
However, 33 percent of these LD-2 reports were not properly rounded to the nearest $10,000. Ninety-two percent filed year-end 2012 or midyear 2013 LD-203 reports as required. Seventeen percent of all LD-2 reports did not properly disclose one or more previously held covered positions as required. Four percent of all LD-203 reports omitted one or more reportable political contributions that were documented in the Federal Election Commission database. These findings are generally consistent with GAO's reviews from 2010 through 2012 and can be generalized to the population of disclosure reports. Most lobbyists in GAO's sample rated the terms associated with LD-2 reporting as “very easy” or “somewhat easy” to understand with regard to meeting reporting requirements. However, some disclosure reports demonstrate compliance difficulties, such as failure to disclose covered positions or misreporting of income or expenses. In addition, lobbyists amended 18 of 104 original disclosure reports in GAO's sample to change previously reported information. The U.S. Attorney's Office for the District of Columbia (the Office) stated it has sufficient authority and resources to enforce LD-2 and LD-203 compliance with the LDA for lobbying firms and certain individual lobbyists. It has one contract paralegal working full time and six attorneys working part time on LDA enforcement issues. The Office continues its efforts to follow up on referrals for noncompliance with lobbying disclosure requirements by contacting lobbyists by e-mail, telephone, and letter. In March 2014, the Office filed a civil complaint against a lobbyist for failure to comply with LDA reporting requirements.
GAO's first report on lobbying disclosure under the LDA concluded that the lobbying community could benefit from creating an entity to share examples of best practices, provide training, and report annually on opportunities to clarify guidance and minimize sources of potential confusion for the lobbying community. Given the ongoing difficulties with compliance, GAO continues to believe that such an entity could be useful to the lobbying community.
In 2001, the President announced his management agenda for making the government more focused on citizens and results, which included expanding Electronic Government (E-Government). The President’s E-Government Strategy identified several governmentwide initiatives with a goal of eliminating redundant systems and significantly improving the government’s quality of customer service for citizens and businesses. The expected results of the E-Government initiative include providing high-quality customer service regardless of whether a citizen contacts an agency by phone, in person, or on the World Wide Web. The E-Government Act of 2002 codified the President’s E-Government initiatives and expanded OMB’s leadership role by establishing the Office of E-Government and Information Technology within OMB. The act also requires that agencies comply with OMB E-Guidance. One of the 24 presidential E-Government initiatives is developing and deploying governmentwide citizen customer service using industry best practices that will provide citizens with timely, consistent responses about government information and services via e-mail, telephone, Internet, and publications. By congressional direction, OMB also is responsible for establishing and issuing governmentwide guidelines to federal agencies for ensuring the quality of the information disseminated to the public. In response to this direction, OMB issued guidance to agencies in February 2002 that defined the quality of information to include accuracy as one of its fundamental elements and directed agencies to develop procedures for reviewing and substantiating the quality of their information before dissemination. Contact centers are one method agencies use to disseminate information to the public. In the past, public inquiries to the government were often made by telephone and thus federal agencies began establishing call centers.
With evolving technology, citizen inquiries to the government now come through various channels such as e-mails, Web-based forms, facsimiles, Web chat rooms, and traditional postal mail. As a result, agencies have established multichannel contact centers to handle these inquiries. Contact centers rely on automated and live telephone response systems, Web site technologies, and trained customer service representatives to provide information to the public. For contractor-operated contact centers, the agency typically provides either scripted responses or the content from which the contractor creates its own scripted responses. The scripts are used for the prerecorded telephone response systems, Web pages, and preformatted responses given by the customer service representatives. Contact centers are staffed in tiers by generalist or specialist representatives or a combination of both. Usually, Tier 1 staff handle general information inquiries and direct more complex or personal issues to specialized Tier 2 or Tier 3 staff or to the agency’s subject matter experts. One method for obtaining information on the contact centers that are operated by contractors on behalf of the government is to review data from FPDS. FPDS is used to report individual procurement transactions, which include the industrial classification of the goods and services procured by the federal government. FPDS was implemented by OMB’s Office of Federal Procurement Policy (OFPP) in 1978 in response to the Office of Federal Procurement Policy Act of 1974 requirement to establish a system for collecting and developing information about federal procurement contracts. Since 1982, GSA has administered FPDS on OFPP’s behalf. In 2003, the system was revised and is now called FPDS-Next Generation.
A wide range of users, including those within the executive and legislative branches, rely on FPDS data for information on agency contracting actions, governmentwide procurement trends, and achievement of goals related to small business. The six agencies we reviewed emphasized accuracy of contact center information to varying degrees through the quality assurance mechanisms of their contracts and various oversight practices. Four of the six included a specific metric to measure contractor performance related to providing accurate information to the public, but only two of the six used all four of the oversight practices we identified—such as actively monitoring contacts—to ensure that accurate information is provided to the public. Each of the six agencies we reviewed specified key performance metrics that its contractor is required to meet. These performance metrics define the minimum level of quality acceptable to the agency and provide the basis against which the contractor is to be evaluated. We found that four of the six agencies’ contracts included accuracy of information in one or more of the key performance metrics. The remaining two agencies did not have specific metrics that addressed the need to provide accurate information to the public. Table 2 summarizes the key performance metrics specified in the contracts we reviewed and indicates through shading those that specifically address providing accurate information. The Federal Acquisition Regulation requires agencies to perform and document oversight of their contractors’ performance to ensure the government receives high-quality services as specified in the contract. Agency oversight provides quality assurance independent of the contractors’ own quality control processes. Although each agency employed some oversight practices, only two of the six agencies we reviewed used all four of the oversight practices we identified for ensuring that accurate information is provided to the public.
Each agency emphasized accuracy of information to varying degrees within its practices. On the basis of our review of industry contact center practices and the practices employed by the agencies considered leaders in government contact centers, we identified four agency oversight practices related to ensuring that accurate information is provided to the public via a contractor-operated contact center. Table 3 describes the four accuracy- related oversight practices. The first two practices, knowledge database management and agency contact monitoring, provide direct oversight regarding accuracy of information, because they focus on detecting inaccuracies in the source information used to provide responses to the public and in the actual responses provided by the customer service representatives. The remaining two practices, customer satisfaction surveys and validation of contractor-prepared reports, are more indirect methods of ensuring accuracy in that they review customers’ reactions to the information provided and independent agency corroboration of the contractor’s reporting on its own quality procedures. The agencies we reviewed varied with respect to how they implemented these practices. This variance was due to a number of factors, such as differences among the agencies in staffing levels, funding, and the use of guidance specific to the agency. Table 4 shows the extent to which the six agencies we reviewed employ each of the accuracy-related oversight practices. Most of the agencies we reviewed had a structured process for ensuring accurate information is maintained in the knowledge database. DOL, Education, GSA, and USPS approve contractor-developed information that is created based on government-provided materials. These agencies then perform periodic reviews of the information in the knowledge database. 
CDC currently prepares all scripted responses and Web site information, which the contractor is required to use, and plans to implement annual reviews of the knowledge database, starting at the first anniversary of operation in February 2006. TMA allows the contractor to develop information based on material TMA provides, but does not review the information used by the contractor to respond to public inquiries. DOD said that TMA relies on the expertise and skills of its contractor to provide the required services. Almost all of the agencies we reviewed perform regular monitoring of the contractor’s responses to the public to help assess whether accurate information is provided. CDC, DOL, Education, GSA, and USPS each monitor a number of contacts on a regular basis, although accuracy of information is addressed to varying degrees in the score sheets. For example, accuracy is clearly weighted as an important aspect of the call in CDC’s score sheet. Therefore, if an inaccurate answer is provided, the contractor “fails” for that call and the customer service representative is counseled. On the other hand, Education’s score sheet does not clearly weight accuracy of information. Education and its contractor staff could not explain how providing inaccurate information on a call would be indicated on the monitoring score sheet. In addition to giving different weights to accuracy, the five agencies also vary in terms of the frequency with which they monitor their contacts. Education and USPS each employ one full-time staff member to monitor a selection of the contact centers’ contacts. CDC has a third-party contractor monitor the contact center on a daily basis and uses this assessment in the determination of the contractor’s award fee. GSA staff monitor a sample of calls on a weekly basis and started performing quarterly audits of the contractor’s monitoring efforts in November 2005.
The sixth center, TMA, only monitors calls on an ad hoc basis when officials visit the contact center. Three of the agencies we reviewed conduct customer satisfaction surveys subsequent to the initial contact from an individual. GSA, TMA, and USPS conduct customer satisfaction surveys, which ask, to limited degrees, questions that address the accuracy of information provided. While providing some level of insight regarding accuracy, customer surveys may not always provide a valid basis for oversight of the accuracy of information, since they usually ask the individual’s opinion on the service provided. If the survey is conducted too close to the time of the inquiry, the individual may not have had time to act upon the information to know whether it is accurate or not. CDC plans to implement three types of postcontact customer satisfaction surveys through a third-party contractor beginning in June 2006. DOL does not conduct postcontact surveys because it does not maintain personal information on the individuals who contact the agency. Three of the six agencies we reviewed take steps to validate the information in the contractor-prepared reports related to contact center performance. These reports generally include some aspects related to accuracy of information provided to the public, such as the contractor’s results of its monitoring of contacts. CDC and USPS validate to some degree the reports provided by the contractor. GSA conducts quarterly audits of its contractor’s supporting data. Although DOL, Education, and TMA review their contractor reports, they rely upon the reports without validation. GAO’s standards for internal control in the federal government call for agencies to validate the performance reports provided by the contractor to ensure the information is valid.
The federal government does not have comprehensive, centralized guidance for operating a contact center or for overseeing a contractor-operated center. Although operation and oversight of contact centers are the responsibility of individual agencies, GSA, in consultation with OMB, determined that governmentwide standards would be useful. GSA sponsored an interagency committee that recently provided draft guidelines for operating federal contact centers to OMB and other federal agencies. However, OMB told us it does not plan to issue any governmentwide guidance based on the committee’s recommended guidelines at this time, because OMB has not identified the operation of contact centers as an area of concern. Furthermore, until recently, no governmentwide information specific to contact centers had been collected. Initial attempts to gather governmentwide information about the number and type of activities that agencies use to provide public information proved to be inadequate for providing a comprehensive governmentwide view of contact centers. In addition, officials from the agencies we reviewed told us that no industry classification code in FPDS currently covers the full range of services provided by a contact center. In its 2004 report on the electronic government initiative, OMB highlighted the importance of delivering timely and accurate information to the public and stated that there are opportunities to apply existing and emerging best practices to achieve increases in productivity and delivery of services and information. To date, however, OMB has established only limited guidance on preferred practices at contact centers. The only OMB guidance we found that specifically related to contact centers is focused on the use of performance-based contracting for such services. This guidance is dated and limited in its coverage and does not provide guidance on performance metrics for contact centers or oversight practices.
Because of the need for governmentwide standards for operating contact centers, GSA, in consultation with OMB, took the initiative to form an interagency working group to propose guidelines to OMB and other federal agencies. Formed in March 2005, the Citizen Service Levels Interagency Committee is composed of 58 contact service representatives from 33 executive branch agencies. In addition to drawing on its members' experience in running contact centers, the committee had a contractor perform two studies to provide insight on citizens' expectations when contacting government agencies for information and on current industry metrics, benchmarks, and best practices for operating contact centers. The committee submitted a report with 37 proposed standards for operating contact centers to OMB in September 2005, including four standards specifically related to ensuring accuracy. The committee plans to continue to work on additional contact center issues and to help agencies implement any contact center standards that OMB might endorse. In October 2005, OMB officials stated that they had reviewed the committee's report but did not plan to issue any governmentwide guidance based on the committee's recommended guidelines at this time, because OMB has not identified the operation of contact centers as an area of concern. OMB stated further that if agencies need additional guidance in developing their standards, they can refer to the committee's report. The agencies we reviewed each performed independent research to develop their contracts and formulate a management strategy for operating their contact centers. This independent research duplicated effort across agencies, consuming limited resources and valuable time. For example, to develop guidance, the Department of Education performed market research, worked with a contractor on customer services and related standards, and studied industry best practices on a limited basis.
Similarly, CDC sent out a request for information to industry to gain insight on the technology available for operating contact centers before it developed its contract. CDC then performed market research, reviewed industry practices, and visited other government contact centers, such as those of the Social Security Administration and the Centers for Medicare and Medicaid Services, to learn about the practices of government- and contractor-operated centers. In 2004, GSA created a multiple-award contract called FirstContact to assist agencies in contracting for contact center services. Under this multiple-award contract, agencies can issue a task order to any of five preapproved contractors to operate a contact center. Using FirstContact minimizes the time and effort agencies must spend locating a contractor to manage their centers. To date, the Department of Homeland Security's Federal Emergency Management Agency, GSA, and the Department of Health and Human Services have placed six task orders against this contract, and three other agencies are looking to place orders as well. For example, the Federal Emergency Management Agency recently used this contract vehicle to quickly provide contact center services for the influx of calls and applications for government assistance in the aftermath of Hurricane Katrina. Although agencies now have this multiple-award contract as a mechanism to assist them in contracting for contact center services, they still must develop performance metrics and oversight practices specific to their centers. Given the lack of governmentwide information on the activities that provide information to the public, OMB made an initial attempt to collect such data in 2004.
For this effort, OMB issued a data request to all executive branch agencies to obtain basic data, such as the contact center name, volume of contacts, and whether performance and cost metrics are collected, for every activity that provides information directly to the public. OMB normally uses data requests as a census tool to acquire a snapshot of the current budget environment of the government. Agencies responded by self-identifying over 1,800 activities that currently provide information to citizens using various communication channels such as telephone, e-mail, and Internet Web sites. The individual activities identified ranged in size from a couple of employees who answer telephone calls as part of their duties to contact centers with staffs of several hundred employees who handle millions of inquiries through several channels. Of the 1,800 activities identified, over 500 categorized themselves as contact centers. Because it was making a nonstandard data request, OMB performed little follow-up on nonresponding agencies and did not verify reported results. We noted that some large agencies, such as DOD, did not report any activities that provide information to the public. In an effort to expand on the information collected through the OMB data request, GSA surveyed 360 activities—approximately a quarter of those that responded to the data request—to develop a baseline snapshot of governmentwide activities providing information to the public. However, GSA's survey methodology was flawed: the agency selected its sample from an incomplete universe, had a low survey response rate, and did not perform a nonresponder analysis. Thus, the survey results did not provide a representative view of activities across the government. GSA plans to conduct a follow-up survey of government activities in 2007.
While OMB and GSA information regarding the universe of federal contact centers is incomplete, another potential source of information on those contact centers that are contracted out by the government is the FPDS. Since its inception in 1978, the FPDS has served as the governmentwide system for collecting federal procurement data. Five of the six agencies we reviewed, however, each used a different code to report their contact center procurement actions to FPDS. Officials from agencies we reviewed told us that no current North American Industry Classification System (NAICS) code covers the full range of services provided by a contact center. Table 5 lists the NAICS codes used by the five agencies reporting data to FPDS. The five agencies that reported to FPDS chose different NAICS codes for different reasons. Officials from DOL and GSA stated that they chose alternative NAICS codes because the definition provided for telephone call center does not cover all of the activities handled in a contact center. Education and DOD-TMA officials explained that they chose NAICS codes that encompassed the main work of the contract, since the contact center is only a portion of the work in a contract for a larger program. CDC chose its NAICS code based upon the information technology services required for creating its contact center. No governmentwide procurement information was reported to FPDS using the NAICS codes for telephone call centers in fiscal years 2000 through 2004. This category of NAICS codes—56142—is defined as establishments primarily engaged in answering telephone calls and relaying messages or in telemarketing activities. Although officials from three of the agencies we reviewed expressed the opinion that the definition for telephone call centers is too narrow to encompass all the work performed by a contact center, OMB told us that the telephone call center code is the correct code to use. 
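To illustrate the reporting gap described above, the sketch below (using hypothetical contract records, not actual FPDS data) shows how a query limited to the 56142 telephone call center code family would miss contact center work that agencies reported under other NAICS codes:

```python
# Illustrative sketch only: all contract records below are hypothetical.
# It shows why a query restricted to the telephone call center NAICS
# family (56142x) misses contact center contracts reported under
# other codes, such as IT services or program support.

contract_actions = [
    {"agency": "Agency A", "naics": "561422", "desc": "call center services"},
    {"agency": "Agency B", "naics": "541512", "desc": "IT services incl. contact center"},
    {"agency": "Agency C", "naics": "541611", "desc": "program support incl. contact center"},
]

def is_call_center_code(naics: str) -> bool:
    """True if the NAICS code falls under 56142 (telephone call centers)."""
    return naics.startswith("56142")

matched = [a for a in contract_actions if is_call_center_code(a["naics"])]
missed = [a for a in contract_actions if not is_call_center_code(a["naics"])]

print(len(matched))  # → 1 (only one of three contact center actions found)
print(len(missed))   # → 2 (invisible to a 56142-only query)
```

Consistent use of a single code family is what would make an FPDS query like this a reliable count of governmentwide contact center spending.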
Specifically, OMB stated that the subcode 561422—telemarketing bureaus—was written with the intent to cover all the functions of a contact center. OMB is considering issuing a clarification to the description of the 56142 codes to explain that these codes include more than telephones—such as Web sites, e-mails, facsimiles, and so forth—in its next update to the NAICS manual in 2012. Providing timely and accurate information is a key result area for the federal government. Federal agencies that use contractor-operated contact centers to meet the public's demand for information assume the burden of ensuring that the information provided by the contractors is accurate. While the agencies we reviewed have taken a variety of steps to ensure that their contractor-operated contact centers address accuracy, at some agencies accuracy clearly does not have the same priority as other objectives, such as timeliness. Although agencies need flexibility in meeting the needs of the individuals who contact them, they also can benefit from the experience gained by their peers operating other government contact centers. Short of mandating specific guidance, increased sharing among agencies of successful practices for managing contact centers may help improve their ability to write and oversee contracts for these centers and may avoid needless duplication of effort. The guidelines proposed by the Citizen Service Levels Interagency Committee are a step in this direction. However, whether this effort will result in information sharing across agencies is uncertain. Leveraging knowledge gained by other agencies through the sharing of effective practices could be enhanced by governmentwide leadership.
OMB's leadership of the electronic government initiative, its role in guiding agency dissemination of public information, and its procurement policy role put it in an ideal position to facilitate the exchange of information among agencies to ensure effective oversight of contractors in meeting the public's need for timely and accurate information. While OMB and GSA have taken initial steps to enhance the oversight of federal contact centers by gathering some information on the universe of these centers, it is not clear whether the data collected provide enough information for governmentwide oversight of contact center operations or whether GSA's planned data collection efforts will do so. With additional reliable information, OMB may be able to more quickly identify and act on emerging problems and opportunities. In addition, FPDS can be more effective in identifying the number of contracts and dollars obligated for contact centers across the government, but only if agencies consistently use the appropriate NAICS code for these services. To facilitate the sharing of sound oversight practices for the operation of contact centers, to help ensure that providing accurate information to the public by contact centers is a priority outcome, and to improve the quality of information gathered about these centers, we recommend that the Director of the Office of Management and Budget take the following actions: Building on efforts begun by the GSA-sponsored interagency committee, work with agencies to develop a mechanism for sharing performance metrics and oversight practices for contact centers. Continued efforts should stress that providing accurate information to the public needs to be a key factor in the oversight of federal contact centers.
Take steps to ensure consistent reporting on contact centers by developing an industry category or specific code definition in NAICS that encompasses all the services provided by contact centers or by providing further instruction to agencies regarding the appropriate NAICS code to use for contact centers. To improve the quality of information about federal contact centers, we recommend that the Administrator of General Services take the following action: Ensure that further efforts to develop governmentwide data on contact center operations—such as the survey planned for next year—employ sound methodologies so that the resulting information is representative of activities across the government. We requested comments on a draft of this report from the Office of Management and Budget and each of the six agencies we reviewed—Department of Defense, Department of Education, Department of Health and Human Services, Department of Labor, General Services Administration, and U.S. Postal Service. The Office of Management and Budget provided oral comments in which it concurred with our findings and recommendations. The Department of Defense, Department of Health and Human Services, and the General Services Administration provided written comments that are reproduced in appendices III, IV, and V, respectively. OMB and most of the agencies also provided technical comments, which we incorporated as appropriate. The Department of Health and Human Services and the General Services Administration also concurred with our findings and recommendations. The Department of Defense did not concur with our draft report because it believes the report does not fully reflect all the metrics and practices DOD and its contractor use to ensure the accuracy of information provided to TRICARE beneficiaries.
In its comments, DOD emphasizes that its approach to contracting for contact center operations relies on the contractor to use industry standards for ensuring information accuracy. DOD states that its contract contains standards related to the accuracy of information provided by telephone. DOD also cites additional metrics it uses for monitoring contractor performance. In addition, DOD requires the contractor to have a quality management program, which must be validated by a nationally recognized third-party organization. DOD points out that it receives monthly briefings on the operation of the contractor's quality management program and observes call center operations during site visits. Finally, DOD explained that it monitors the expertise and skills of the contractor staff that perform the knowledge management function. We recognize that DOD has decided to use what it calls the "audit the auditor" approach to quality assurance. It was not our objective, however, to assess the merits of any particular approach to ensuring quality, but rather to determine the extent to which contract terms and agency oversight practices emphasize the importance of providing accurate information to the public. In this regard, while the contractor may use specific standards for accuracy in its quality management program, we found no specific metric related to accuracy in the TRICARE contact center contract itself or in the additional metrics cited in DOD's comments. For the most part, the additional quality control activities listed by the Department are those of its contractor, not oversight activities performed by the agency, which was the focus of our review. While independent validation of the contractor's quality control program helps to ensure the contractor has a quality process in place for monitoring its responses to the public, this does not substitute for DOD oversight activities such as validating the contractor's reports of its monitoring efforts.
In addition, while DOD performs site visits to oversee the contractor's operations, it does so only on an ad hoc basis. Based on DOD's comments, we added language to the report regarding DOD's approach to knowledge management. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies of this report to the Director of the Office of Management and Budget, the Administrator of General Services, the Postmaster General, and the Secretaries of the Department of Defense, Department of Education, Department of Health and Human Services, and Department of Labor. We will also make copies available to others upon request. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. This report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-4841. An additional GAO contact and staff who made contributions to this report are listed in appendix VI. To assess the guidance provided to federal agencies and the information gathered by the federal government about contact centers, we conducted interviews with Office of Management and Budget (OMB) and General Services Administration (GSA) officials and reviewed related guidance and the results of their initial data collection efforts. We researched and discussed with OMB the absence of guidance related to the operation and oversight of contact centers. We also discussed the results of OMB's 2004 request for information to federal agencies that asked for self-identification of any activities that provide information to the public. In addition, we reviewed and discussed the results of GSA's survey of a sample of agency activities that responded to OMB's request.
We did not assess the validity of the data gathered by OMB and GSA. However, a GAO methodologist reviewed the GSA survey methodology and identified its weaknesses. In addition, we monitored the progress of the GSA-sponsored working group—the Citizen Service Levels Interagency Committee—as it developed and recommended standards to OMB for federal contact centers. We did not assess the committee’s recommendations as a whole, but rather reviewed how accuracy of information was addressed within its proposed standards. We reviewed data from the Federal Procurement Data System for the past 5 fiscal years to determine if any contract actions were reported using the code for telephone call center services. To describe federal agencies’ efforts to ensure accurate information is provided to the public by contractor-operated centers, we reviewed the contract terms and oversight activities for one center at each of six agencies. We selected centers that handle over 1 million inquiries annually and provide information to citizens that could significantly affect their finances, health, or safety. 
The contact centers selected for our review are:

Department of Defense TRICARE Management Activity (TMA) North region—Healthnet's contact center: provides general and personalized medical benefit and coverage information and processes enrollments and claims for military families in the North region;

Department of Education (Education)—Federal Student Aid Information Center: provides general information about applications and loan issues and personalized information on the status of applications and loans to the public and academic community;

Department of Health and Human Services' Centers for Disease Control and Prevention (CDC)—CDC INFO contact center: provides information about health and safety issues—including prevention, detection, and outbreak control—to the public and medical professionals;

Department of Labor (DOL)—National Contact Center: provides general information and referrals regarding job issues, workplace safety, and pension and health benefits to the public and employers;

General Services Administration—National Contact Center: provides general information and referrals related to any agency or government program; and

U.S. Postal Service (USPS)—National Contact Center: provides general and individualized information on mail delivery and shipping issues to the public and businesses.

To complete our review, we interviewed management and staff responsible for oversight of the contractor-operated contact center at each agency. We reviewed the performance metrics specified in each agency's contract as well as the related reports used to oversee and evaluate the contractors' operation of the contact centers. In addition to conducting discussions with the agencies, we visited four contractor-operated centers to observe their operations and quality control procedures.
Specifically, we visited locations for the GSA center operated by ICT Group, the CDC and Education centers operated by Pearson Government Solutions, and the DOL center operated by Datatrac Information Services. At each center we interviewed management and customer service representatives regarding the oversight practices used to monitor the accuracy of information. We did not test the contractors' internal control procedures or validate any data from their sample reports. We identified industry practices for ensuring the accuracy of information provided by contact centers, interviewed representatives from two major contact center industry groups—the Society of Consumer Affairs Professionals and the Incoming Calls Management Institute—and attended the 2005 Government Customer Support Conference. In addition, we reviewed prior GAO reports concerning contact centers. We also discussed contact center issues with other GAO teams that were currently reviewing or had recently reviewed other federal contact centers. Our work was conducted from February through November 2005 in accordance with generally accepted government auditing standards.

[Appendix table, not fully recoverable from the source text: for each of the six selected contact centers, the table listed the services provided (e.g., medical benefits and coverage issues, enrollment, and claims processing for TMA), hours of operation, languages supported beyond English, fiscal year 2005 contact volume (the TMA center handled an estimated 2.7 million calls when fully operational), total value of the contract at award, contract period (ranging from a 1-year base plus 4 1-year options to a 4-year base plus 6 1-year options, at $254.6 million for the 4-year base), and contract type (such as firm fixed price plus award fee).]

Table notes: DOL provides service 24 hours a day for the Occupational Safety and Health Administration toll-free number and provided service 24 hours a day during hurricane relief efforts; Education extends its hours during student aid application season; GSA provides service 24 hours a day under emergency situations. The CDC contact center is in its second year of operation and is consolidating the work for 40 different toll-free numbers over a total period of 4 years. Where the contact center is a portion of a larger service contract, the value shown is for the entire contract, as the agency could not provide a breakdown of the cost for the contact center alone.

In addition to the contact named above, Ruth Eli DeVan, William McPhail, Jean Lee, David Schilling, Nyankor Matthews, Robert Swierczek, John Krump, Monica Wolford, and Karen O'Conor made key contributions to this report.

Related GAO Products

Improvements Needed to the Federal Procurement Data System-Next Generation. GAO-05-960R. Washington, D.C.: September 27, 2005.

Social Security Administration: Additional Actions Needed in Ongoing Efforts to Improve 800-Number Service. GAO-05-735. Washington, D.C.: August 8, 2005.

Immigration Services: Better Contracting Practices Needed at Call Centers. GAO-05-526. Washington, D.C.: June 30, 2005.

Federal Thrift Savings Plan: Customer Service Practices Adopted by Private Sector Plan Managers Should Be Considered. GAO-05-38. Washington, D.C.: January 18, 2005.

Medicare: Accuracy of Responses from the 1-800-MEDICARE Help Line Should Be Improved. GAO-05-130. Washington, D.C.: December 8, 2004.

Medicare: Call Centers Need to Improve Responses to Policy-Oriented Questions from Providers. GAO-04-669. Washington, D.C.: July 16, 2004.

Reliability of Federal Procurement Data. GAO-04-295R. Washington, D.C.: December 30, 2003.

Medicare: Communications with Physicians Can Be Improved. GAO-02-249. Washington, D.C.: February 27, 2002.

IRS Telephone Assistance: Limited Progress and Missed Opportunities to Analyze Performance in the 2001 Filing Season. GAO-02-212. Washington, D.C.: December 7, 2001.

IRS Telephone Assistance: Quality of Service Mixed in the 2000 Filing Season and below IRS' Long-Term Goal. GAO-01-189. Washington, D.C.: April 6, 2001.

IRS Telephone Assistance: Opportunities to Improve Human Capital Management. GAO-01-144. Washington, D.C.: January 30, 2001.

Customer Service: Human Capital Management at Selected Public and Private Call Centers. GAO/GGD-00-161. Washington, D.C.: August 22, 2000.

Social Security Administration: Information on Monitoring 800 Number Telephone Calls. GAO/HEHS-98-56R. Washington, D.C.: December 8, 1997.

Social Security Administration: More Cost-Effective Approaches Exist to Further Improve 800-Number Service. GAO/HEHS-97-79. Washington, D.C.: June 11, 1997.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Federal agencies have increasingly relied on contact centers--centers handling inquiries via multiple channels such as telephone, Web page, e-mail, and postal mail--as a key means of communicating with the public. Many of these centers are contractor-operated. Concerns exist about the accuracy of responses provided through contractor-operated centers. This report examines (1) the extent to which the contract terms and oversight practices for contact centers at selected agencies emphasize the importance of providing accurate information to the public, and (2) whether guidance for the operation of contact centers and basic information needed to provide general oversight exist. GAO reviewed one contractor-operated contact center at each of six agencies: the Centers for Disease Control and Prevention (CDC), General Services Administration (GSA), U.S. Postal Service (USPS), and the Departments of Defense, Labor, and Education (DOD, DOL, and Education). The contracts and oversight practices for the contact centers of the six agencies reviewed, which handle millions of inquiries annually, varied significantly regarding the emphasis they placed on providing accurate information to the public. Although federal policy for disseminating information to the public specifically emphasizes accuracy, only four of the six agencies include accuracy as a performance metric in their contracts. With respect to oversight, only two of the six agencies used all four of the accuracy-related oversight practices we identified--regular knowledge database reviews, regular contact monitoring, postcontact customer satisfaction surveys, and validation of contractor reports. Although each agency used some form of oversight to assess the accuracy of the information provided by its contact center, each agency differed regarding how it implemented these practices. 
No governmentwide guidance or standards exist for operating contact centers—including guidance on specifying accuracy as a contract performance metric or as a key focus for oversight. Some agencies indicated that, had federal guidance been available, it would have helped them establish performance indicators and develop oversight policies and practices. Recognizing the need for operational standards for contact centers, an interagency working group recently proposed draft guidelines to OMB and other federal agencies, but OMB has no plans to issue these guidelines or any standards for use by agencies. Additionally, until recently the federal government had not collected data on the universe of federal contact centers. OMB and GSA attempted to collect data on the number, types, and costs of federal contact centers in 2004, but the data collected were incomplete. In addition, no governmentwide procurement information was reported to the Federal Procurement Data System (FPDS) in fiscal years 2000 through 2004 using the reporting code for telephone call centers, which OMB said is the appropriate code for contact centers. The five agencies we reviewed that report data to FPDS used a variety of different codes, some because they believe that the telephone call center code is too narrow to cover the services of their multichannel contact centers.
Our work has identified several challenges related to U.S. efforts in Afghanistan. Among those we highlighted in our 2013 key issues report are a dangerous security environment, the prevalence of corruption, and the limited capacity of the Afghan government to deliver services and sustain donor-funded projects. Dangerous security environment. Afghanistan's security environment continues to challenge the efforts of the Afghan government and international community. This is a key issue that we noted in 2007, when we reported that deteriorating security was an obstacle to the U.S. government's major areas of focus in Afghanistan. In December 2009, the U.S. and coalition partners deployed additional troops to disrupt and defeat extremists in Afghanistan. While the security situation in Afghanistan has improved, as measured by enemy-initiated attacks on U.S. and coalition forces, Afghan security forces, and non-combatants, including Afghan civilians, the number of daily enemy-initiated attacks remains relatively high compared to the number of such attacks before 2009. In 2012, attacks on ANSF surpassed attacks on U.S. and coalition forces (see fig. 2). Prevalence of corruption in Afghanistan. Corruption in Afghanistan continues to undermine security and Afghan citizens' belief in their government and has raised concerns about the effective and efficient use of U.S. funds. We noted in 2009 that, according to the Afghan National Development Strategy, pervasive corruption exacerbated the Afghan government's capacity problems and that the sudden influx of donor money into a system already suffering from poor procurement practices had increased the risk of corruption and waste of resources. According to Transparency International's 2013 Corruption Perception Index, Afghanistan is ranked at the bottom of countries worldwide.
In February 2014, the Afghan President dissolved the Afghan Public Protection Force, which was responsible for providing security intended to protect people, infrastructure, facilities, and construction projects. DOD had reported major corruption concerns within the Afghan Public Protection Force. Limited Afghan capacity. While we have reported that the Afghan government has increased its generation of revenue, it remains heavily reliant on the United States and other international donors to fund its public expenditures and continued reconstruction efforts. In 2011, we reported that Afghanistan's domestic revenues funded only about 10 percent of its estimated total public expenditures. We have repeatedly raised concerns about Afghanistan's inability to sustain and maintain donor-funded projects and programs, putting U.S. investments over the last decade at risk. DOD reported in November 2013 that Afghanistan remains donor dependent. These persistent challenges are likely to play an even larger role in U.S. efforts within Afghanistan as combat forces continue to withdraw through the end of 2014. The United States, along with the international community, has focused its efforts in areas such as building the capacity of Afghan ministries to govern and deliver services, developing Afghanistan's infrastructure and economy, and developing and sustaining ANSF. In multiple reviews of these efforts, we have identified numerous shortcomings and have made recommendations to the agencies to take corrective actions related to (1) mitigating the risk of providing direct assistance to the Afghan government, (2) oversight and accountability of U.S. development projects, and (3) estimating the future costs of ANSF. In 2010, the United States pledged to provide at least 50 percent of its development aid directly through the Afghan government budget within 2 years.
This direct assistance was intended to help develop the capacity of Afghan government ministries to manage programs and funds. In the first year of the pledge, through bilateral agreements and multilateral trust funds, the United States more than tripled its direct assistance awards to Afghanistan, growing from over $470 million in fiscal year 2009 to over $1.4 billion in fiscal year 2010. For fiscal year 2013, USAID provided about $900 million of its Afghanistan mission funds in direct assistance. In 2011 and 2013, we reported that while USAID had established and generally complied with various financial and other controls in its direct assistance agreements, it had not always assessed the risks of providing direct assistance before awarding funds. Although USAID has taken some steps in response to our recommendations to help ensure the accountability of direct assistance funds provided to the Afghan government, we have subsequently learned from a Special Inspector General for Afghanistan Reconstruction (SIGAR) report that USAID may have approved direct assistance to some Afghan ministries without mitigating all identified risks. Since 2002, U.S. agencies have allocated over $23 billion toward governance and development projects in Afghanistan through USAID, DOD, and State. The agencies have undertaken thousands of development activities in Afghanistan through multiple programs and funding accounts. We have previously reported on systemic weaknesses in the monitoring and evaluation of U.S. development projects as well as the need for a comprehensive shared database that would account for all U.S. development efforts in Afghanistan (see table 1). With respect to monitoring and evaluation, although USAID collected progress reports from implementing partners for agriculture and water projects, our past work found that it did not always analyze and interpret project performance data to inform future decisions.
USAID has undertaken some efforts in response to our recommendations to improve its monitoring and evaluation of the billions of dollars invested toward development projects in Afghanistan. We and other oversight agencies, however, have learned that USAID continued to apply performance management procedures inconsistently, fell short in maintaining institutional knowledge, and still needed to strengthen its oversight of contractors. For example, in February 2014, we reported that USAID identified improvements needed in its oversight and management of contractors in Afghanistan, including increasing the submission of contractor performance evaluations. We also found that USAID may have missed opportunities to leverage its institutional knowledge, and have recently recommended that USAID further assess its procedures and practices related to contingency contracting. (See GAO, Afghanistan Reconstruction: Progress Made in Constructing Roads, but Assessments for Determining Impact and a Sustainable Maintenance Program Are Needed, GAO-08-689 (Washington, D.C.: July 8, 2008).) Regarding the need for a comprehensive database of U.S. development projects in Afghanistan, in 2012 we suggested that Congress consider requiring U.S. agencies to report information in a shared comprehensive database. Since 2002, the United States, with assistance from coalition nations, has worked to build, train, and equip ANSF so that the Afghan government could lead the security effort in Afghanistan. U.S. agencies have allocated over $62 billion to support Afghanistan’s security, including efforts to build and sustain ANSF, from fiscal years 2002 through 2013. This has been the largest portion of U.S. assistance in Afghanistan. The United States and the international community have pledged to continue to assist in financing the sustainment of ANSF beyond 2014. In April 2012, we reported concerns regarding the need to be transparent in disclosing the long-term cost of sustaining ANSF beyond 2014. 
DOD initially objected to such disclosure, noting that ANSF cost estimates depend on a constantly changing operational environment and that it provided annual cost information to Congress through briefings and testimonies. Based on our analysis of DOD data, we estimated that the cost of continuing to support ANSF from 2014 through 2017 will be over $18 billion, raising concerns about ANSF’s sustainability. Furthermore, we reported, on the basis of projections of U.S. and other donor support for ANSF, that there will be an estimated gap each year of $600 million from 2015 through 2017 between ANSF costs and donor pledges if additional contributions are not made. We previously noted in 2005 and 2008 that DOD should report to Congress about the estimated long-term cost to sustain ANSF. In 2008, Congress mandated that DOD take such steps. In 2012, we once again reported that DOD had not provided estimates of the long-term ANSF costs to Congress. Subsequently, in a November 2013 report to Congress on its efforts in Afghanistan, DOD included a section on the budget for ANSF and reported the expected size of ANSF to be 230,000 with an estimated annual budget of $4.1 billion. In February 2013, we reported that while the circumstances in Iraq differ from those in Afghanistan, potential lessons could be learned from the transition from a military- to a civilian-led presence to avoid possible missteps and better utilize resources. As we have reported, contingency planning is critical to a successful transition and to ensuring that there is sufficient oversight of the U.S. investment in Afghanistan. This is particularly vital given the uncertainties of the U.S.-Afghanistan Bilateral Security Agreement and the post-2014 presence. 
While the circumstances, combat operations, and diplomatic efforts in Iraq differ from those in Afghanistan, potential lessons can be learned from the transition from a military to civilian-led presence in Iraq and applied to Afghanistan to avoid possible missteps and better utilize resources. In Iraq, State and DOD had to revise their plans for the U.S. presence from more than 16,000 personnel at 14 sites down to 11,500 personnel at 11 sites after the transition had begun—in part because the United States did not obtain the Government of Iraq’s commitment to the planned U.S. presence. Given these reductions, we found that State was projected to have an unobligated balance of between about $1.7 billion and about $2.3 billion in its Iraq operations budget at the end of fiscal year 2013, which we brought to the attention of Congressional appropriators. As a result, $1.1 billion was rescinded from State’s Diplomatic and Consular Programs account. According to DOD officials, U.S. Forces-Iraq planning assumed that a follow-on U.S. military force would be approved by both governments. The decision not to have a follow-on force led to a reassessment of State and DOD’s plans and presence. In April 2014, we reported that State planned for the U.S. footprint in Afghanistan to consist of the U.S. Embassy in Kabul, with additional representation at other locations as security and resources allow. In a review still under way, we are examining the status of U.S. civilian agencies’ plans for their presence in Afghanistan after the scheduled end of the U.S. combat mission on December 31, 2014, and how changes to the military presence will affect the post-2014 U.S. civilian presence. We have found that State plans to provide some critical support services to U.S. civilian personnel after the transition, but is planning to rely on DOD for certain other services. We plan to report in July 2014 on the anticipated size, locations, and cost of the post-2014 U.S. 
civilian presence, the planned division of critical support responsibilities between State and DOD, and how pending decisions regarding the post-2014 U.S. and coalition military presence will affect the U.S. civilian presence. In closing, the President announced in May 2014 that the United States intends to maintain a military presence in Afghanistan through the end of 2016, stationing about 10,000 military personnel in Afghanistan with two narrow missions: to continue supporting ANSF training efforts and to continue supporting counterterrorism operations against the remnants of al Qaeda. Simultaneously, the President announced that the embassy would be reduced to a “normal” presence. At the same time, the United States has made commitments to continue providing billions of dollars to Afghanistan over the next 2 years. These recently announced plans underscore the bottom line of my message today: continued oversight of U.S. agencies is required to ensure the challenges they face are properly mitigated in Afghanistan and that there is oversight and accountability of U.S. taxpayer funds. Chairman Ros-Lehtinen, Ranking Member Deutch, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information on this statement, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Hynek Kalkus (Assistant Director), David Dayton, Anne DeCecco, Mark Dowling, Brandon Hunt, Christopher J. Mulkins, Kendal Robinson, and Amie Steele. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. government has engaged in multiple efforts in Afghanistan since declaring a global war on terrorism that targeted al Qaeda, its affiliates, and other violent extremists, including certain elements of the Taliban. These efforts have focused on a whole-of-government approach that calls for the use of all elements of U.S. national power to disrupt, dismantle, and defeat al Qaeda and its affiliates and prevent their return to Afghanistan. This approach, in addition to security assistance, provided billions toward governance and development, diplomatic operations, and humanitarian assistance. To assist Congress in its oversight, GAO has issued over 70 products since 2003 addressing key oversight issues related to U.S. efforts in Afghanistan. This testimony summarizes the key findings from those products and discusses: (1) the challenges associated with operating in Afghanistan, (2) key oversight and accountability issues regarding U.S. efforts in Afghanistan, and (3) the need for contingency planning as the United States transitions to a civilian-led presence in Afghanistan. Since 2003, GAO has identified numerous challenges related to U.S. efforts in Afghanistan. Among the various challenges that GAO and others have identified are the following: the dangerous security environment, the prevalence of corruption, and the limited capacity of the Afghan government to deliver services and sustain donor-funded projects. As illustrated in the figure below, between fiscal years 2002 and 2013, U.S. agencies allocated nearly $100 billion toward U.S. efforts in Afghanistan. The United States, along with the international community, has focused its efforts in areas such as building the capacity of Afghan ministries to govern and deliver services, developing Afghanistan's infrastructure and economy, and developing and sustaining the Afghan National Security Forces. 
In multiple reviews of these efforts, GAO has identified numerous shortcomings and has made recommendations to the agencies to take corrective actions related to (1) mitigating the risk of providing direct assistance to the Afghan government, (2) oversight and accountability of U.S. development projects, and (3) estimating the future costs of sustaining Afghanistan's security forces, which the United States and international community have pledged to support. In February 2013, GAO reported that while the circumstances, combat operations, and diplomatic efforts in Iraq differ from those in Afghanistan, potential lessons could be learned from the transition from a military- to a civilian-led presence to avoid possible missteps and better utilize resources. As GAO has reported, contingency planning is critical to a successful transition and to ensuring that there is sufficient oversight of the U.S. investment in Afghanistan. This is particularly vital given the uncertainties of the U.S.-Afghanistan Bilateral Security Agreement and the ultimate size of the post-2014 U.S. presence in Afghanistan. While GAO is not making new recommendations, it has made numerous recommendations in prior reports aimed at improving U.S. agencies' oversight and accountability of U.S. funds in Afghanistan. U.S. agencies have generally concurred with these recommendations and have taken or plan to take steps to address them.
The Chief Financial Officers (CFO) Act of 1990 was enacted to address longstanding problems in financial management in the federal government. The act established CFO positions throughout the federal government and mandated that, within each of the largest federal departments and agencies, the CFO oversee all financial management activities relating to the programs and operations of the agency. Among the key responsibilities of CFOs is overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions. Recognizing that a qualified workforce was fundamental to achieving the objectives of the CFO Act and other related management reform legislation aimed at improving federal financial management, the Human Resources Committee of the Chief Financial Officers Council and the Joint Financial Management Improvement Program (JFMIP) have proposed improvements addressing the recruitment, training, retention, and performance of federal financial management personnel. In November 1995, JFMIP published the Framework for Core Competencies for Financial Management Personnel in the Federal Government, designed to highlight the knowledge, skills, and abilities that accountants, budget analysts, and other financial managers in the federal government should possess or develop to perform their functions effectively in accordance with the CFO Act. JFMIP stressed the need for federal government financial managers to be well-equipped to contribute to financial management activities, such as the execution of budgets, under increasingly constrained resource caps and the preparation, analysis, and interpretation of consolidated financial statements. 
A primary goal in this body of work is to obtain and share with DOD information on the formal education, professional work experience, training, and professional certifications of key financial managers in the department, including each of the military services and the Defense Finance and Accounting Service. The objective of this assignment is to provide information on the formal education, professional work experience, training, and professional certifications of personnel serving in key financial management positions in the Air Force. We obtained this information from biographies and profile instruments due to concerns regarding the completeness of personnel data bases and personnel files. We worked with Air Force officials to determine the key financial management positions to be included in this review. These positions typically were comptrollers, deputy comptrollers, and budget officers serving at operational and training commands. In agreement with Air Force officials, we did not verify the information contained in the biographies and profiles provided by the respondents. A more detailed discussion of our scope and methodology, including a description of how we obtained qualifications and work experience data, is in appendix I. We performed our audit work from January through September 1997. The Assistant Secretary of the Air Force (Financial Management and Comptroller) provided comments on a draft of this report. These comments are discussed in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix VII. Table 1 shows the formal education and careers of the Department of the Air Force’s four executives included in our review. All four had attained both bachelor’s and master’s degrees, with majors including accounting, economics, public administration, operations research, public budgeting and finance, and history and political science. 
The Assistant Secretary had spent 7 years at DOD, 19 years at the Congressional Budget Office, and 3 years in the private sector. The three Deputy Assistant Secretaries’ DOD careers ranged from 27 to 31 years. In addition to his 27-year career at DOD, one also spent 2 years at the Department of the Interior and 3 years in the private sector. All four executives have served in financial management-related positions during most of their DOD careers. None held professional certifications. In collaboration with Air Force officials, we identified 204 key financial managers across the Department for this review, of which 173 (or 85 percent) provided information on their qualifications and experience. Respondents included all 10 staff from the Office of the Assistant Secretary of the Air Force (Financial Management and Comptroller)—SAF/FM&C; 106 of 129 staff from four operational commands and their installations; 28 of 36 staff from the Air Education and Training Command and its installations; and all 29 staff from the Air Force Materiel Command and its centers, including 5 air logistics centers responsible for supply and maintenance support and 3 product centers responsible for the research, development, test, and evaluation (RDT&E) and procurement of Air Force aeronautical, electronics, space, and missile systems. The SAF/FM&C respondents performed roles involving financial operations, financial management policy, and/or budget execution. The officials responding from the major commands and installations included 76 comptrollers, 14 deputy comptrollers, 68 budget officers, and 5 working capital fund managers—the last being from the Air Force Materiel Command and its air logistics centers. Of the 173 respondents, almost 70 percent were military officers. Table 2 provides a breakout of the 117 officers by rank and the 56 civilians by grade. 
The officers served mainly as comptrollers and budget officers at major commands and comptrollers at installations, and the civilians most often served in budget officer positions at installations. Over 90 percent of the respondents (all 117 officers and 41 of 56 civilians) reported having attained bachelor’s degrees, and about 75 percent had also attained master’s degrees. Two of the respondents also reported holding doctoral degrees. For bachelor’s degrees held, table 3 shows the number reported in accounting, other business, and nonbusiness majors. About 30 percent of these 158 respondents majored in accounting, while approximately 50 percent had other business-related majors. Six of the respondents reported more than one major. Table 4 shows the majors reported by the 99 officers and 30 civilians holding master’s degrees. While none of the respondents held master’s degrees in accounting, about two-thirds of these staff listed other business-related majors. Four respondents reported holding more than one major. Of the two civilians reporting doctoral degrees, one majored in business administration and the other in law. The key financial managers were also requested to provide information on the number of accounting-related subjects completed as part of their formal education. Of the 173 respondents, 163 had completed one or more of these subjects, as follows: 1-2 subjects: 29 (22 officers and 7 civilians), 3-5 subjects: 55 (37 officers and 18 civilians), and 6 or more subjects: 79 (55 officers and 24 civilians). Included in this latter group were 75 (or about 43 percent of the respondents) who reported completing both principles of accounting and intermediate accounting along with at least 4 other subjects. By completing this level of education in accounting-related subjects, these 75 staff also appear to meet the educational requirements to serve in federal GS-510 accountant positions. 
Figures 1 and 2 show the average number of years of work experience by rank for the officers and by grade for the civilians, respectively. As the figures show, both officer and civilian respondents have spent most of their careers in DOD. About 50 percent of all respondents, officers and civilians, reported performing tasks in several financial management-related functions included in our review throughout their careers. The officers’ careers ranged from 3 to 38 years, averaging 18 years, while the civilians’ careers ranged from 12 to 44 years, averaging 27 years. Officers and civilians at the ranks of first lieutenant and captain and grades of GS-11 and 12 typically served in budget officer positions at installations. In collaboration with DOD officials, we identified five functions and associated tasks which are often performed by personnel serving in key financial management positions, including: financial statement preparation—preparing annual financial statements and footnotes; financial reporting/accounting policy—preparing financial reports and consulting on the application of accounting policy; financial analysis—performing tasks associated with cost accounting, business process improvements, budgeting, cash flow analysis, cost analysis, revenue and expenditure forecasting, and other analysis of financial position and operations; accounting operations—recording and reporting accounting transactions; and accounting systems development and maintenance—performing tasks associated with functional design and maintenance of accounting and finance systems. Fifty-five officers and 28 civilians, or almost one-half of each group, reported that they had performed tasks in 3 or more of these functions during their careers. Figures 3 and 4 show the number of officers and civilians who indicated that they had performed each function and the average number of years of experience in that function. 
For example, as shown in figure 3, 114 of the 117 officers have performed financial analysis-related tasks for an average of 9 years. During 1995 and 1996, about 75 percent of the officers and 80 percent of the civilians reported completing some form of training. Of the 86 officers and 45 civilians receiving training, 9 out of 10 listed general topics, such as computers and supervision, as examples of the training they had completed. Meanwhile, about one-half of both officers and civilians reported completing some training in financial-related topics, while only about 2 out of 10 reported completing training in accounting-related topics, such as accounting standards and financial reporting. Figure 5 shows the type of training completed during the 2-year period as reported by the 173 respondents. As indicated in the figure: total receiving accounting-related training: 26 (18 officers and 8 civilians), total receiving financial-related training: 63 (42 officers and 21 civilians), total receiving training in general topics: 120 (79 officers and 41 civilians), and total not receiving training: 42 (31 officers and 11 civilians). Almost 20 percent of the respondents reported holding financial management-related certifications. Figure 6 shows the numbers and types of professional certifications reported by the Air Force financial managers. Of the 32 respondents holding one or more financial management-related certifications, 6 were CPAs (3 officers and 3 civilians), 6 were CGFMs (3 officers and 3 civilians), and 24 held other financial management-related certifications (11 officers and 13 civilians). Also, 24 staff reported nonfinancial management-related certifications, including 15 officers and 9 civilians. Of the 128 staff that did not hold any professional certifications, 91 were officers and 37 were civilians. 
Appendixes II through VI provide the formal education, professional work experience, training, and professional certification data for the 117 officers and 56 civilians by their respective organizations: SAF/FM&C in appendix II; the 4 operational commands and 51 of their 57 installations in appendix III; the Air Education and Training Command and 13 of its 16 installations in appendix IV; Air Force Materiel Command (AFMC) and its five air logistics centers in appendix V; and AFMC and its three product centers in appendix VI. In commenting on a draft of this report, the Air Force generally concurred with the contents and stated that it believed the information will help its evaluation of military and civilian career programs to ensure Air Force financial managers provide the best possible service to customers. The Air Force expressed concern, however, that parts of the report seemed to overly emphasize the need for accounting courses and training. Regarding the Air Force’s concern, this report presents information on a number of measures relating to the qualifications and experience of key Air Force financial managers, who are serving in positions responsible for the fiscal and budgetary management of the data used to prepare financial reports and statements. As agreed with Air Force officials, information on formal education and training, including accounting training, is among such important measures. As the Air Force response indicates, this information will help the Department evaluate its military and civilian career programs to ensure Air Force financial managers provide the best possible service to customers. The Air Force’s comments are reprinted in appendix VII. Also, the Air Force provided a number of technical comments, which were fully addressed in finalizing our report. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the Subcommittee on Government Management, Information, and Technology of the House Government Reform and Oversight Committee, and to the Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-9095. Major contributors to this report are listed in appendix VIII. In collaboration with Air Force officials, we identified Air Force financial managers to be included in this review as those serving in key positions throughout the department. For the most part, these positions included comptrollers, deputy comptrollers, and budget officers at operational and training commands and their installations. The types of Air Force organizations from which we selected financial managers are similar to those we are reviewing in the other services. In addition to the office of the assistant secretary for financial management for each military service, we are also focusing on operational and training organizations, working capital fund activities, and activities involved in the research, development, test, evaluation, and procurement of major systems. 
In the Air Force, the 208 key financial managers selected for this review included: 4 senior executives in the Office of the Assistant Secretary of the Air Force (Financial Management and Comptroller)—SAF/FM&C, including the Assistant Secretary of the Air Force (Financial Management and Comptroller); Principal Deputy Assistant Secretary of the Air Force (Financial Management and Comptroller); Deputy Assistant Secretary, Financial Operations; and Deputy Assistant Secretary, Budget; 10 SAF/FM&C staff involved in financial operations, financial management policy, and/or budget execution-related functions; and 194 staff serving in comptroller, deputy comptroller, budget officer, and working capital fund manager positions at 87 major commands and installations involved in operations, training, supply and maintenance, and the research, development, test, evaluation, and procurement of aircraft, missiles, and other Air Force systems, such as launch systems, satellites, and communications/electronics. Of the 208 selected Air Force financial managers located at 88 organizations, 177 from 79 of these organizations responded to this review. The respondents included the 4 senior executives, the 10 SAF/FM&C staff, and 163 key staff from major commands and installations, comprising 76 comptrollers, 14 deputy comptrollers, 68 budget officers, and 5 working capital fund managers. Table I.1 identifies the Air Force major commands and the number of their installations and key financial managers included in this review. Also shown for each major command are the number of installations and respondents. The respondents are further identified by position—comptrollers, deputy comptrollers, budget officers, and working capital fund managers. We obtained fiscal year 1997 Air Force budget data, including operation and maintenance (O&M) funding for operational, training, and working capital fund and product centers from the SAF/FM&C budget office. 
We also obtained research, development, test, and evaluation and procurement funding for the product centers. Those commands and installations included in our review managed about $25 billion of the $60 billion Air Force budget during fiscal year 1997. The respondents by command were as follows:
Air Combat Command and 24 of its 27 installations (the 48 of 58 staff responding included 24 comptrollers, 2 deputy comptrollers, and 22 budget officers);
Pacific Air Forces and 10 of its 12 installations (the 21 of 28 staff responding included 11 comptrollers, 1 deputy comptroller, and 9 budget officers);
U.S. Air Forces in Europe and six of its seven installations (the 13 of 17 staff responding included 7 comptrollers and 6 budget officers);
Air Mobility Command and its 11 installations (the 24 of 26 staff responding included 12 comptrollers, 2 deputy comptrollers, and 10 budget officers);
Air Education and Training Command and 13 of its 16 installations (the 28 of 36 staff responding included 13 comptrollers, 2 deputy comptrollers, and 13 budget officers);
Air Force Materiel Command and its 5 air logistics centers (all 20 staff responded, including 6 comptrollers, 4 deputy comptrollers, 5 budget officers, and 5 working capital fund managers); and
Air Force Materiel Command and its 3 product centers involved in aeronautics, electronics, and space and missile research, development, test, evaluation, and procurement efforts (all 12 staff responded, including 4 comptrollers, 4 deputy comptrollers, and 4 budget officers).
In an August 1988 report, GAO proposed a framework for evaluating the quality of the federal workforce over time. Quantifiable measures identified in that report include specific knowledge, skills, and abilities. Using this report and the JFMIP study on core competencies, and in collaboration with DOD representatives, we identified four indicators to measure the attributes that key financial managers can bring to their positions. 
These include formal education, professional work experience, training, and professional certifications. These attributes are being used to measure the qualifications and experience of key financial managers in the five DOD organizations included in our reviews. We then worked with Air Force officials in developing a data collection instrument to gather the following types of information under each indicator: formal education: degrees attained, majors, and specific accounting and financial-related courses completed; professional work experience: (1) number of years working in current position, years at DOD, years in other government agencies, and years in the private sector, and (2) experience in five specific financial management-related functions; training: during 1995-1996, specific subjects completed related to accounting, other financial-related topics, and general topics; and professional certifications: CPA, CGFM, other financial management-related certifications, and other nonfinancial management-related certifications held. For the four Air Force executives, we obtained information on their formal education, careers, and professional certifications from official biographies. For all other individuals, due to Air Force officials’ concerns over the completeness of personnel files and data bases, we agreed to collect information on the four indicators using profile instruments. This procedure is being used to collect qualification and experience information from all DOD organizations in this series of assignments. We sent profile instruments to the Office of the Secretary of the Air Force (Financial Management and Comptroller) and each major command and installation. Those activities then distributed the instruments to personnel serving in financial management positions identified for this review. 
We mailed more instruments to those activities from which the originals had not been received after 60 days and contacted those respondents whose profile instruments were returned with incomplete information. Through these efforts, we received profile instruments with complete information from 85 percent of the key financial managers included in this review. Figure I.1 contains the profile instrument we used to obtain personnel qualification and experience information from Air Force financial managers. As agreed with the Air Force, we did not attempt to verify the information contained in the biographies or the profiles we received. However, as noted above, for incomplete instruments, we contacted those individuals and obtained the missing information. We conducted our work from January through September 1997 in accordance with generally accepted government auditing standards. We included 10 key financial managers in the Office of the Assistant Secretary of the Air Force (Financial Management and Comptroller)—SAF/FM&C, all of whom provided information on their qualifications and experience. This population includes four staff involved in financial operations, one staff in financial management/accounting policy, and five staff in budget execution functions. Table II.1 shows the officer and civilian composition of this staff, by rank and grade. As shown in table II.2, all 10 respondents have bachelor’s degrees, with one of the 10 also reporting more than one major. Four of the 10 majored in accounting. As shown in table II.3, all 10 staff also held master’s degrees, nine of which were business related. All of the 10 respondents completed one or more courses in accounting-related subjects, as follows: 1-2 subjects: 1 civilian, 3-5 subjects: 2 civilians, and 6 or more subjects: 7 (3 officers and 4 civilians). 
All of the respondents in the latter group appear to have met the educational requirements to serve in GS-510 accountant positions. Two civilians also held doctoral degrees, one in business administration and the other in law. Figures II.1 and II.2 show the average number of years of work experience by rank for the three officers and by grade for the seven civilians, respectively. The average was 32 years for the officers, ranging from 24 to 37 years, and 28 years for the civilians, ranging from 22 to 44 years. As the figures show, the respondents have spent most of their careers in DOD. Figures II.3 and II.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined and the average number of years of experience in that function. All of the respondents have performed financial analysis functions. A review of their profiles also showed that the three officers and six civilians have performed tasks in three or more of these functions. Figure II.5 shows the training reported by the 10 respondents as being completed during 1995 and 1996. As indicated in the figure: total receiving accounting-related training: three (one officer and two civilians), total receiving financial-related training: four (one officer and three civilians), total receiving training in general topics: eight (two officers and six civilians), and total not receiving training: two (one officer and one civilian). Figure II.6 shows the numbers and types of professional certifications held by the SAF/FM&C financial managers. 
Of the six holding one or more of these certifications, three civilians were CPAs, two civilians were CGFMs, one officer and one civilian held other financial management-related certifications, and one officer held nonfinancial management-related certifications. Of the four staff that did not hold any professional certifications, one was an officer and three were civilians. The four Air Force operational commands included in this review were the Air Combat Command (ACC), Pacific Air Forces (PACAF), U.S. Air Forces in Europe (USAFE), and Air Mobility Command (AMC). Surveys were sent to 129 financial managers; 106 responded, representing all four operational commands and 51 of their 57 installations. Table III.1 shows the number of installations by major command, the number of key financial managers within each command, and the number responding to this review. The table also shows the operation and maintenance (O&M) funding for fiscal year 1997 managed by each major command. (Installations by command: Air Combat Command, 27; Pacific Air Forces, 12; U.S. Air Forces in Europe, 7; Air Mobility Command, 11; total, 57.) Table III.2 shows the officer and civilian composition of the respondents, by rank and grade, respectively. The 106 respondents included 54 comptrollers, 5 deputy comptrollers, and 47 budget officers. As shown in table III.3, 96 of the 106 respondents held bachelor’s degrees, with one of the 96 also reporting more than one major. The major for 25 of these respondents was accounting. As shown in table III.4, 79 staff also held master’s degrees, with 4 of these staff also reporting more than one major. The majors for 52 of these staff were business related. 
Of the 106 respondents, 100 (86 officers and 14 civilians) completed one or more courses in accounting-related subjects, as follows: 1-2 subjects: 21 (18 officers and 3 civilians), 3-5 subjects: 33 (26 officers and 7 civilians), and 6 or more subjects: 46 (42 officers and 4 civilians). Of the latter group, 41 officers and 3 civilians appear to have met the educational requirements to serve in GS-510 accountant positions. Figures III.1 and III.2 show the average number of years of work experience by rank for the 88 officers and by grade for the 18 civilians. The average was 16 years for the officers, ranging from 3 to 35 years, and 27 years for the civilians, ranging from 19 to 44 years. As the figures show, the respondents have spent most of their careers in DOD. (Civilian respondents by grade: GS-13, 2; GS-12, 14; GS-11, 2.) Figures III.3 and III.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined and the average number of years of experience in that function. Financial analysis was the function performed most frequently. A review of their profiles also showed that 37 officers and 7 civilians have performed tasks in 3 or more of these functions. Figure III.5 shows the training reported by the 106 respondents as being completed during 1995 and 1996. 
As indicated in the figure: total receiving accounting-related training: 15 (11 officers and 4 civilians), total receiving financial-related training: 37 (31 officers and 6 civilians), total receiving training in general topics: 71 (59 officers and 12 civilians), and total not receiving training: 28 (24 officers and 4 civilians). Figure III.6 shows the numbers and types of professional certifications held by the key operational command and installation financial managers. Of the 19 holding one or more of these certificates: 1 officer was a CPA, 2 officers and 1 civilian were CGFMs, 8 officers and 1 civilian held other financial management-related certifications, and 11 officers held nonfinancial management-related certifications. Of the 87 staff that did not hold any professional certifications, 71 were officers and 16 were civilians. The Air Education and Training Command (AETC) managed an O&M budget of $1.8 billion for fiscal year 1997. As shown in table IV.1, 28 of the 36 key financial managers from AETC (representing 13 of its 16 installations) provided information on their qualifications and experience. The respondents included 13 comptrollers, 2 deputy comptrollers, and 13 budget officers. As shown in table IV.2, 24 of the 28 respondents held bachelor’s degrees, with one of the 24 also reporting more than one major. Seven majored in accounting. As shown in table IV.3, 19 staff also held master’s degrees. The majors for 14 of these staff were business related. Of the 28 respondents, 26 (18 officers and 8 civilians) reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 4 (3 officers and 1 civilian), 3-5 subjects: 10 (8 officers and 2 civilians), and 6 or more subjects: 12 (7 officers and 5 civilians). 
Of the latter group, seven officers and three civilians appear to have met the educational requirements to serve in GS-510 accountant positions. Figures IV.1 and IV.2 show the average number of years of work experience by rank for the 19 officers and by grade for the 9 civilians. The average was 18 years for the officers, ranging from 7 to 27 years, and 26 years for the civilians, ranging from 12 to 31 years. As the figures show, most of the respondents have spent the major part of their careers in DOD. (Civilian respondents by grade: GS-13, 4; GS-12, 3; GS-11, 2.) Figures IV.3 and IV.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined and the average number of years of experience in that function. The financial management function performed most frequently was financial analysis. A review of their profiles also showed that 10 officers and 5 civilians have performed tasks in 3 or more of these functions. Figure IV.5 shows the training reported by the 28 respondents as being completed during 1995 and 1996. As indicated in the figure: total receiving accounting-related training: 5 (all officers), total receiving financial-related training: 13 (8 officers and 5 civilians), total receiving training in general topics: 21 (13 officers and 8 civilians), and total not receiving training: 5 (4 officers and 1 civilian). Figure IV.6 shows the numbers and types of professional certifications held by the key training command and installation financial managers. 
Of the four holding these certifications: none were CPAs, one officer was a CGFM, none held other financial management-related certifications, and three officers held nonfinancial management-related certifications. Of the 24 staff that did not hold professional certifications, 15 were officers and 9 were civilians. The five air logistics centers (ALCs) within the Air Force Materiel Command (AFMC) managed a fiscal year 1997 budget of $4.4 billion, derived from their customers’ O&M accounts. The 20 key financial managers at AFMC and the ALCs provided information on their qualifications and experience. Table V.1 provides the ranks of the 3 officers and grades of the 17 civilians. The respondents included six comptrollers, four deputy comptrollers, five budget officers, and five working capital fund managers. As shown in table V.2, 19 of the 20 respondents held bachelor’s degrees, with 3 of the 19 reporting more than one major. Seven majored in accounting. As shown in table V.3, 14 staff also held master’s degrees. The majors for five of these staff were business related. Of the 20 respondents, 18 (3 officers and 15 civilians) reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 3 (1 officer and 2 civilians), 3-5 subjects: 6 (2 officers and 4 civilians), and 6 or more subjects: 9 civilians. All of the 9 civilians in the latter group appear to have met the educational requirements to serve in GS-510 accountant positions. Figures V.1 and V.2 show the average number of years of work experience by rank for the 3 officers and by grade for the 17 civilians. The average was 27 years for the officers, ranging from 27 to 28 years, and 26 years for the civilians, ranging from 17 to 32 years. As the figures show, the respondents have spent most of their careers in DOD. 
Figures V.3 and V.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined and the average number of years of experience in that function. The financial management function performed most frequently was financial analysis. A review of their profiles also showed that the 3 officers and 10 of the 17 civilians have performed tasks in 3 or more of these functions. Figure V.5 shows the training reported by the 20 respondents as being completed during 1995 and 1996. As indicated in the figure: total receiving accounting-related training: 3 (1 officer and 2 civilians), total receiving financial-related training: 6 (1 officer and 5 civilians), total receiving training in general topics: 11 (1 officer and 10 civilians), and total not receiving training: 7 (2 officers and 5 civilians). Figure V.6 shows the numbers and types of professional certifications held by the working capital fund key financial managers. Of the nine holding one or more of these certifications: none were CPAs, none were CGFMs, seven civilians held other financial management-related certifications, and seven civilians held nonfinancial management-related certifications. Of the 11 that did not hold any professional certifications, 3 were officers and 8 were civilians. In addition to the five air logistics centers, the Air Force Materiel Command (AFMC) also has oversight of product centers. 
The Aeronautical Systems Center, Electronics Systems Center, and Space and Missile Systems Center managed an O&M budget of $1.22 billion, a research, development, test, and evaluation (RDT&E) budget of $5.23 billion, and a procurement budget of $5.05 billion during fiscal year 1997. The 12 key financial managers at AFMC and these centers provided information on their qualifications and experience. Table VI.1 provides the ranks of the six officers and grades of the six civilians. The respondents included four comptrollers, four deputy comptrollers, and four budget officers. As shown in table VI.2, the 12 respondents held bachelor’s degrees. Five majored in accounting. As shown in table VI.3, 10 staff also held master’s degrees. The majors for eight of these staff were business related. All of the 12 respondents reported completing one or more courses in accounting-related subjects, as follows: 1-2 subjects: 1 officer, 3-5 subjects: 5 (2 officers and 3 civilians), and 6 or more subjects: 6 (3 officers and 3 civilians). All of the latter group appear to have met the educational requirements to serve in GS-510 accountant positions. Figures VI.1 and VI.2 show the average number of years of work experience by rank for the six officers and by grade for the six civilians. The average was 26 years for the officers, ranging from 18 to 30 years, and 27 years for the civilians, ranging from 19 to 32 years. As the figures show, the respondents have spent most of their careers in DOD. Figures VI.3 and VI.4 show the number of officers and civilians who indicated that they had performed each financial management function previously outlined and the average number of years of experience in that function. 
The financial management functions performed most frequently were financial analysis and financial reporting/accounting policy. A review of their profiles also showed that four officers and one civilian have performed tasks in three or more of these functions. Figure VI.5 shows the training reported by the 12 respondents as being completed during 1995 and 1996. As indicated in the figure: total receiving accounting-related training: 1 (an officer), total receiving financial-related training: 4 (2 officers and 2 civilians), total receiving training in general topics: 11 (5 officers and 6 civilians), and total not receiving training: 1 (an officer). Figure VI.6 shows the numbers and types of professional certifications held by the product center key financial managers. Of the seven holding one or more of these certifications: two officers were CPAs, none were CGFMs, two officers and four civilians held other financial management-related certifications, and two civilians held nonfinancial management-related certifications. Of the five staff that did not hold any professional certifications, three were officers and two were civilians.

George H. Stalcup, Associate Director
Geoffrey B. Frank, Assistant Director
Robert L. Self, Evaluator-in-Charge
Patricia A. Summers, Senior Auditor
Dennis B. Fauber, Senior Evaluator
Francine M. DelVecchio, Communications Analyst
Michele A. Howard, Intern

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Orders in person:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by fax at (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative mandate, GAO provided information on key financial managers within the Department of the Air Force, specifically focusing on the qualifications and professional work experience of 4 Air Force management executives and 173 key financial management staff representing 79 of the 88 Air Force organizations. GAO noted that: (1) the four Air Force financial management executives included the: (a) Assistant Secretary of the Air Force (Financial Management and Comptroller); (b) Principal Deputy Assistant Secretary of the Air Force (Financial Management and Comptroller); (c) Deputy Assistant Secretary, Financial Operations; and (d) Deputy Assistant Secretary, Budget; (2) each of the executives had attained master's degrees; (3) none held professional certifications; and (4) of the 173 other key Air Force financial managers responding to GAO's review: (a) almost 70 percent (117) were military officers, serving mainly as comptrollers and budget officers at major commands and as comptrollers at installations; (b) 56 were civilian personnel serving mainly in budget officer positions at installations; (c) all of the 117 officers and 41 of the 56 civilians reported holding bachelor's degrees; (d) about 30 percent of respondents with bachelor's degrees majored in accounting, while approximately 50 percent majored in other business-related areas; (e) 129 (99 officers and 30 civilians) also reported holding advanced degrees; (f) about two-thirds of these degrees were in business-related majors other than accounting, while the majors of the remaining respondents were not business-related, and two civilians held doctoral degrees--one in business administration and the other in law; (g) the officers' careers ranged from 3 to 38 years, averaging 18 years, while the civilians' careers ranged from 12 to 44 years, averaging 27 years; (h) officers with less than 12 years of experience were most often assigned as budget officers at installations; (i) about 50 percent of 
all respondents reported performing tasks throughout their careers in several financial management-related functions included in GAO's review; (j) 131 respondents (86 officers and 45 civilians) reported receiving training during 1995 and 1996, with 9 out of every 10 listing general topics, such as computers and supervision, as examples of the training completed; (k) about one-half also reported completing financial-related training during this period, while only about 2 out of 10 reported completing accounting-related training, such as accounting standards and financial reporting; (l) about 20 percent of the respondents reported holding one or more financial management-related certifications; and (m) of the 32 holding certificates, 6 were Certified Public Accountants (CPA), 6 were Certified Government Financial Managers (CGFM), and 24 were others, such as Certified Cost Analysts and Certified Acquisition Professional in Financial Management and Comptrollership.
NRC is an independent agency of over 3,200 employees established by the Energy Reorganization Act of 1974 to regulate civilian—that is, commercial, industrial, academic, and medical—use of nuclear materials. NRC is headed by a five-member Commission. The President appoints the Commission members, who are confirmed by the Senate, and designates one of them to serve as Chairman and official spokesperson. The Commission as a whole formulates policies and regulations governing nuclear reactor and materials safety, issues orders to licensees, and adjudicates legal matters brought before it. NRC and the licensees of nuclear power plants share the responsibility for ensuring that commercial nuclear power reactors are operated safely. NRC is responsible for issuing regulations, licensing and inspecting plants, and requiring action, as necessary, to protect public health and safety. Plant licensees have the primary responsibility for safely operating their plants in accordance with their licenses and NRC regulations. NRC has the authority to take actions, up to and including shutting down a plant, if licensing conditions are not being met and the plant poses an undue risk to public health and safety. Nuclear power plants have many physical structures, systems, and components, and licensees have numerous activities under way, 24 hours a day, to ensure that plants operate safely. NRC relies on, among other things, its on-site resident inspectors to assess plant conditions and the licensees’ quality assurance programs such as those required for maintenance and problem identification and resolution. With its current resources, NRC can inspect only a relatively small sample of the numerous activities going on during complex plant operations. According to NRC, its focus on the more safety significant activities is made possible by the fact that safety performance at plants has improved as a result of more than 25 years of operating experience. 
Commercial nuclear power plants are designed according to a “defense in depth” philosophy revolving around redundant, diverse, and reliable safety systems. For example, two or more key components are put in place so that if one fails, there is another to back it up. Plants have numerous built-in sensors to monitor important indicators such as water temperature and pressure. Plants also have physical barriers to contain the radiation and provide emergency protection. For example, the nuclear fuel is contained in ceramic pellets that lock in the radioactive byproducts; the fuel pellets are sealed inside rods made of special material designed to contain fission products; and the fuel rods are placed in reactors housed in containment buildings made of several feet of concrete and steel. Furthermore, the nuclear power industry formed an organization, the Institute of Nuclear Power Operations (INPO), with the mission to “promote the highest levels of safety and reliability—to promote excellence—in the operation of nuclear electric generating plants.” INPO provides a system of personnel training and qualification for all key positions at nuclear power plants, and workers undergo both periodic training and assessment. INPO also conducts periodic evaluations of operating nuclear plants, focusing on plant safety and reliability, in the areas of operations, maintenance, engineering, radiological protection, chemistry, and training. Licensees make these evaluations available to the NRC for review, and the NRC staff uses the evaluations as a means to determine whether its oversight process has missed any performance issues. NRC uses various tools to oversee the safe operation of nuclear power plants, generally consisting of physical plant inspections of equipment and records and objective indicators of plant performance. These tools are risk-informed in that they are focused on the issues considered most important to plant safety. 
Based on the results of the information it collects through these efforts, NRC takes a graded approach to its oversight, increasing the level of regulatory attention to plants based on the severity of identified performance issues. NRC bases its regulatory oversight process on the principle and requirement that plant licensees routinely identify and address performance issues without NRC’s direct involvement. An important aspect of NRC’s inspections is ensuring the effectiveness of licensee quality assurance programs. NRC assesses overall plant performance and communicates these results to licensees on a semiannual basis. During fiscal year 2005, NRC inspectors spent a total of 411,490 hours on plant inspection activities (an average of 77 hours per week at each plant). The majority of these inspection efforts were spent on baseline inspections, which all plants receive on an almost continuous basis. Baseline inspections, which are mostly conducted by the two to three NRC inspectors located at each nuclear power plant site, evaluate the safety performance of plant operations and review plant effectiveness at identifying and resolving its safety problems. There are more than 30 baseline inspection procedures, conducted at varying intervals, ranging from quarterly to triennially, and involving both physical observation of plant activities and reviews of plant reports and data. The inspection procedures are risk-informed to focus inspectors’ efforts on the most important areas of plant safety in four ways: (1) areas of inspection are included in the set of baseline procedures based on, in part, their risk importance, (2) risk information is used to help determine the frequency and scope of inspections, (3) the selection of activities to inspect within each procedure is informed with plant-specific risk information, and (4) the inspectors are trained in the use of risk information in planning their inspections. 
For inspection findings that are more than minor, NRC uses its significance determination process (SDP) to assign each finding one of four colors to reflect its risk significance. Green findings equate to very low risk significance, while white, yellow, and red colors represent increasing levels of risk, respectively. Throughout its application of the SDP, NRC incorporates information from the licensee, and the licensee has the opportunity to formally appeal the final determination that is made. In addition to assigning each finding a color based on its risk significance, all findings are evaluated to determine if certain aspects of plant performance, referred to as cross-cutting issues, were a contributing cause to the performance problem. The cross-cutting issues consist of (1) problem identification and resolution, (2) human performance, and (3) safety consciousness in the work environment. To illustrate, in analyzing the failure of a valve to operate properly, NRC inspectors determined that the plant licensee had not followed the correct procedures when performing maintenance on the valve, and thus NRC concluded the finding was associated with the human performance cross-cutting area. If, during the 12-month assessment period, NRC identifies multiple findings with documented cross-cutting aspects, including more than three findings with the same causal theme, and NRC has a concern about the licensee’s progress in addressing these areas, it may determine that the licensee has a “substantive” cross-cutting issue. Opening a substantive cross-cutting issue serves as a way for NRC to notify the plant licensee that problems have been identified in one of the areas and that NRC will focus its inspection efforts in the cross-cutting area of concern. When NRC becomes aware of one or more performance problems at a plant that are assigned a risk color greater-than-green (white, yellow, or red), it conducts supplemental inspections. 
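The cross-cutting screening described above can be sketched in a few lines. This is a hypothetical illustration, not NRC's actual tooling: the record layout is invented, and the only rule encoded is the stated threshold (more than three findings sharing a causal theme in the assessment period); in practice NRC also weighs the licensee's progress before opening a substantive cross-cutting issue.

```python
from collections import Counter

# Each finding: (risk color, cross-cutting area, causal theme) —
# an invented layout for illustration only.
findings = [
    ("green", "human performance", "procedure not followed"),
    ("green", "human performance", "procedure not followed"),
    ("white", "human performance", "procedure not followed"),
    ("green", "human performance", "procedure not followed"),
    ("green", "problem identification and resolution", "untimely evaluation"),
]

def substantive_cross_cutting(findings_12mo, threshold=3):
    """Flag areas where more than `threshold` findings share a causal theme
    over the 12-month assessment period (a simplification of the rule)."""
    themes = Counter((area, theme) for _, area, theme in findings_12mo)
    return {area for (area, theme), n in themes.items() if n > threshold}

print(substantive_cross_cutting(findings))  # {'human performance'}
```

Note that findings of any color, including green, contribute to the count; the risk color and the cross-cutting screening are separate dimensions of the same finding.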
Supplemental inspections, which are performed by regional staff, expand the scope beyond baseline inspection procedures and are designed to focus on diagnosing the cause of the specific performance deficiency. NRC increases the scope of its supplemental inspection procedures based on the number of greater-than-green findings identified, the area where the performance problem was identified, and the risk color assigned. For example, if one white finding is identified, NRC conducts a follow-up inspection directed at assessing the licensee’s corrective actions to ensure they were sufficient in both correcting the specific problem identified and identifying and addressing the root and contributing causes to prevent recurrence of a similar problem. If multiple yellow findings or a single red finding is identified, NRC conducts a much more comprehensive inspection, which includes obtaining information to determine whether continued operation of the plant is acceptable and whether additional regulatory actions are necessary to address declining plant performance. This type of more extensive inspection is usually conducted by a multi-disciplinary team of NRC inspectors and may take place over a period of several months. NRC inspectors assess the adequacy of the licensee’s programs and processes such as those for identifying, evaluating, and correcting performance issues and the overall root and contributing causes of identified performance deficiencies. NRC conducts special inspections when specific events occur at plants that are of particular interest to NRC because of their potential safety significance. Special inspections are conducted to determine the cause of the event and assess the licensee’s response. For special inspections, a team of experts is formed and an inspection charter issued that describes the scope of the inspection efforts. 
At one plant we reviewed, for example, a special inspection was conducted to investigate the circumstances surrounding the discovery of leakage from a spent fuel storage pool. Among the objectives of this inspection were to assess the adequacy of the plant licensee’s determination of the source and cause of the leak, the risk significance of the leakage, and the proposed strategies to mitigate leakage that had already occurred and repair the problem to prevent further leakage. In addition to its various inspections, NRC also collects plant performance information through a performance indicator program, which it maintains in cooperation with the nuclear power industry. On a quarterly basis, each plant submits data for 15 separate performance indicators. These objective numeric measures of plant operations are designed to measure plant performance related to safety in various aspects of plant operations. For example, one indicator measures the number of unplanned reactor shutdowns during the previous four quarters while another measures the capability of alert and notification system sirens, which notify residents living near the plant in the event of an accident. Working with the nuclear power industry, NRC established specific criteria for acceptable performance with thresholds set and assigned colors to reflect increasing risk according to established safety margins for each of the indicators. Green indicators reflect performance within the acceptable range while white, yellow, and red colors represent decreasing plant performance, respectively. NRC inspectors review and verify the data submitted for each performance indicator annually through the baseline inspection process. If questions arise about how to calculate a particular indicator or what the correct value should be, there is a formal feedback process in place to resolve the issue. 
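The banded thresholds work like a simple ordered lookup. In this sketch the numeric boundaries are illustrative placeholders rather than NRC's published values; only the green/white/yellow/red ordering and the idea of indicator-specific thresholds come from the text above.

```python
# Hypothetical thresholds for one indicator (e.g., unplanned reactor
# shutdowns over four quarters). The numeric boundaries below are
# invented placeholders, not NRC's actual values.
BANDS = [
    (3.0, "green"),    # value <= 3.0  -> performance within acceptable range
    (6.0, "white"),
    (25.0, "yellow"),
]

def indicator_color(value):
    """Map an indicator value to its performance band color."""
    for upper, color in BANDS:
        if value <= upper:
            return color
    return "red"  # beyond the last band boundary

print(indicator_color(2.5), indicator_color(7.1))  # green yellow
```

Each of the 15 indicators would carry its own band boundaries, set jointly with industry against established safety margins, as described above.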
When performance indicator thresholds are exceeded, NRC responds in a graded fashion by performing supplemental inspections that range in scope depending on the significance of the performance issue. Under the ROP, NRC places each plant into a performance category on the agency’s action matrix, which corresponds to increasing levels of oversight based on the number and risk significance of inspection findings and performance indicators. The action matrix is NRC’s formal method of determining what additional oversight procedures—mostly supplemental inspections—are required. Greater-than-green inspection findings are included in the action matrix for a minimum of four quarters to allow sufficient time for additional findings to accumulate that may indicate more pervasive performance problems requiring additional NRC oversight. If a licensee fails to correct the performance problems within the initial four quarters, the finding may be held open and considered for additional oversight for more than the minimum four quarters. At the end of each 6-month period, NRC issues an assessment letter to each plant licensee. This letter describes what level of oversight the plant will receive according to its placement in the action matrix performance categories, what actions NRC expects the plant licensee to take as a result of the performance issues identified, and any documented substantive cross-cutting issues. NRC also holds an annual public meeting at or near each plant site to review performance and address questions about the plant’s performance from members of the public and other interested stakeholders. Most inspection reports, assessment letters, and other materials related to NRC’s oversight processes are made publicly available through an NRC website devoted to the ROP. The website also includes plant-specific quarterly summaries of green or greater inspection findings and all the performance indicators.
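The graded escalation just described can be sketched as a simple decision function that maps counts of greater-than-green inputs to an oversight response. This is an illustrative simplification of the logic as described in this statement, not a reproduction of NRC's actual action matrix, which has additional performance columns and placement rules.

```python
def oversight_level(white, yellow, red):
    """Simplified, illustrative placement logic for NRC's graded
    oversight. Inputs are counts of open greater-than-green
    inspection findings or performance indicators by color.

    The real action matrix considers additional factors, such as
    which areas of plant operations the findings fall in.
    """
    if red >= 1 or yellow >= 2:
        # A single red or multiple yellow inputs trigger the most
        # comprehensive team inspection described above.
        return "comprehensive team inspection"
    if yellow == 1 or white >= 2:
        return "expanded supplemental inspection"
    if white == 1:
        # One white input: follow-up inspection of the licensee's
        # corrective actions.
        return "follow-up supplemental inspection"
    return "baseline inspections only"
```

In this sketch, a plant with no greater-than-green inputs stays at baseline inspections, mirroring how most plants have been treated under the ROP.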
The ROP has identified numerous performance deficiencies as inspection findings at nuclear power plants since it was first implemented, but most of these were considered to be of very low risk to safe plant operations. Similarly, there have been very few instances in which performance indicator data exceeded acceptable standards. As a result, few plants have been subjected to high levels of oversight. Of more than 4,000 inspection findings identified between 2001 and 2005, 97 percent were green. While green findings are considered to be of “very low” safety significance, they represent a performance deficiency on the part of the plant licensee and thus are important to correct. Green findings include such deficiencies as a worker failing to wear the proper radiation detector or a licensee failing to properly evaluate and approve the storage of flammable materials in the vicinity of safety-related equipment. NRC does not follow up on the corrective action taken for every green finding identified; rather, it relies on the licensee to address the findings and track their resolution through the plant’s corrective action program. NRC does, however, periodically follow up on some of the actions taken by the licensee to address green findings through an inspection specifically designed to evaluate the effectiveness of the licensee’s corrective action program. NRC officials stated that green findings provide useful information on plant performance, and NRC inspectors use the findings to identify performance trends in certain areas and help inform their selection of areas to focus on during future inspections. In contrast to the many green findings, NRC has identified 12 findings of the highest risk significance (7 yellow and 5 red), accounting for less than 1 percent of the findings since 2001.
For example, one plant was issued a red finding—the highest risk significance—after a steam generator tube failed, causing an increased risk in the release of radioactive material. Similar to the inspection findings, most performance indicator reports have shown the indicators to be within the acceptable levels of performance. Only 156 of the more than 30,000 indicator reports from 2001 to 2005, or less than 1 percent, exceeded the acceptable performance threshold. Four of the 15 performance indicators have always been reported to be within acceptable performance levels. In addition, 46 plants have never had a performance indicator fall outside of the acceptable level, and only three plants reported having a yellow indicator for one performance measure; no red indicators have ever been reported. On the basis of its inspection findings and performance indicators, NRC has subjected more than three-quarters of the 103 operating plants to at least some level of increased oversight (beyond the baseline inspections) for varying amounts of time. Most of these plants received the lowest level of increased oversight, consisting of a supplemental inspection to follow up on the identification of one or two white inspection findings or performance indicators. Five plants have received the highest level of plant oversight for which NRC allows plants to continue operations, due to the identification of multiple white or yellow findings and/or the identification of a red finding. One plant received this level of oversight because NRC determined that the licensee failed to address the common causes of two white findings, and NRC held the findings open for more than four quarters. One of these findings involved the recurrent failure of a service water pump because the licensee failed to take adequate corrective action after the first failure.
NRC inspectors at the plants we reviewed indicated that, when plant performance declines, it is often the result of ineffective corrective action programs, problems related to human performance, or complacent management, which often results in deficiencies in one or more of the cross-cutting areas. In assessing the results of the ROP data, we found that all plants subjected to NRC’s highest level of oversight also had a substantive cross-cutting issue open either before or during the time that they were subject to increased oversight inspections. Overall, NRC’s oversight process shows mostly consistent results from 2001 to 2005. For example, the total number of green findings at all plants ranged from 657 to 889 per year, and the total number of other findings ranged from 10 to 30 per year, with no strong trend (see fig. 1). Only in the area of cross-cutting issues—that is, inspection findings with which one or more cross-cutting issues were associated—is an increasing trend evident (see fig. 2). According to NRC, this increase is due in part to the development of guidance on the identification and documentation of cross-cutting issues and NRC’s increased emphasis on them in more recent years. According to NRC officials, the results of its oversight process at an industry or summary level serve as an indicator of industry performance, which to date indicates good safety performance. On an annual basis, NRC analyzes the overall results of its inspection and performance indicator programs and compares them with industry-level performance metrics to ensure all metrics are consistent, and it takes action if adverse trends are identified.
NRC communicates the results of its oversight process on a plant-specific basis to plant managers, members of the public, and other government agencies through annual public meetings held at or near each site and through a website. However, it does not publicly summarize the overall results of its oversight process, such as the total number and types of inspection findings and performance indicators falling outside of acceptable performance categories, on a regular basis. NRC has taken a proactive approach to improving its reactor oversight process. It has several mechanisms in place to incorporate feedback from both external and internal stakeholders and is currently working on improvements in key areas of the process, including better focusing inspections on the areas most important to safety, improving its timeliness in determining the risk significance of its inspection findings, and modifying the way that it measures some performance indicators. NRC is also working to address what we believe is a significant shortcoming in its oversight process by improving its ability to address plants’ safety culture, allowing it to better identify and address early indications of deteriorating safety at plants before performance problems develop. According to NRC officials, the ROP was implemented with the understanding that it would be an evolving process and that improvements would be made as lessons learned were identified. Each fall, NRC solicits feedback from external stakeholders, including industry organizations, public interest groups, and state and local officials, through a survey published in the Federal Register. NRC also conducts an internal survey of its site, regional, and headquarters program and management staff every other year to obtain their opinions on the effectiveness of the ROP.
Additionally, NRC has in place a formal feedback mechanism whereby NRC staff can submit recommendations for improving various oversight components, and NRC staff meet with industry officials on a monthly basis—in addition to various meetings, workshops, and conferences—to discuss oversight implementation issues and concerns. NRC staff also incorporate direction provided by the NRC Commissioners and recommendations from independent evaluations, such as those by GAO and the NRC Inspector General. The results of these efforts are pulled together in an annual self-assessment report, which outlines the overall results of NRC’s outreach and the changes the agency intends to make in the year ahead. According to NRC officials, the changes made to the ROP since its implementation in 2000—including those made in response to the Davis-Besse incident—have generally been refinements to the existing process rather than significant changes to how it conducts its oversight. In the case of Davis-Besse, NRC formed a task force to review the agency’s regulatory processes. The task force’s report, issued in September 2002, contained more than 50 recommendations, many associated with the ROP. Among the more significant ROP-related recommendations were those to make the performance indicator that monitors unidentified leakage more accurate, develop specific guidance to inspect boric acid control programs and vessel head penetration nozzles, modify the inspection program to provide for better follow-up of longstanding issues, and enhance the guidance for managing plants that are in an extended shutdown condition as a result of significant performance problems. NRC program officials told us that the task force’s most significant recommendations were in areas outside of the ROP, such as improving the agency’s operating experience program. According to NRC, it has implemented almost all of the task force’s recommendations.
Other modifications that NRC has recently made or is in the process of making include the following: NRC recently revised seven of its baseline inspection procedures to better focus the level and scope of its inspection efforts on those areas most important to safety. These revisions resulted from a detailed analysis in 2005 of its more than 30 baseline inspection procedures. The effort involved analyzing the number of findings resulting from each of its inspection procedures and the time spent directly observing plant activities or reviewing licensee paperwork, among other things. NRC has efforts underway to improve what it refers to as its significance determination process (SDP). An audit by the NRC Inspector General, a review by a special task group formed by NRC, and feedback from other stakeholders have pointed to several significant weaknesses with the SDP. For example, internal and external stakeholders raised concerns about the amount of time, level of effort, and knowledge and resources required to determine the risk significance of some findings. Industry officials commented that because most inspection findings are green, one white finding at a plant can place it in the “bottom quartile” of plants from a performance perspective. Therefore, industry officials explained, licensees try to avoid this placement and will expend a great deal of effort and resources to provide additional data to NRC to ensure the risk level of a finding is appropriately characterized. This can add significant time to the process because different technical tools may be used that then must be incorporated with NRC’s tools and processes. The delay in assigning a color to a finding while the new information is being considered could also affect a plant’s placement on NRC’s action matrix, essentially delaying the increased oversight called for if the finding is determined to be greater-than-green.
NRC developed an SDP Improvement Plan in order to address these and other concerns and track its progress in implementing key changes. For example, NRC introduced a new process aimed at improving timeliness by engaging decision-makers earlier in the process to more quickly identify the scope of the evaluation, the resources needed, and the schedule to complete the evaluation. NRC is also taking actions to improve its performance indicators. These actions are partly to address concerns that the indicators have not contributed to the early identification of poorly performing plants to the degree originally envisioned, as the indicators are almost always within acceptable performance levels (green). There have been several cases in which plants reported an acceptable performance indicator but performance problems were subsequently identified. For example, NRC inspectors at one plant noted that while performance indicator data related to the alert and notification system in place for emergency preparedness had always been reported green, the system had not always been verified to be functioning properly. On the other hand, industry officials believe that the high percentage of indicators that are green is indicative of plants’ good performance. Several plant managers told us that they closely monitor and manage to the acceptable performance thresholds established for each indicator and will often take action to address performance issues well before an indicator crosses the acceptable performance threshold. Because NRC inspectors verify indicator data once a year, a potential disagreement over the data might not surface for up to a year after it is reported, and it may take even longer to resolve the disagreement with the licensee. Similar to delays with the SDP, a delay in assigning a color while the disagreement is resolved could affect a plant’s placement on NRC’s action matrix and delay the increased oversight called for if the indicator is determined to be greater-than-green.
NRC plans to work with the industry to review selected indicator definitions to make their interpretation clearer and reduce the number of discrepancies. To date, NRC has focused significant effort on developing a key indicator to address known problems with the performance indicators measuring the unavailability of safety systems. NRC is also in the process of changing the definitions for several other indicators, in addition to considering the feasibility of new indicators. I would now like to discuss what we believe is one of NRC’s most important efforts to improve its oversight process: increasing its ability to identify and address deteriorating safety culture at plants. NRC and others have long recognized that safety culture and the attributes that make up safety culture, such as attention to detail, adherence to procedures, and effective corrective and preventative action, have a significant impact on a plant’s performance. Despite this recognition and several external groups’ recommendations to better incorporate safety culture aspects into its oversight process, NRC did not include specific measures to explicitly address plant safety culture when it developed the ROP in 2000. The 2002 Davis-Besse reactor vessel head incident highlighted that this was a significant weakness in the ROP. In investigating this event, we and others found that NRC did not have an effective means to identify and address early indications of deteriorating safety at plants before performance problems develop. Largely as a result of this event, in August 2004, the NRC Commission directed the NRC staff to enhance the ROP by more fully addressing safety culture. In response to the Commission’s directive, the NRC staff formed a safety culture working group in early 2005. The working group incorporated the input of its stakeholders through a series of public meetings held in late 2005 and early 2006.
In February 2006, NRC issued its proposed approach to better incorporate safety culture into the ROP. NRC officials expect to fully implement all changes effective in July 2006. NRC’s proposed safety culture changes largely consist of two main approaches: first, clarifying the identification and treatment of cross-cutting issues in its inspection processes and, second, developing a structured way for NRC to determine the need for a safety culture evaluation of plants. NRC has developed new definitions for each of its cross-cutting issues to more fully address safety culture aspects and additional guidance on their treatment once they are identified. For example, the problem identification and resolution cross-cutting area now comprises several components: the corrective action program, self and independent assessments, and operating experience. NRC inspectors are to assess every inspection finding to determine if it is associated with one or more of the components that make up each of the cross-cutting areas. Inspectors then determine, on a semi-annual basis, whether a substantive cross-cutting issue exists on the basis of the number and areas of cross-cutting components identified. If the same substantive cross-cutting issue is identified in three consecutive assessment periods, NRC may request that the licensee perform an assessment of its safety culture. The intent is to provide an opportunity to diagnose a potentially declining safety culture before significant safety performance problems occur. Under its approach, NRC would expect the licensees of plants with more than one white finding or one yellow finding to evaluate whether the performance issues were in any way caused by any safety culture components, and if the licensee did not identify an important safety culture component, NRC might request that the licensee complete an independent assessment of its safety culture.
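The trigger described above, the same substantive cross-cutting issue appearing in three consecutive semi-annual assessment periods, amounts to a sliding-window check over the assessment history. A minimal sketch, illustrating the rule as described in this statement rather than any NRC software:

```python
def safety_culture_assessment_due(issues_by_period):
    """Return True if the same substantive cross-cutting issue
    appears in three consecutive semi-annual assessment periods.

    issues_by_period: chronological list of sets, each holding the
    substantive cross-cutting issues identified in one period.
    """
    for i in range(len(issues_by_period) - 2):
        # Intersect three consecutive periods; any issue common to
        # all three triggers a request for a safety culture
        # assessment by the licensee.
        if issues_by_period[i] & issues_by_period[i + 1] & issues_by_period[i + 2]:
            return True
    return False
```

For example, a plant with a human performance issue open in three straight periods would trigger the request, while a period in which the issue was not identified breaks the sequence.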
For plants where more significant or multiple findings have been identified, the NRC would not only independently evaluate the adequacy of the independent assessment of the licensee’s safety culture but might also conduct its own independent assessment of the licensee’s safety culture. Some of NRC’s proposed actions regarding safety culture have been controversial, and not all stakeholders completely agree with the agency’s approach. For example, the nuclear power industry has expressed concern that the changes could introduce undue subjectivity to NRC’s oversight, given the difficulty in measuring these often intangible and complex concepts. Several of the nuclear power plant managers at the sites we reviewed said that it is not always clear why a cross-cutting issue was associated with a finding, or what it will take to resolve a substantive cross-cutting issue once one has been identified. Some industry officials worry that this initiative will further increase the number of findings that have cross-cutting elements associated with them; if nearly all findings carry such elements, they caution, the elements will lose their value. Industry officials also warn that if the initiative is not implemented carefully, it could divert resources away from other important safety issues. Other external stakeholders, on the other hand, suggest that this effort is an important step in improving NRC’s ability to identify emerging issues at plants before they result in performance problems. Importantly, there will be additional tools in place for NRC to use when it identifies potential safety culture concerns. NRC officials view this effort as the beginning step in an incremental approach and acknowledge that continual monitoring, improvements, and oversight will be needed in order to better allow inspectors to detect deteriorating safety conditions at plants before events occur.
NRC plans to evaluate stakeholder feedback and make changes based on lessons learned from the initial implementation of its changes as part of its annual self-assessment process for calendar year 2007. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 (or at wellsj@gao.gov). Raymond H. Smith, Jr. (Assistant Director), Alyssa M. Hundrup, Alison O’Neill, and Dave Stikkers made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Nuclear Regulatory Commission (NRC) has the responsibility to provide oversight to ensure that the nation's 103 commercial nuclear power plants are operated safely. While the safety of these plants has always been important, since radioactive release could harm the public and the environment, NRC's oversight has become even more critical as the Congress and the nation consider the potential resurgence of nuclear power in helping to meet the nation's growing energy needs. Prior to 2000, NRC was criticized for having a safety oversight process that was not always focused on the most important safety issues and, in some cases, was overly subjective. To address these and other concerns, NRC implemented a new oversight process--the Reactor Oversight Process (ROP). NRC continues to modify the ROP to incorporate feedback from stakeholders and in response to other external events. This testimony summarizes information on (1) how NRC oversees nuclear power plants, (2) the results of the ROP over the past several years, and (3) the aspects of the ROP that need improvement and the status of NRC's efforts to improve them. This testimony discusses preliminary results of GAO's work. GAO will report in full at a later date. GAO analyzed program-wide information, inspection results covering 5 years of ROP operations, and detailed findings from a sample of 11 plants. NRC uses various tools to oversee the safe operation of nuclear power plants, including physical plant inspections and quantitative measures or indicators of plant performance. To apply these tools, NRC uses a risk-informed and graded approach--that is, one considering safety significance in deciding on the equipment and operating procedures to be inspected and employing increasing levels of regulatory attention to plants based on the severity of identified performance problems. The tools include three types of inspections--baseline, supplemental, and special.
All plants receive baseline inspections of plant operations almost continuously by NRC inspectors. When NRC becomes aware of a performance problem at a plant, it conducts supplemental inspections, which expand the scope of baseline inspections. NRC conducts special inspections to investigate specific safety incidents or events that are of particular interest to NRC because of their potential significance to safety. The plants also self-report on their safety performance using performance indicators for plant operations related to safety, such as the number of unplanned reactor shutdowns. Since 2001, NRC's ROP has resulted in more than 4,000 inspection findings concerning nuclear power plant licensees' failure to comply with regulations or other safe operating procedures. About 97 percent of these findings were for actions or failures NRC considered important to correct but of low significance to overall safe operation of the plants. In contrast, 12 of the inspection findings, or less than 1 percent, were of the highest levels of significance to safety. On the basis of its findings and the performance indicators, NRC has subjected more than three-quarters of the 103 operating plants to oversight beyond the baseline inspections for varying amounts of time. NRC has improved several key areas of the ROP, largely in response to independent reviews and feedback from stakeholders. These improvements include better focusing its inspections on those areas most important to safety, reducing the time needed to determine the risk significance of inspection findings, and modifying the way that some performance indicators are measured. NRC also recently undertook a major initiative to improve its ability to address plants' safety culture--that is, the organizational characteristics that ensure that issues affecting nuclear plant safety receive the attention their significance warrants. GAO and others have found this to be a significant shortcoming in the ROP. 
Although some industry officials have expressed concern that the changes could introduce undue subjectivity to NRC's oversight, given the difficulty in measuring these often intangible and complex concepts, other stakeholders believe the approach will give NRC better tools to address safety culture issues at plants. NRC officials acknowledge that the effort is only a step in an incremental approach and that continual monitoring, improvements, and oversight will be needed to fully detect deteriorating safety conditions before an event occurs.
Since DHS began operations in March 2003, it has developed and implemented key policies, programs, and activities for implementing its homeland security missions and functions that have created and strengthened a foundation for achieving its potential as it continues to mature. However, the department’s efforts have been hindered by challenges faced in leading and coordinating the homeland security enterprise; implementing and integrating its management functions for results; and strategically managing risk and assessing, and adjusting as necessary, its homeland security efforts. DHS has made progress in these three areas but needs to take additional action, moving forward, to help it achieve its full potential. DHS has made important progress in implementing and strengthening its mission functions over the past 8 years, including implementing key homeland security operations and achieving important goals and milestones in many areas. The department’s accomplishments include developing strategic and operational plans across its range of missions; hiring, deploying, and training workforces; establishing new, or expanding existing, offices and programs; and developing and issuing policies, procedures, and regulations to govern its homeland security operations. For example, DHS issued the QHSR, which provides a strategic framework for homeland security, and the National Response Framework, which outlines guiding principles for disaster response. DHS also successfully hired, trained, and deployed workforces, such as a federal screening workforce that assumed security screening responsibilities at airports nationwide, and the department has about 20,000 agents to patrol U.S. land borders.
DHS created new programs and offices, or expanded existing ones, to implement key homeland security responsibilities, such as establishing the United States Computer Emergency Readiness Team to, among other things, coordinate the nation’s efforts to prepare for, prevent, and respond to cyber threats to systems and communications networks. DHS also expanded programs for identifying and removing aliens subject to removal from the United States and for preventing unauthorized aliens from entering the country. In addition, DHS issued policies and procedures addressing, among other things, the screening of passengers at airport checkpoints, the inspection of travelers seeking entry into the United States, and the assessment of immigration benefit applications and processes for detecting possible fraud. Establishing these elements and others is an important accomplishment and has been critical for the department to position and equip itself for fulfilling its homeland security missions and functions. However, more work remains for DHS to address gaps and weaknesses in its current operational and implementation efforts, and to strengthen the efficiency and effectiveness of those efforts to achieve its full potential. For example, we have reported that many DHS programs and investments have experienced cost overruns, schedule delays, and performance problems, including, for instance, DHS’s recently cancelled technology program for securing U.S. borders, known as the Secure Border Initiative Network, and some technologies for screening passengers at airport checkpoints. Further, with respect to the cargo advanced automated radiography system to detect certain nuclear materials in vehicles and containers at ports, DHS pursued the acquisition and deployment of the system without fully understanding that it would not fit within existing inspection lanes at ports of entry. DHS subsequently canceled the program.
DHS also has not yet fully implemented its roles and responsibilities for developing and implementing key homeland security programs and initiatives. For example, DHS has not yet developed a set of target capabilities for disaster preparedness or established metrics for assessing those capabilities to provide a framework for evaluating preparedness, as required by the Post-Katrina Emergency Management Reform Act. Our work has shown that DHS should take additional action to improve the efficiency and effectiveness of a number of its programs and activities by, for example, improving program management and oversight, and better assessing homeland security requirements, needs, costs, and benefits, such as those for key acquisition and technology programs. Table 1 provides examples of key progress and work remaining in DHS’s functional mission areas, with an emphasis on work we completed since 2008. Impacting the department’s ability to efficiently and effectively satisfy its missions are: (1) the need to integrate and strengthen its management functions; (2) the need for increased utilization of performance assessments; (3) the need for an enhanced use of risk information to inform planning, programming, and investment decision-making; (4) limitations in effective sharing and use of terrorism-related information; (5) partnerships that are not sustained or fully leveraged; and (6) limitations in developing and deploying technologies to meet mission needs. DHS made progress in addressing these areas, but more work is needed, going forward, to further mitigate these challenges and their impact on DHS’s mission implementation. For instance, DHS strengthened its performance measures in recent years and linked its measures to the QHSR’s missions and goals. However, DHS and its components have not yet developed measures for assessing the effectiveness of key homeland security programs, such as programs for securing the border and preparing the nation for emergency incidents. 
For example, with regard to checkpoints DHS operates on U.S. roads to screen vehicles for unauthorized aliens and contraband, DHS established three performance measures to report the results of checkpoint operations. However, the measures did not indicate if checkpoints were operating efficiently and effectively and data reporting and collection challenges hindered the use of results to inform Congress and the public on checkpoint performance. Moreover, DHS has not yet established performance measures to assess the effectiveness of its programs for investigating alien smuggling operations and foreign nationals who overstay their authorized periods of admission to the United States, making it difficult for these agencies to determine progress made in these areas and evaluate possible improvements. Further, DHS and its component agencies developed strategies and tools for conducting risk assessments. For example, DHS has conducted risk assessments of various surface transportation modes, such as freight rail, passenger rail, and pipelines. However, the department needs to strengthen its use of risk information to inform its planning and investment decision-making. For example, DHS could better use risk information to plan and prioritize security measures and investments within and across its mission areas, as the department cannot secure the nation against every conceivable threat. In addition, DHS took action to develop and deploy new technologies to help meet its homeland security missions. However, in a number of instances DHS pursued acquisitions without ensuring that the technologies met defined requirements, conducting and documenting appropriate testing and evaluation, and performing cost-benefit analyses, resulting in important technology programs not meeting performance expectations. 
For example, in 2006, we recommended that DHS’s decision to deploy next-generation radiation-detection equipment, or advanced spectroscopic portals, used to detect smuggled nuclear or radiological materials, be based on an analysis of both the benefits and costs and a determination of whether any additional detection capability provided by the portals was worth their additional cost. DHS subsequently issued a cost-benefit analysis, but we reported that this analysis did not provide a sound analytical basis for DHS’s decision to deploy the portals. In June 2009, we also reported that an updated cost-benefit analysis might show that DHS’s plan to replace existing equipment with advanced spectroscopic portals was not justified, particularly given the marginal improvement in detection of certain nuclear materials required of advanced spectroscopic portals and the potential to improve the current-generation portal monitors’ sensitivity to nuclear materials, most likely at a lower cost. In July 2011, DHS announced that it would end the advanced spectroscopic portal project as originally conceived given the challenges the program faced. As we have previously reported, while it is important that DHS continue to work to strengthen each of its functional areas, it is equally important that these areas be addressed from a comprehensive, departmentwide perspective to help mitigate longstanding issues that have impacted the department’s progress. Our work at DHS has identified several key themes—leading and coordinating the homeland security enterprise, implementing and integrating management functions for results, and strategically managing risks and assessing homeland security efforts—that have impacted the department’s progress since it began operations. These themes provide insights that can inform DHS’s efforts, moving forward, as it works to implement its missions within a dynamic and evolving homeland security environment. 
DHS made progress and has had successes in all of these areas, but our work found that these themes have been at the foundation of DHS’s implementation challenges, and need to be addressed from a departmentwide perspective to position DHS for the future and enable it to satisfy the expectations set for it by the Congress, the administration, and the country. Leading and coordinating the homeland security enterprise. While DHS is one of a number of entities with a role in securing the homeland, it has significant leadership and coordination responsibilities for managing efforts across the homeland security enterprise. To satisfy these responsibilities, it is critically important that DHS develop, maintain and leverage effective partnerships with its stakeholders, while at the same time addressing DHS-specific responsibilities in satisfying its missions. Before DHS began operations, we reported that the quality and continuity of the new department’s leadership would be critical to building and sustaining the long-term effectiveness of DHS and achieving homeland security goals and objectives. We further reported that to secure the nation, DHS must form effective and sustained partnerships between components and also with a range of other entities, including federal agencies, state and local governments, the private and nonprofit sectors, and international partners. DHS has made important strides in providing leadership and coordinating efforts. For example, it has improved coordination and clarified roles with state and local governments for emergency management. DHS also strengthened its partnerships and collaboration with foreign governments to coordinate and standardize security practices for aviation security. However, DHS needs to take additional action to forge effective partnerships and strengthen the sharing and utilization of information, which has affected its ability to effectively satisfy its missions. 
For example, we reported that the expectations of private sector stakeholders have not been met by DHS and its federal partners in areas related to sharing information about cyber-based threats to critical infrastructure. Without improvements in meeting private and public sector expectations for sharing cyber threat information, private-public partnerships will remain less than optimal, and there is a risk that owners of critical infrastructure will not have the information and mechanisms needed to thwart sophisticated cyber attacks that could have catastrophic effects on our nation’s cyber-reliant critical infrastructure. Moreover, we reported that DHS needs to continue to streamline its mechanisms for sharing information with public transit agencies to reduce the volume of similar information these agencies receive from DHS, making it easier for them to discern relevant information and take appropriate actions to enhance security. In 2005, we designated information sharing for homeland security as high risk because the federal government faced serious challenges in analyzing information and sharing it among partners in a timely, accurate, and useful way. Gaps in sharing, such as agencies’ failure to link information about the individual who attempted to conduct the December 25, 2009, airline bombing, prevented the individual from being included on the federal government’s consolidated terrorist watchlist, a tool used by DHS to screen for persons who pose a security risk. The federal government and DHS have made progress, but more work remains for DHS to streamline its information sharing mechanisms and better meet partners’ needs. Moving forward, it will be important that DHS continue to enhance its focus and efforts to strengthen and leverage the broader homeland security enterprise, and build off the important progress that it has made thus far. 
In addressing ever-changing and complex threats, and with the vast array of partners with which DHS must coordinate, continued leadership and stewardship will be critical in achieving this end. Implementing and integrating management functions for results. Following its establishment, the department focused its efforts primarily on implementing its various missions to meet pressing homeland security needs and threats, and less on creating and integrating a fully and effectively functioning department from 22 disparate agencies. This initial focus on mission implementation was understandable given the critical homeland security needs facing the nation after the department’s establishment, and the enormous challenge posed by creating, integrating, and transforming a department as large and complex as DHS. As the department matured, it has put into place management policies and processes and made a range of other enhancements to its management functions—acquisition, information technology, financial, and human capital management. However, DHS has not always effectively executed or integrated these functions. In 2003, we designated the transformation and integration of DHS as high risk because DHS had to transform 22 agencies into one department, and failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. Eight years later, DHS remains on our high-risk list. DHS has demonstrated strong leadership commitment to addressing its management challenges and has begun to implement a strategy to do so. Further, DHS developed various management policies, directives, and governance structures, such as acquisition and information technology management policies and controls, to provide enhanced guidance on investment decision making. 
DHS also reduced its financial management material weaknesses in internal control over financial reporting and developed strategies to strengthen human capital management, such as its Workforce Strategy for Fiscal Years 2011-2016. However, DHS needs to continue to demonstrate sustainable progress in addressing its challenges, as these issues have contributed to schedule delays, cost increases, and performance problems in major programs aimed at delivering important mission capabilities. For example, in September 2010, we reported that the Science and Technology Directorate’s master plans for conducting operational testing of container security technologies did not reflect all of the operational scenarios that U.S. Customs and Border Protection was considering for implementation. In addition, when it developed the US-VISIT program, DHS did not sufficiently define what capabilities and benefits would be delivered, by when, and at what cost, and the department has not yet determined how to deploy a biometric exit capability under the program. Moreover, DHS does not yet have enough skilled personnel to carry out activities in various areas, such as acquisition management, and has not yet implemented an integrated financial management system, impacting its ability to have ready access to reliable, useful, and timely information for informed decision making. Moving forward, addressing these management challenges will be critical for DHS’s success, as will be the integration of these functions across the department to achieve efficiencies and effectiveness. Strategically managing risks and assessing homeland security efforts. Forming a new department while working to implement statutorily mandated and department-initiated programs and responding to evolving threats was, and is, a significant challenge facing DHS. 
Key threats, such as attempted attacks against the aviation sector, have impacted and altered DHS’s approaches and investments, such as changes DHS made to its processes and technology investments for screening passengers and baggage at airports. It is understandable that these threats had to be addressed immediately as they arose. However, limited strategic and program planning by DHS and limited assessment to inform approaches and investment decisions have contributed to programs not meeting strategic needs or not doing so in an efficient manner. For example, as we reported in July 2011, the Coast Guard’s planned acquisitions through its Deepwater Program, which began before DHS’s creation and includes efforts to build or modernize ships and aircraft and supporting capabilities that are critical to meeting the Coast Guard’s core missions in the future, are unachievable due to cost growth, schedule delays, and affordability issues. In addition, because FEMA has not yet developed a set of target disaster preparedness capabilities and a systematic means of assessing those capabilities, as required by the Post-Katrina Emergency Management Reform Act and Presidential Policy Directive 8, it cannot effectively evaluate and identify key capability gaps and target limited resources to fill those gaps. Further, DHS has made important progress in analyzing risk across sectors, but it has more work to do in using this information to inform planning and resource allocation decisions. Risk management has been widely supported by Congress and DHS as a management approach for homeland security, enhancing the department’s ability to make informed decisions and prioritize resource investments. Since DHS does not have unlimited resources and cannot protect the nation from every conceivable threat, it must make risk-informed decisions regarding its homeland security approaches and strategies. 
Moreover, we have reported on the need for enhanced performance assessment across DHS’s missions, that is, for evaluating existing programs and operations to determine whether they are operating as intended or are in need of change. Information on the performance of programs is critical for helping the department, Congress, and other stakeholders more systematically assess strengths and weaknesses and inform decision making. In recent years, DHS has placed an increased emphasis on strengthening its mechanisms for assessing the performance and effectiveness of its homeland security programs. For example, DHS established new performance measures, and modified existing ones, to better assess many of its programs and efforts. However, our work has found that DHS continues to miss opportunities to optimize performance across its missions because of a lack of reliable performance information or assessment of existing information; evaluation of feasible alternatives; and, as appropriate, adjustment of programs or operations that are not meeting mission needs. For example, DHS’s program for research, development, and deployment of passenger checkpoint screening technologies lacked a risk-based plan and performance measures to assess the extent to which checkpoint screening technologies were achieving the program’s security goals, and thereby reducing or mitigating the risk of terrorist attacks. As a result, DHS had limited assurance that its strategy targeted the most critical risks and that it was investing in the most cost-effective new technologies or other protective measures. As the department further matures and seeks to optimize its operations, DHS will need to look beyond immediate requirements; assess programs’ sustainability across the long term, particularly in light of constrained budgets; and evaluate tradeoffs within and among programs across the homeland security enterprise. 
Doing so should better equip DHS to adapt and respond to new threats in a sustainable manner as it works to address existing ones. Given DHS’s role and leadership responsibilities in securing the homeland, it is critical that the department’s programs and activities are operating as efficiently and effectively as possible, are sustainable, and continue to mature, evolve and adapt to address pressing security needs. DHS has made significant progress throughout its missions since its creation, but more work is needed to further transform the department into a more integrated and effective organization. DHS has also made important progress in strengthening partnerships with stakeholders, improving its management processes and sharing of information, and enhancing its risk management and performance measurement efforts. These accomplishments are especially noteworthy given that the department has had to work to transform itself into a fully functioning cabinet department while implementing its missions—a difficult undertaking for any organization and one that can take years to achieve even under less daunting circumstances. Impacting the department’s efforts have been a variety of factors and events, such as attempted terrorist attacks and natural disasters, as well as new responsibilities and authorities provided by Congress and the administration. These events collectively have forced DHS to continually reassess its priorities and reallocate resources as needed, and have impacted its continued integration and transformation. Given the nature of DHS’s mission, the need to remain nimble and adaptable to respond to evolving threats, as well as to work to anticipate new ones, will not change and may become even more complex and challenging as domestic and world events unfold, particularly in light of reduced budgets and constrained resources. 
To better position itself to address these challenges, our work has shown that DHS should place an increased emphasis and take additional action in supporting and leveraging the homeland security enterprise, managing its operations to achieve needed results, and strategically planning for the future while assessing and adjusting, as needed, what exists today. Addressing these issues will be critically important for the department to strengthen its homeland security programs and operations. Eight years after its establishment and 10 years after the September 11, 2001, terrorist attacks, DHS has indeed made significant strides in protecting the nation, but has yet to reach its full potential. Chairman King, Ranking Member Thompson, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or berrickc@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Rebecca Gambler, Assistant Director; Melissa Bogar; Susan Czachor; Sarah Kaczmarek; Tracey King; Taylor Matheson; Jessica Orr; and Meghan Squires. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The terrorist attacks of September 11, 2001, led to profound changes in government agendas, policies, and structures to confront the homeland security threats facing the nation. Most notably, the Department of Homeland Security (DHS) began operations in 2003 with key missions that included preventing terrorist attacks from occurring in the United States, reducing the country's vulnerability to terrorism, and minimizing the damage from any attacks that may occur. DHS is now the third-largest federal department, with more than 200,000 employees and an annual budget of more than $50 billion. Since 2003, GAO has issued over 1,000 products on DHS's operations in areas such as transportation security and emergency management. As requested, this testimony addresses DHS's progress and challenges in implementing its homeland security missions since it began operations, and issues affecting implementation efforts. This testimony is based on a report GAO issued in September 2011, which assessed DHS's progress in implementing its homeland security functions and work remaining. Since it began operations in 2003, DHS has implemented key homeland security operations and achieved important goals and milestones in many areas to create and strengthen a foundation to reach its potential. As it continues to mature, however, more work remains for DHS to address gaps and weaknesses in its current operational and implementation efforts, and to strengthen the efficiency and effectiveness of those efforts to achieve its full potential. DHS's accomplishments include developing strategic and operational plans; deploying workforces; and establishing new, or expanding existing, offices and programs. 
For example, DHS (1) issued plans to guide its efforts, such as the Quadrennial Homeland Security Review, which provides a framework for homeland security, and the National Response Framework, which outlines disaster response guiding principles; (2) successfully hired, trained, and deployed workforces, such as a federal screening workforce to assume security screening responsibilities at airports nationwide; and (3) created new programs and offices to implement its homeland security responsibilities, such as establishing the U.S. Computer Emergency Readiness Team to help coordinate efforts to address cybersecurity threats. Such accomplishments are noteworthy given that DHS has had to work to transform itself into a fully functioning department while implementing its missions--a difficult undertaking that can take years to achieve. While DHS has made progress, its transformation remains high risk due to its management challenges. Examples of progress made and work remaining include: Border security. DHS implemented the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program to verify the identities of foreign visitors entering and exiting the country by processing biometric and biographic information. However, DHS has not yet determined how to implement a biometric exit capability and has taken action to address a small portion of the estimated overstay population in the United States (individuals who legally entered the country but then overstayed their authorized periods of admission). Aviation security. DHS developed and implemented Secure Flight, a program for screening airline passengers against terrorist watchlist records. DHS also developed new programs and technologies to screen passengers, checked baggage, and air cargo. 
However, DHS does not yet have a plan for deploying checked baggage screening technologies to meet recently enhanced explosive detection requirements, a mechanism to verify the accuracy of data to help ensure that air cargo screening is being conducted at reported levels, or approved technology to screen cargo once it is loaded onto a pallet or container. Emergency preparedness and response. DHS issued the National Preparedness Guidelines that describe a national framework for capabilities-based preparedness, and a Target Capabilities List to provide a national-level generic model of capabilities defining all-hazards preparedness. DHS is also finalizing a National Disaster Recovery Framework. However, DHS needs to strengthen its efforts to assess capabilities for all-hazards preparedness, and develop a long-term recovery structure to better align timing and involvement with state and local governments' capacity. Chemical, biological, radiological and nuclear (CBRN) threats. DHS assessed risks posed by CBRN threats and deployed capabilities to detect CBRN threats. However, DHS should work to improve its coordination of CBRN risk assessments, and identify monitoring mechanisms for determining progress made in implementing the global nuclear detection strategy. GAO's work identified three themes at the foundation of DHS's challenges: Leading and coordinating the homeland security enterprise; Implementing and integrating management functions for results; and Strategically managing risks and assessing homeland security efforts. This testimony contains no new recommendations.
The objective of the Navy’s ship maintenance program is to perform all necessary maintenance consistent with available funding and provide reasonable assurance that ships will be available for required operations. Ship maintenance, conducted during periods the Navy calls availabilities, includes three types of requirements: time-directed, condition-based, and modernization. Time-directed requirements include those that are periodic in nature and are based on elapsed time or recurrent operations. Condition-based requirements, which are based on the physical condition of the ship, are usually identified by the ship’s crew or inspection teams. Lastly, modernization requirements include changes that either add new capability or improve reliability and maintainability of existing systems. Officials of the Shipbuilders Council of America, the South Tidewater Association of Ship Repairers, private shipyards, and ship repair companies in the Norfolk area have expressed concern that the Navy’s implementation of its policies and procedures favored the Norfolk Naval Shipyard, contributing to the private sector’s declining ship maintenance workload in the Norfolk area during fiscal year 1998. The Navy’s allocation of ship maintenance workload in the Norfolk, Virginia, area is guided by legislative requirements and established Navy policy objectives, such as (1) retaining a certain level of public sector capability, (2) allowing sailors to remain at their home ports when shorter repairs are being done, and (3) achieving economic and efficient public depot maintenance operations. Historically, large ship dry-dockings and nuclear ship maintenance projects in the Norfolk, Virginia, area have usually been allocated to either Newport News or the Norfolk Naval Shipyard, while maintenance projects for conventional surface ships are usually contracted with private ship repair companies or allocated to Norfolk Naval Shipyard. 
In addition, Newport News performs some medium to small conventional ship maintenance work for the Navy. Further, in making workload allocation decisions, Navy officials stated they also consider the following:

The statutory requirement of 10 U.S.C. 2466, more commonly called the 50-50 rule, under which the Department of the Navy may contract not more than 50 percent of the funds made available for depot-level maintenance with the private sector. This requirement applies to the whole Navy and to all types of depot maintenance at all locations. The 50-50 requirement excludes funds obligated for the (1) procurement of major modifications or upgrades of weapon systems that are designed to improve performance or (2) nuclear refueling of an aircraft carrier. To more fully identify all sources of funding for ship maintenance and repair work in the Norfolk area, we included obligated funds for these activities in our analysis for this report.

The Navy’s home port policy, under which the Navy, where possible, does ship repair and maintenance work of 6 months or less at the ship’s home port, thus improving the ship crew’s quality of life by reducing time away from home. If the estimated project is to take 6 months or less, the Navy solicits proposals for maintenance contracts from private shipyards and ship repair companies located near the ship’s home port. If the estimate is more than 6 months, the Navy expands the solicitation to include additional ship repair companies operating on the coast—the Atlantic coast for the Atlantic Fleet.

Core work requirements, under which the Navy tries to maintain the required capabilities within organic Navy shipyards to meet readiness and sustainability requirements of the weapon systems that support the wartime and contingency scenarios. According to Navy documents, core capabilities consist of the minimum facilities, equipment, and skilled personnel necessary to meet these readiness and sustainability requirements. 
The Navy’s guaranteed manday policy, under which Navy officials try to match the workload to the Norfolk Naval Shipyard workforce because the shipyard’s workforce and related costs have already been committed in the Navy’s budget.

In terms of reported obligations for ship maintenance work in the Norfolk area during fiscal years 1994 through 1998, the largest amount of ship maintenance funding went to the private shipyards and repair companies. Among the private sector activities, Newport News received the largest portion of the obligated funds. Reported obligations for the smaller private shipyards and repair companies fluctuated from year to year but, for a variety of reasons, were proportionately much less in fiscal year 1998 than in other recent years and were also less than the Navy initially scheduled for fiscal year 1998. During fiscal years 1994 through 1998, the Navy reported obligating nearly $6.9 billion for ship maintenance work in the Norfolk area. It provided Norfolk Naval Shipyard with about 31.1 percent of this work, and private shipyards and repair companies were allocated 68.9 percent. (See table 1.) Table 1 also shows that reported obligations for fiscal year 1998 were much higher than in previous years, with the largest percentage obligated to private shipyards and repair companies. Most of the 1998 obligations went to Newport News to fund a complex overhaul and nuclear refueling of the U.S.S. Nimitz. In the years in which Newport News has such a large workload, major funding spikes occur. In contrast, there was no similar workload assigned to Newport News in fiscal year 1996. Navy officials told us that funding to Newport News was smaller in 1996 because (1) there was less nuclear ship maintenance work—work historically allocated to either Newport News or Norfolk Naval Shipyard—and (2) Newport News was already operating near full capacity. 
Newport News is a large nuclear-capable yard and is capable of doing ship repair work that the other, smaller shipyards and repair companies in the Norfolk area are not. Smaller private shipyards and repair companies in the area do repair work on conventional ships and are not qualified to do nuclear-related work. Therefore, to make our private sector analysis more meaningful, we separated the Navy’s reported obligations according to whether they were provided to Norfolk Naval Shipyard, Newport News, or other smaller shipyards and repair companies. As shown in table 2, Newport News received the largest obligations in all but 1 year between fiscal years 1994 and 1998. Table 2 also shows that the smaller private shipyards and repair companies received a much lower percentage of the total obligations in fiscal year 1998 than in other years. This was partly because there was less conventional ship maintenance work—work historically allocated to smaller shipyards and repair companies or to the Norfolk Naval Shipyard. For example, the Navy reported that the total number of ships in the Atlantic Fleet decreased from 191 in fiscal year 1994 to 165 in fiscal year 1998. Similarly, the reported number of conventional steam-powered ships, maintenance-intensive ships that smaller ship repair companies have historically worked on, decreased in the Atlantic Fleet from 38 to 26 between fiscal years 1994 and 1997, and was projected to decrease to 23 ships during fiscal year 1998. According to Navy officials, conventional steam-powered ships require more maintenance than other ships because they are older and contain more mechanical parts than newer ships, which have more reliable component systems that are easier to remove, replace with new component systems, and repair elsewhere. 
During fiscal year 1998, the Navy provided the smaller private shipyards and repair companies in the Norfolk area less conventional maintenance work than initially scheduled in its private sector planning report dated September 23, 1996. This change occurred largely for operational reasons and requirement changes and because four conventional maintenance projects originally scheduled to go to the private sector were reassigned to the Norfolk Naval Shipyard to meet workload targets established for the public shipyard under the Navy’s guaranteed manday policy. Appendix III details the final distribution of the CNO projects scheduled for fiscal year 1998. In September 1996, Naval Sea Systems Command (NAVSEA) issued the Navy’s private sector depot-level planning report for fiscal years 1997 and 1998. The report contained CNO conventional maintenance projects that were not yet funded but were tentatively planned for allocation to the private shipyards and repair companies in the Norfolk area in fiscal years 1997 and 1998. Navy officials told us that the unfunded schedule was recognized as being subject to change and changes did subsequently occur. Nonetheless, some private shipyards and repair companies expected to receive larger amounts of work than they ultimately obtained because of the information in the report. In fiscal year 1998, the Navy reduced the size of the maintenance package for seven CNO maintenance projects and deferred one CNO project to fiscal year 1999 because several ships scheduled for maintenance needed less maintenance than expected and other Atlantic Fleet ships needed more maintenance than scheduled, requiring the Navy to transfer additional ship maintenance funds to those projects. In addition, the Navy canceled three scheduled projects: the U.S.S. Roberts project was canceled because the ship needed less maintenance than expected, the U.S.S. Radford project was canceled because the ship had operational commitments, and the U.S.S. 
Guam project was canceled because the ship was decommissioned in August 1998. During fiscal year 1998, the Navy also assigned the Norfolk Naval Shipyard four CNO maintenance projects initially scheduled for competition in the private sector. This was done to meet workload targets established for Norfolk Naval Shipyard under the Navy’s guaranteed manday policy. The objective of the Navy’s guaranteed manday policy is to match the workload to Norfolk Naval Shipyard’s workforce during the budget execution year, since the shipyard’s workforce figures and related costs have already been committed in the Navy’s budget and workload reductions would result in losses. Based on previous work, we believe that the guaranteed manday policy is generally sound from a cost and operational standpoint because, without it, the Norfolk Naval Shipyard would lose money as a result of having work below the level required to support its budgeted workforce. Prior to fiscal year 1995, the Navy’s shipyards reported significant operating losses. These losses were partly due to (1) fleet and system commanders’ operational and administrative decisions that resulted in less work being assigned to the public shipyards than was projected and budgeted for and (2) the Navy’s lack of flexibility to quickly deviate from the budgeted workforce because of Federal Civil Service requirements that workers be notified before they can be separated. To minimize future departures from the budgeted workload, the Navy implemented the guaranteed manday program. When it is determined that the number of mandays originally budgeted for the Norfolk Naval Shipyard will not be utilized, officials of CNO, NAVSEA, the Atlantic Fleet, and Norfolk Naval Shipyard work together to identify alternatives for realigning the maintenance workload to better utilize the Norfolk Naval Shipyard budgeted workforce. 
These alternatives include adjusting the scope of work for selected maintenance projects, shifting funding from other Navy programs to ship maintenance, and moving planned workload from private shipyards and repair companies to Norfolk Naval Shipyard. We believe these initiatives are consistent with Navy policies and without them the Norfolk Naval Shipyard would lose money as a result of having work below the level required to support its budgeted workforce. During fiscal years 1994 through 1998, the Navy transferred or reprogrammed appropriated funds into and out of its ship depot maintenance program. However, the amounts transferred or reprogrammed out of the Navy-wide ship depot maintenance program generally did not adversely affect the Atlantic Fleet’s ship depot maintenance program, where more funds were transferred or reprogrammed into the program than were moved out. During fiscal years 1994 through 1998, the Navy transferred or reprogrammed about $1.2 billion (about 10 percent of the total) appropriated for its ship depot maintenance program to other Navy programs. The majority of the transfers or reprogrammings were due to one-time adjustments—changes that reflected the Navy’s decisions to move funds to and from various Navy accounts based on emerging or unforeseen requirements. The Navy made these adjustments because of (1) force structure reductions, (2) operations tempo increases, (3) increased recruiting goals, and (4) administrative support needs. Although there was a Navy-wide reduction in appropriated ship depot maintenance funds due to program transfers or reprogrammings, the reverse occurred in the Atlantic Fleet. During fiscal years 1994 through 1998, these transfers produced a net increase in the Atlantic Fleet’s ship maintenance funding. 
An analysis of the Fleet’s data by fiscal year shows only one year in which a net decrease to the program occurred: fiscal year 1998, when the Navy moved $7.1 million (less than 1 percent) of the Atlantic Fleet’s ship maintenance funds to other programs. (See table 3.) The allocation of ship maintenance workload in the Norfolk, Virginia, area is guided by legislative requirements and established Navy policy objectives. During fiscal years 1994 through 1998, the majority of ship maintenance funding allocated in the Norfolk area went to the private sector. While Newport News received the largest portion of that private funding, obligations to other smaller shipyards and repair companies have fluctuated from year to year. The greatest change occurred in fiscal year 1998, when these smaller companies received less than in other years and less than the Navy initially scheduled. This was largely because the conventional workload that traditionally goes to these companies is declining, scheduled maintenance and operational requirements changed during fiscal year 1998, and four conventional maintenance projects originally scheduled to go to the private sector were reassigned to the Norfolk Naval Shipyard to stabilize and achieve more efficient operations at the public shipyard. Lastly, while the Navy did reprogram ship depot maintenance funds to meet other priorities, the Atlantic Fleet’s ship depot maintenance program was not adversely affected because it received a slight increase over the amount budgeted during fiscal years 1994 through 1998. We requested comments on a draft of this report from the Secretary of Defense. On January 21, 1999, DOD and NAVSEA officials said that they concurred with the report. Additionally, on February 1, 1999, Atlantic Fleet officials stated that the Fleet concurred with the report. 
We are sending copies of this report to the Chairmen and Ranking Members, Senate Committees on Armed Services and on Appropriations, and the House Committees on National Security and on Appropriations; the Secretaries of Defense and the Navy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning the report. Major contributors to this report are listed in appendix IV. During our review, we interviewed and obtained data from Department of the Navy officials, including from the Office of the Assistant Secretary of the Navy for Research, Development, and Acquisition; the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller; the Office of the Deputy Chief of Naval Operations for Logistics; the Naval Sea Systems Command (NAVSEA); the Supervisor of Shipbuilding (SUPSHIP) Portsmouth; SUPSHIP-Newport News; the Atlantic Fleet; the Military Sealift Command (MSC); and the Norfolk Naval Shipyard. We also obtained data from the Department of Defense (DOD) Inspector General. In the private shipbuilding and repair industry, we interviewed industry officials of the Shipbuilders Council of America; the South Tidewater Association of Ship Repairers; Master Ship Repair Agreement (Master Ship Repair) shipyards; and Agreement for Boat Repair (Boat Repair) companies in the Norfolk area. Table I.1 lists the master ship repair and boat repair companies we visited:
Colonna’s Shipyard, Inc.
Earl Industries, Inc.
Atlantic Ordnance and Gyro, Inc.
Marine Hydraulics International, Inc.
Davis Boat Works, Inc.
Holmes Brothers Enterprises, Inc.
Lyon Shipyard, Inc.
Newport News Shipbuilding and Drydock Company (Newport News)
Pure Water Technologies, Inc.
We identified and reviewed DOD and Navy policies and instructions that influence the allocation of ship maintenance work to public and private ship repair facilities. 
Additionally, we interviewed Navy officials to identify how the Navy actually implemented its policies and procedures in the Norfolk area during fiscal years 1997 and 1998. We also interviewed Navy and industry officials to determine the reasons for any variances and any concerns that may exist with the Navy’s current policies and procedures. To identify the level of ship maintenance work allocated to the Norfolk Naval Shipyard and the private sector during fiscal years 1994 through 1998, we focused on identifying all sources of funding for ship maintenance work in the Norfolk area, including CNO maintenance projects, emergent and miscellaneous availabilities, modernization projects, and MSC projects. To identify obligations for these funding sources, we used funding data from Navy budget documents, program plans, and ship maintenance schedules. Using reported obligations to measure completed ship maintenance work, we performed two data analyses to provide a more effective comparison of the maintenance work done by Norfolk Naval Shipyard and private ship repair companies. First, we determined and analyzed reported obligations for ship maintenance work allocated to Norfolk Naval Shipyard and the private sector during fiscal years 1994 through 1998. Second, we separated reported obligations for Newport News from those for the other smaller shipyards and repair companies in the Norfolk area because Newport News, specializing in nuclear refueling and major overhauls, performs larger and different types of maintenance projects than the other shipyards and repair companies. For the purposes of this review, this separation provided a more meaningful comparison of maintenance workloads allocated to Norfolk Naval Shipyard, Newport News, and other smaller private shipyards and repair companies during fiscal years 1994 through 1998. We could not readily separate conventional ship work from nuclear ship work. 
We did not verify the validity of the Navy’s ship maintenance and repair requirements in the Norfolk area. To contrast the levels of scheduled and actual ship maintenance work completed by private shipyards and repair companies in the Norfolk area during fiscal year 1998, we compared the Navy’s schedule of CNO maintenance projects with the actual projects provided to the private sector. We examined Navy planning documents, including the Navy’s schedule of CNO maintenance projects and the NAVSEA Notice 4710, entitled Private Sector Depot Level Planning Report for Fiscal Years 1997 and 1998. The 4710 notice contained the CNO availabilities scheduled for the private sector and specified the solicitation area, contract type, solicitation method, and milestones for 1998 maintenance projects. We examined NAVSEA and Atlantic Fleet data that identified completed CNO maintenance projects during fiscal year 1998 and where the work was performed. We compared and analyzed the Navy’s schedules of CNO maintenance projects with the actual projects completed by the private shipyards and repair companies in the Norfolk area during that year to identify variances. We discussed variances with NAVSEA and Atlantic Fleet officials to determine the reasons some of these scheduled maintenance projects were descoped, canceled, or transferred. To identify the extent to which ship maintenance workloads in the Norfolk area were affected by the migration of funding from the ship depot maintenance program since fiscal year 1994, we examined a variety of Navy budget documents. We examined and analyzed the budget request, appropriated, current estimate, and actual obligated funding levels during fiscal years 1994 through 1998 for Navy-wide and Atlantic Fleet ship depot maintenance programs. We identified and analyzed the differences between the annual appropriated, current estimate, and actual obligated funding levels for ship depot maintenance. 
We interviewed Navy officials and examined Navy documents to determine the reasons for differences and their impact on the ship depot maintenance program in the Norfolk area. In performing this review, we used the same budget and accounting systems, reports, and statistics DOD and the Navy use to manage and monitor their ship depot maintenance program. Dollar amounts shown in the report are the Navy’s reported obligations to ship repair and maintenance facilities in the Norfolk area and do not reflect actual distribution of funds. We did not independently determine the reliability of the reported obligation information. However, our recent audit of the federal government’s financial statements, including DOD’s and the Navy’s statements, questioned the reliability of reported obligation information because not all obligations and expenditures are recorded to specific budgetary accounts. We conducted our review from June 1998 to January 1999 in accordance with generally accepted government auditing standards. Reported obligations for ship maintenance work provided to the Norfolk Naval Shipyard, Newport News, and other smaller private shipyards and repair facilities during fiscal years 1994 through 1998 are presented in the following tables.
Table II.1: Reported Obligations for Ship Maintenance Work Provided to the Norfolk Naval Shipyard for Fiscal Years 1994 Through 1998
Less Norfolk Naval Shipyard work contracted to private ship repair companies: ($4.5) (7.7) (10.2) (11.4) (29.9)
Table II.2: Reported Obligations for Ship Maintenance Work Provided to Newport News for Fiscal Years 1994 Through 1998
According to Navy officials, the Navy obligated less ship maintenance funding to Newport News in fiscal year 1996 than in any other year during this period because there was less nuclear ship maintenance work, which is historically allocated to Newport News or Norfolk Naval Shipyard. 
Most of the reported obligations for ship maintenance work provided to Newport News in fiscal year 1998, nearly $1.3 billion, were for a complex overhaul and nuclear refueling of the U.S.S. Nimitz.
Note 1: Obligations are for maintenance, modernization, and materials work provided to the smaller private ship repair companies in the Norfolk area. Dollar amounts do not include obligations for ship maintenance work performed outside the Norfolk, Virginia, area.
Note 2: Smaller private shipyards and repair companies do repair work on conventional ships and are not qualified to do nuclear-related work.
In discussing the previous data with Navy officials, we were told the following: CNO maintenance projects declined significantly after peaking in fiscal year 1995, as did obligations for CNO maintenance projects provided to smaller ship repair companies. Navy officials said that the decline in requirements since 1995 was due primarily to the decreasing number of ships in the Atlantic Fleet, including steam-powered ships, which have historically been allocated to smaller ship repair companies. The Navy also reduced the size of 19 CNO maintenance projects scheduled for private shipyards and repair companies in fiscal years 1997 and 1998. Further, during fiscal year 1998, the Navy deferred one and canceled three scheduled CNO projects due to changing maintenance requirements and priorities and transferred four projects to Norfolk Naval Shipyard to meet workload targets established under its guaranteed manday policy. According to Navy officials, obligations for ship maintenance projects fluctuated during the period because their requirements historically vary from year to year. 
Additionally, reported obligations for Fleet maintenance projects peaked in fiscal year 1998 because the Navy reduced the size of the maintenance package for several CNO projects that were reclassified as Fleet maintenance projects, thus increasing the number of Fleet projects and related obligations. Historically, maintenance workload requirements for MSC ships fluctuate from year to year, and the variances between 1994 and 1998 were typical. Norfolk Naval Shipyard work contracted to ship repair companies increased steadily during fiscal years 1994 through 1998. Likewise, during this period, the Norfolk Naval Shipyard gradually increased its use of temporary contract workers to complete ship maintenance projects. According to Navy officials, in selected cases it is cost-effective for Norfolk Naval Shipyard to contract with private shipyards and repair companies for skilled workers during periods of need rather than employing them full-time as Norfolk Naval Shipyard employees. Additionally, decreased personnel ceilings in the Navy have limited the naval shipyards’ workforce. Obligations for other ship maintenance work, which includes ship modernization projects, fluctuated during the period because the Navy contracted with smaller private shipyards and repair companies for installation of vertical launch systems in fiscal years 1994 and 1995 and pollution abatement systems in fiscal year 1997. Further, Navy officials said that funding requirements for the Navy’s modernization program have gradually declined as older ships were decommissioned and newer ships deployed.
[Appendix III table notes: Newport News won the maintenance contract for the U.S.S. Ashland; rescheduled for fiscal year 1999; maintenance projects for these ships were descoped before the Atlantic Fleet sent them to Norfolk Naval Shipyard.]
Pursuant to a congressional request, GAO reviewed the Navy's declining ship maintenance workload, focusing on: (1) the Navy's policies and procedures for allocating ship maintenance work to public and private facilities in the Norfolk area; (2) ship maintenance and modernization funding obligated to the Norfolk Naval Shipyard and private ship repair companies during fiscal years (FY) 1994 through 1998; and (3) the extent to which the Atlantic Fleet's ship maintenance program has been affected by the movement of funds out of the ship depot maintenance program since FY 1994. GAO noted that: (1) the Navy's allocation of ship maintenance workload in the Norfolk, Virginia, area is guided by legislative requirements and established policy objectives, such as retaining a certain level of public sector capability, allowing sailors to remain at their home ports when shorter repairs are being done, and achieving economic and efficient public depot maintenance operations; (2) during FY 1994 through FY 1998, private shipyards and repair companies in the Norfolk, Virginia, area received proportionately more funding for ship maintenance work than the Navy's Norfolk Naval Shipyard; (3) among the private sector activities, Newport News Shipbuilding and Drydock Company has received the largest portion of ship maintenance funding in the Norfolk area; (4) funding obligated to other smaller shipyards and repair companies has fluctuated from year to year, with the greatest change occurring in FY 1998, when these companies received proportionately much less of the annual ship maintenance funding than in other years and also received less than initially scheduled by the Navy; (5) this was largely because: (a) the conventional workload that traditionally goes to these companies is declining; (b) scheduled maintenance and operational requirements changed during FY 1998; and (c) four conventional maintenance projects originally scheduled to go to the private sector were reassigned to the 
Norfolk Naval Shipyard to stabilize and achieve more economical and efficient operations at that public shipyard; (6) the Navy did move appropriated funds from its ship depot maintenance account during FY 1994 through FY 1998; (7) however, the Atlantic Fleet received a slight increase over the amount budgeted for its ship depot maintenance program during this period; and (8) consequently, the movement of ship depot maintenance funds did not reduce the amount of funds provided to the public and private sectors in the Norfolk, Virginia, area.
In fiscal year 2000, VA’s pharmacy benefit provided approximately 86 million prescriptions at a cost of approximately $2 billion—or about 12 percent of VA’s total health care budget, compared to 6 percent of VA’s total health care budget a decade ago. VA provides outpatient pharmacy services free to veterans receiving medications for treatment of service-connected conditions and to low-income veterans. Other veterans who have prescriptions filled by VA may be charged a copayment for each 30-day supply of medication. Like many health care organizations, VA uses several measures in an effort to improve quality of care and control pharmacy costs. These include (1) implementing a national formulary, which standardizes the list of drugs available; (2) developing clinical guidelines for prescribing drugs; and (3) using compliance programs, such as prior authorization, to encourage or require physicians to prescribe formulary drugs. VA medical centers individually began using formularies as early as 1955 to manage their pharmacy inventories. However, it was not until 40 years later, in September 1995, that VA established a centralized group to manage its pharmacy benefit nationwide. In November 1995, when VISNs were established, VA’s Under Secretary for Health directed each VISN to develop and implement a VISN-wide formulary. To develop their formularies, the VISNs generally combined existing medical center formularies and eliminated rarely prescribed drugs. In 1996, VA was required to improve veterans’ access to care regardless of the region of the United States in which they live. As part of its response, VA implemented a national drug formulary on June 1, 1997, by combining the core set of drugs common to the newly developed VISN formularies. 
VA’s formulary meets the Joint Commission on Accreditation of Healthcare Organizations’ requirements for developing and maintaining an appropriate selection of medications for prescribers to use in treating their patient populations. VA’s formulary lists more than 1,100 unique drugs in 254 drug classes—groups of drugs similar in chemistry, method of action, or purpose of use. After performing reviews of drug classes representing the highest costs and volume of prescriptions, VA decided that some drugs in 4 of its 254 drug classes were therapeutically interchangeable—that is, essentially equivalent in terms of efficacy, safety, and outcomes. This determination allowed VA to select one or more of these drugs for its formulary so that it could seek better prices through competitively bid committed-use contracts. Other therapeutically equivalent drugs in these classes were then excluded from the formulary. These four classes are known as “closed” classes. VA has not made clinical decisions regarding therapeutic interchange in the remaining 250 drug classes, and it does not limit the number of drugs that can be added to these classes. These are known as “open” classes. To manage its pharmacy benefit nationwide, VA established the Pharmacy Benefits Management Strategic Healthcare Group (PBM). PBM is responsible for managing the national formulary list, maintaining databases that reflect drug use, and monitoring the use of certain drugs. PBM also facilitates the addition and deletion of drugs on the national formulary on the basis of safety and efficacy data, determines which drugs are therapeutically interchangeable in order to purchase drugs through competitive bidding, and develops safeguards to protect veterans from the inappropriate use of certain drugs. VISN directors are responsible for implementing and monitoring compliance with the national formulary and ensuring that a nonformulary drug approval process is functioning at each of their medical centers. 
Although VISN and medical center directors are held accountable in annual performance agreements for meeting certain national and local goals, attaining formulary goals has not been part of their performance standards. While VA has made significant progress in establishing a national formulary, its oversight has not been sufficient to ensure that it is fully achieving its national formulary goal of standardizing its drug benefit nationwide. In our January 2001 report, we found three factors that have impeded formulary standardization: (1) medical centers we visited omitted some national formulary drugs from their local formularies, (2) VISNs varied in the number of drugs they added to local formularies to supplement the national formulary without appropriate oversight, and (3) medical centers inappropriately added or deleted drugs in closed classes. Nevertheless, most prescribed drugs were on the national formulary, and prescribers and patients were generally satisfied with the national formulary. The first factor impeding standardization is that medical centers omitted some national formulary drugs from their local formularies. Almost 3 years after VA facilities were directed to make all national formulary drugs available locally, two of the three medical centers we visited in spring of 2000 omitted required drugs from the formularies used by their prescribers. At one medical center, about 25 percent (286 drugs) of the national formulary drugs were not available as formulary choices. These included drugs used to treat high blood pressure, mental disorders, and women’s medical needs. At the second medical center, about 13 percent (147 drugs) of the national formulary drugs were omitted, including drugs used to treat certain types of cancer and others used to treat stomach conditions. 
From October 1999 through March 2000, health care providers at these two medical centers had to obtain nonformulary drug approvals for over 22,000 prescriptions for drugs that should have been available without question because they are on the national formulary. Our analysis showed that at the first center, over 14,000 prescriptions were filled as nonformulary drugs for 91 drugs that should have been on the formulary. At the other medical center, over 8,000 prescriptions for 23 national formulary drugs were filled as nonformulary drugs. If the national formulary had been properly implemented at these medical centers, prescribers would not have had to use extra time to request and obtain nonformulary drug approvals for these drugs, and patients could have started treatment earlier. The second factor impeding standardization is the wide variation in the number of drugs added by VISNs to their local formularies. VA’s policy allowing VISNs to supplement the national formulary locally has the potential for conflicting with VA’s goal of achieving standardization if it is not closely managed. From June 1997 through March 2000, the 22 VISNs added a total of 244 unique drugs to supplement the list of drugs on the national formulary. As figure 1 shows, the number of drugs added by each VISN varies widely, ranging from as many as 63 to as few as 5. Adding drugs to supplement the national formulary is intended to allow VISNs to be responsive to the unique needs of their patients and to allow quicker formulary designation of new drugs approved by the Food and Drug Administration (FDA). VA officials have acknowledged that this variation affects standardization and told us they plan to address it. For example, PBM plans to more quickly review new drugs when approved by FDA to determine if they should be added to the national formulary. The third factor is that medical centers we visited inappropriately modified the national formulary list of drugs in the closed classes. 
Contrary to VA formulary policy, two of three medical centers added two different drugs to two of the four closed classes, and one facility did not make a drug in a closed class available. Moreover, the Institute of Medicine (IOM) found broad nonconformity at the VISN level. Specifically, IOM reported that 16 of the 22 VISNs modified the list of national formulary drugs for the closed classes. This also undermines VA’s ability to achieve cost savings through its committed-use contracts. While VA has not yet fully achieved national formulary standardization, most prescribed drugs were on the national formulary. From October 1999 through March 2000, 90 percent of VA outpatient prescriptions were written for national formulary drugs. The percentage of national formulary drug prescriptions filled by individual VISNs varied slightly, from 89 percent to 92 percent. We found wider variation among medical centers within VISNs—84 percent to 96 percent. Of the remaining 10 percent of prescriptions filled systemwide, VA’s national database could not distinguish between nonformulary drugs and drugs added to local formularies by VISNs and medical centers to supplement the national formulary. VA’s PBM and the IOM estimate that drugs added to supplement the national formulary probably account for about 7 percent of all prescriptions filled, and nonformulary drugs account for approximately 3 percent of all prescriptions filled. VA officials told us that they are modifying the database to enable them to identify which drugs are added to supplement the national formulary and which are nonformulary. This will allow them to better oversee the balance between local needs and national standardization. Prescribers we surveyed reported they were generally satisfied with the national formulary. 
Seventy percent of VA prescribers in our survey reported that the formulary includes the drugs their patients need either to a “great extent” or to a “very great extent.” Approximately 27 percent reported that the formulary meets their patients’ needs to a “moderate extent,” with 4 percent reporting that it meets their patients’ needs to a lesser extent. No VA prescribers reported that the formulary meets their patients’ needs to “very little or no extent.” This is consistent with IOM’s conclusion that the VA formulary “is not overly restrictive.” Veterans also appear satisfied with their ability to obtain the drugs they believe they need. At the VA medical centers we visited, patient advocates told us that veterans made very few complaints concerning their prescriptions. In its analysis of patient complaints, IOM found that less than one-half of 1 percent of veterans’ complaints were related to drug access. IOM further reported that complaints involving specific identifiable drugs often involved drugs that are marketed directly to consumers, such as Viagra. Our review also indicated that the few prescription complaints made were often related to veterans trying to obtain “lifestyle” drugs or refusals by VA physicians and pharmacists to fill prescriptions written by non-VA health care providers. VA may fill prescriptions written by non-VA health care providers only under limited circumstances, for example, when the veteran is housebound and receives additional compensation because of a service-connected disability. While the national formulary directive requires certain criteria for approval of nonformulary drugs, it does not prescribe a specific nonformulary approval process. As a result, the processes health care providers must follow to obtain nonformulary drugs differ among VA facilities regarding how requests are made, who receives them, who approves them, and how long it takes to obtain approval. 
In addition, some VISNs have not established processes to collect and analyze data on nonformulary requests. As a result, VA does not know if approved requests meet its established criteria or if denied requests are appropriate. Both the people involved and the length of time to approve nonformulary drugs varied. The person who first receives a nonformulary drug approval request may not be the person who approves it. For example, 61 percent of prescribers reported that nonformulary drug requests must first be submitted to facility pharmacists, 14 percent said they must first be submitted to facility pharmacy and therapeutics (P&T) committees, and 8 percent said they must first be sent to service chiefs. In contrast, 31 percent of prescribers reported that facility pharmacists approve nonformulary drug requests, 26 percent said that facility P&T committees approve them, and 15 percent told us that facility chiefs of staff approve them. The remaining 28 percent reported that various other facility officials or members of the medical staff approve nonformulary drug requests. The time required to obtain approval for use of a nonformulary drug also varied depending on the local approval processes. The majority of prescribers we surveyed (60 percent) reported that it took an average of 9 days to obtain approval for use of nonformulary drugs. But many prescribers also reported that it took only a few hours (18 percent) or minutes (22 percent) to obtain such approvals. During our medical center visits, we observed that some medical center approval processes are less expeditious than others. For example, to obtain approval to use a nonformulary drug in one facility we visited, prescribers were required to submit a request in writing to the P&T committee for its review and approval. Because the P&T committee met only once a month, the final approval to use the requested drug was sometimes delayed as long as 30 days. 
The requesting prescriber, however, could write a prescription for an immediate 30-day supply if the medication need was urgent. In contrast, another medical center we visited assigned a clinical pharmacist to work directly with health care providers to help with drug selection, establish dose levels, and facilitate the approval of nonformulary drugs. In that facility, clinical pharmacists were allowed to approve the use of nonformulary drugs. If a health care provider believed that a patient should be prescribed a nonformulary drug, the physician and pharmacist could consult at the point of care and make a final decision with virtually no delay. Prescribers we surveyed were almost equally divided on the ease or difficulty of getting nonformulary drug requests approved. (See table 1.) Regardless of whether the nonformulary drug approval process was perceived as easy or difficult, the majority of prescribers told us that their requests were generally approved. According to our survey results, 65 percent of prescribers sought approval for nonformulary drugs in 1999. These prescribers reported that they made, on average, 25 such requests (the median was 10 requests). We estimated that 84 percent of all prescribers’ nonformulary requests were approved. When a nonformulary drug request was disapproved, 60 percent of prescribers reported that they switched to a formulary drug. However, more than one-quarter of the prescribers who had nonformulary drug requests disapproved resubmitted their requests with additional information. For patients moving from one location to another, the majority of prescribers we surveyed told us that they were more likely to convert VA patients who were on a nonformulary drug obtained at another VA facility to a formulary drug than to request approval for the nonformulary drug. (See table 2.) 
Contrary to the national formulary policy, not all VISNs have established a process for collecting and analyzing data on nonformulary requests at the VISN and local levels. Twelve of VA’s 22 VISNs reported that they do not collect information on approved and denied nonformulary drug requests. Three VISNs reported that they collect information only on approved nonformulary drug requests, and seven reported that they collect information for both approved and denied requests. Such information could help VA officials to determine the extent to which nonformulary drugs are being requested and whether medical center processes for approving these requests meet established criteria. In its report, IOM noted that inadequate documentation on such matters could diminish confidence in the nonformulary process. We are encouraged by VA’s actions, but it is too early to tell how successful it will be in addressing our recommendations for improving its management and oversight of the national formulary. To improve standardization of its formulary, we recommended that VA establish (1) a mechanism to ensure that VISN directors comply with VA’s national formulary policy and (2) criteria that VISNs should use to determine the appropriateness of adding drugs to supplement the national formulary and monitor the VISNs’ application of these criteria. VA’s PBM has developed changes to its database that will provide comparative national data on VISN, nonformulary, and national formulary drug use. PBM also plans to share these data, including identification of outliers, with all 22 VISNs and coordinate with VISN formulary leaders to facilitate consistent compliance with national formulary policy. 
In addition, VA (1) drafted criteria for VISNs to use to determine the appropriateness of adding drugs to supplement the national formulary list, which it intends to include in a directive; (2) is developing a template for VISNs to document all VISN formulary additions; and (3) intends to review more quickly all new FDA-approved drugs for inclusion in the national formulary. To improve its nonformulary drug approval process, we recommended that (1) VA establish a process to ensure timely and appropriate decisions by medical centers and (2) veterans be allowed continued access to previously approved nonformulary drugs, regardless of where they seek care in VA’s health care system. In addressing these recommendations, VA plans to incorporate into its revised formulary directive the fundamental steps that all medical centers must take in establishing and reporting their nonformulary activities. VA also plans to include in its revised formulary directive a specific requirement that approved nonformulary medications will continue if a veteran changes his or her care to a different VA facility.
Although the Department of Veterans Affairs (VA) has made significant progress establishing a national formulary that has generally met with acceptance by prescribers and patients, VA oversight has not fully ensured standardization of its drug benefit nationwide. The three medical centers GAO visited did not comply with the national formulary. Specifically, two of the three medical centers omitted more than 140 required national formulary drugs, and all three facilities inappropriately modified the national formulary list of required drugs for certain drug classes by adding or omitting some drugs. In addition, as VA policy allows, Veterans Integrated Service Networks (VISN) added drugs to supplement the national formulary, ranging from 5 drugs at one VISN to 63 drugs at another. However, VA lacked criteria for determining the appropriateness of the actions networks took to add these drugs. In addition to problems standardizing the national formulary, GAO identified weaknesses in the nonformulary approval process. Although the national formulary directive requires certain criteria for approving nonformulary drugs, it does not prescribe a specific nonformulary approval process. As a result, the processes health care providers must follow to obtain nonformulary drugs differ among VA facilities on how requests are made, who receives them, who approves them, and how long it takes to obtain approval. GAO found that the length of time to approve nonformulary drugs averages 9 days, but it can be as short as a few minutes in some medical centers. Some VISNs have not established processes to collect and analyze data on nonformulary requests. As a result, VA does not know if approved requests meet its established criteria or if denied requests are appropriate. This testimony summarizes the December 1999 report (HEHS-00-34) and the January 2001 report (GAO-01-183).
The D.C. Family Court Act fundamentally changed the way the Superior Court’s Family Division handled its family cases. To transition the Family Division into a Family Court, the Family Court Act required that the Superior Court prepare a transition plan describing such things as the function of the presiding judge and the number of magistrate judges, the flow and management of cases, and staffing needs. One of the central organizing principles for establishing the Family Court was the one family/one judge case management concept, whereby the same judge handles all matters related to one family. Judges in other court jurisdictions, such as Hamilton County Juvenile Court in Cincinnati, Ohio, report that implementing a one family/one judge approach in their courts facilitated more efficient and effective court operations and improved compliance with the timeframes required under the Adoption and Safe Families Act (ASFA). The act re-established the Family Division as a Family Court, which has jurisdiction over alleged child abuse and neglect, juvenile delinquency, domestic violence, child support, and other family matters. The act also established specific qualifications for judges, extended their term requirements, and established various case management practices to improve the Family Court’s administration of cases and proceedings. Additionally, in creating the new position of magistrate judge (formerly hearing commissioners), the act specified that the magistrate judges would assist associate judges in deciding how to dispose of cases and identifying cases that were to be transferred from judges outside of the Family Court. To assist the Family Court in meeting its responsibilities, the Chief Judge of the Superior Court determined that the Family Court needed 15 associate judges and 17 magistrate judges. Twelve associate judges and 8 magistrate judges initially joined the Family Court, creating the need to hire 3 additional associate judges and 9 magistrate judges. 
The act specified that before individuals are assigned to serve as judges in the Family Court, they must certify to the Chief Judge of the Superior Court that they intend to serve the full term of service and will participate in ongoing training programs designated by the Family Court on various family-related topics. The act also requires judges to have prior training or expertise in family law. New associate judges appointed to the Family Court are required to serve a 5-year term, except for judges who previously served in the Superior Court, who must serve 3-year terms. Magistrate judges are required to serve a 4-year term. To support implementation of the Family Court, a total of about $30 million in federal funds was budgeted to fund the Family Court’s transition from fiscal years 2002 through 2004. In addition to the D.C. Family Court Act, which required that all pending abuse and neglect cases assigned to judges outside of the Family Court be transferred to the Family Court by October 2003, other federal and District laws establish required timeframes for handling abuse and neglect case proceedings. ASFA requires each child to have a permanency hearing within 12 months of the child’s entry into foster care, defined as the earlier of the following two dates: (1) the date of the first judicial finding that the child has been subjected to child abuse or neglect or (2) the date that is 60 days after the date on which the child is removed from the home. The permanency hearing is to decide the goal for where the child will permanently reside and set a timetable for achieving the goal. Permanency may be accomplished through reunification with a parent, adoption, guardianship, or some other permanent placement arrangement. 
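The ASFA definition above reduces to a simple date calculation. The sketch below is illustrative only (the dates and function names are hypothetical, and "12 months" is approximated here as 365 days); it shows how the "entry into foster care" date, and the permanency hearing deadline that follows from it, would be computed under the statute's two-date rule.

```python
from datetime import date, timedelta

def foster_care_entry_date(first_judicial_finding: date, removal_from_home: date) -> date:
    # ASFA defines "entry into foster care" as the earlier of (1) the date
    # of the first judicial finding of abuse or neglect and (2) the date
    # 60 days after the child's removal from the home.
    return min(first_judicial_finding, removal_from_home + timedelta(days=60))

def permanency_hearing_deadline(entry: date) -> date:
    # ASFA requires a permanency hearing within 12 months of entry into
    # foster care; "12 months" is approximated as 365 days in this sketch.
    return entry + timedelta(days=365)

# Hypothetical dates, for illustration only:
removal = date(2003, 1, 10)
finding = date(2003, 2, 20)  # earlier than removal + 60 days (March 11, 2003)
entry = foster_care_entry_date(finding, removal)
print(entry)                               # 2003-02-20
print(permanency_hearing_deadline(entry))  # 2004-02-20
```

If the judicial finding instead came after the 60-day mark, the removal-based date would control, which is the point of the "earlier of" language.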
In addition to ASFA’s requirements, District of Columbia law establishes deadlines for conducting trials to determine the veracity of neglect or abuse allegations and dispositions to determine the remedy for confirmed abuse and neglect cases. The deadlines differ depending upon whether children remain in their homes or are removed from their homes. In general, if children are not removed from their homes, both the trial and the disposition must begin within 45 days of the filing of the petition requesting that the court review an alleged abuse and neglect case. If children are removed from their homes and placed in foster care, the statute requires that the trial and disposition begin within 105 days of removal from their home. To ensure that abuse and neglect cases are properly managed, the Council for Court Excellence, at the request of Congress, evaluates Family Court data on these cases. It is important that District social service agencies and the Family Court receive and share information they need on the children and families they serve. For example, Child and Family Services Agency (CFSA) caseworkers need to know from the court the status of a child’s case, when a hearing is scheduled, and a judge’s ruling. The Family Court needs case history information from caseworkers, such as whether services have been provided and if there is evidence of abuse or neglect. Recognizing the need to share such information, the Family Court Act required that the Family Court and the District government integrate their computer systems to share essential information. According to District officials, current plans to exchange information between the Superior Court and District agencies and among District agencies are estimated to cost about $66 million, of which about $22 million would support initiatives outlined in the Mayor’s plan issued in July 2002. 
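The District's two deadlines described above can likewise be expressed as a short conditional. This is a hypothetical sketch (the dates and function names are illustrative, not drawn from actual cases): the 45-day clock runs from the petition filing when the child remains at home, and the 105-day clock runs from removal when the child is placed in foster care.

```python
from datetime import date, timedelta
from typing import Optional

def trial_deadline(petition_filed: date, removed: bool,
                   removal_date: Optional[date] = None) -> date:
    # D.C. law: the trial (and disposition) must begin within 45 days of
    # the petition filing if the child remains at home, or within 105 days
    # of removal if the child is removed and placed in foster care.
    if removed:
        if removal_date is None:
            raise ValueError("removal_date is required when the child was removed")
        return removal_date + timedelta(days=105)
    return petition_filed + timedelta(days=45)

# Hypothetical dates, for illustration only:
print(trial_deadline(date(2003, 3, 1), removed=False))  # 2003-04-15
print(trial_deadline(date(2003, 3, 1), removed=True,
                     removal_date=date(2003, 2, 25)))   # 2003-06-10
```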
According to District officials, about $36 million of the $66 million would come from capital funds that are currently available; however, they would need to seek additional funding for the remaining $30 million. Currently, budget submissions are being made to the District’s Office of Budget and Planning for the fiscal year 2005 budget. In addition to the $66 million needed to fund District data exchange efforts, the total cost of the IJIS project to the Superior Court is expected to be between $20 and $25 million, depending on the availability of funds for project-related infrastructure improvements and other project initiatives. Funding for this project is being made available through the D.C. Courts’ capital budget. The Deputy Mayor for Children, Youth, Families, and Elders and the eight District agencies identified in the District of Columbia Family Court Act or by the Mayor are responsible for defining the program and operational requirements for data sharing and integration. The Deputy Mayor established the Children and Youth Program Coordinating Council, comprising the Directors of the District agencies, the Mayor’s Court Liaison, and the D.C. Public Schools, to lead the effort to define the business and program requirements derived from the Family Court Act and the Mayor’s July 2002 plan to integrate District social services and related information systems with the information systems of the Family Court. The planned Safe Passages Information Suite (SPIS) is expected to link disparate health and human services databases across the District to provide individual case managers with critical information regarding cross-agency servicing of children, families, and individuals within the District’s health and human services system. The effort to develop SPIS is being conducted within a broader project to modernize the District’s human services and related information systems. 
The Office of the Chief Technology Officer (OCTO) is responsible for leading the technology development and system deployment necessary to support the District’s health and human services business process requirements. The Child and Family Program Coordinating Council and affected agencies will have the opportunity to review, adjust, and subsequently affirm the detailed plans, interim milestones, decision points, and project phases prepared by OCTO for this development. Although the Superior Court and the District followed established procedures to appoint new judges to the Family Court, an issue related to the qualification requirements and two other factors delayed the appointment of 2 of the 3 associate judges sought by the Superior Court. The Superior Court had planned to appoint 3 new associate judges to the Family Court by May 2003, but as of September 2003, only one nominee had been appointed. The other two nominees recently received Senate approval on October 24, 2003, and will likely begin hearing cases by January 2004, according to the Chief Judge of the Superior Court. According to a Senate staff member involved in the investigation of judicial nominees, Senate approval was delayed in part by the additional time required to investigate issues surrounding the nominees. For example, one of the nominees was delayed because of further investigation into the nominee’s reluctance to participate in training specified by the Family Court Act. The Superior Court followed internal procedures to appoint the 9 new magistrate judges to the Family Court. The Superior Court used a panel of judges to recruit, interview, and make recommendations to the chief judge to fill magistrate judge positions. The judicial panel consisted of 11 active judges selected by the chief judge from different areas throughout the Superior Court, including the presiding judge of the Family Court. The Family Court Act established several specific qualification requirements for magistrate judges. 
For example, the act required that magistrate judges have not fewer than 3 years of training or experience in the practice of family law as a lawyer or judicial officer. The judicial panel began formally recruiting for magistrate judges in January 2002 using a variety of recruitment media, including professional legal organizations, newspapers, and the Superior Court’s Web site. To assist the Superior Court in filling the initial magistrate judge vacancies, the Family Court Act authorized the court to use expedited appointment procedures. The panel received 115 applications for the first 5 magistrate judge positions. According to the chair of the judicial panel, some candidates did not meet the basic qualifications, while others had qualifications that far exceeded the requirements. The judicial panel ranked the candidates using a 5-point scale, with 5 representing outstanding, and interviewed candidates determined to be best qualified. The panel submitted its recommendations—three names for each vacancy—to the chief judge using a rank-ordered listing. The chief judge made selections from the list after obtaining input from judges throughout the Superior Court who had some knowledge of the candidates’ qualifications. The Superior Court appointed the first 5 magistrate judges in April 2002 in accordance with its Transition Plan. The panel received an additional 15 to 20 applications for the second vacancy announcement for the 4 remaining magistrate judge positions and also considered the applications of interested candidates in the first applicant pool. The Superior Court appointed the remaining 4 magistrate judges in October 2002 as planned. The District used procedures established by District laws to appoint associate judges to the Family Court. In June 2002, the chief judge requested that the District of Columbia Judicial Nomination Commission (JNC) begin the process for appointing 3 additional associate judges to the Family Court. 
JNC, composed of academicians, legal experts, and other District of Columbia residents, selects and recommends to the President judicial nominees for the Superior Court and D.C. Court of Appeals. JNC considered 37 applicants for the 3 vacancies, 29 of whom had previously applied for associate judge positions and 8 of whom were new applicants. In considering each applicant, JNC queried applicants about their ability to meet the qualification requirements outlined in the Family Court Act, prior to nominating them to the President. In November 2002, JNC forwarded its recommendations—three names for each vacancy—to the President and in December 2002, the President nominated 3 candidates to fill the Superior Court vacancies and forwarded their names to the Senate Committee on Governmental Affairs for confirmation. However, one nominee later expressed reluctance to participate in the Family Court’s training programs during discussions with the committee. The other two candidates nominated by the President for the Family Court included a magistrate judge serving in Family Court and an attorney with the D.C. Public Defender Service, who was found during a Senate background investigation to have had delinquent federal and District tax filing issues a few years prior to his nomination, though this was not in violation of the Family Court Act. After further questioning, the committee determined that the training and delinquent tax issues were adequately resolved. The Senate held a confirmation hearing to consider the three candidates in June 2003 and approved one of the candidates in July 2003. Following Senate approval, the candidate was appointed to the Superior Court in September 2003. However, the Senate delayed confirmation of the two remaining candidates to allow it to first approve other pending Superior Court judicial nominees for vacancies in other Superior Court divisions. These two candidates were confirmed on October 24, 2003. 
According to a Senate staff member, the process for appointing associate judges typically takes less than 12 months from the time that JNC receives the request to fill vacancies to the time that the Senate confirms the appointments. However, because of the additional time required to investigate outstanding issues and to confirm other Superior Court nominees, the appointment process for the two remaining candidates will have taken about 18 months by the time the new judges begin hearing cases, scheduled for January 2004. The Family Court met established timeframes for transferring cases into the Family Court and decreased the timeframes for resolving abuse and neglect cases; however, magistrate judges’ effect on reducing the workload of other court officials has been limited. For example, magistrate judges have limited authority, which requires the involvement of associate judges in many cases. The hiring of new magistrate judges has also increased the need for additional support personnel to update automated data, prepare cases for court, and process court documentation. As a result, several associate and magistrate judges and other court officials said the Family Court does not have sufficient support personnel to manage its caseload more efficiently. According to the Chief Judge of the Superior Court, the Superior Court hired additional support personnel but will reassess staff needs as it completes a review of its business processes. To consolidate all abuse and neglect cases in the Family Court, the D.C. Family Court Act required that judges in other divisions of the Superior Court transfer their abuse and neglect cases into the Family Court. While the act generally required the transfer of abuse and neglect cases by October 2003, it also permitted judges outside the Family Court to retain certain abuse and neglect cases provided that their retention of cases met criteria specified in the Family Court Act. 
Specifically, these cases were to remain at all times in full compliance with ASFA, and the Chief Judge of the Superior Court must determine that the retention of each case would lead to a child’s placement in a permanent home more quickly than if the case were to be transferred to a judge in the Family Court. In its October 2003 progress report on the implementation of the Family Court, the Superior Court reported that it had transferred all abuse and neglect cases back to the Family Court, with the exception of 34 cases that remained outside the Family Court, as shown in table 1. The Chief Judge of the Superior Court said that, as of August 2003, a justification for retaining an abuse and neglect case outside the Family Court had been provided in all such cases. According to the Superior Court, the principal reason for retaining abuse and neglect cases outside the Family Court was a determination made by non-Family Court judges that the cases would close before December 31, 2002, either because the child would turn 21, and thus no longer be under court jurisdiction, or because the case would close with a final adoption, custody, or guardianship decree. In the court’s October 2003 progress report, it stated that the cases remaining outside the Family Court involve children with emotional or educational disabilities. While the Superior Court reported that 4 of the 34 abuse and neglect cases remaining outside the Family Court had closed subsequent to its October 2003 progress report, children in the remaining 30 cases had not yet been placed in permanent living arrangements. On average, children in these 30 cases are 14 years of age and have been in foster care for 8 years, nearly three times the average number of years in care for a child in the District of Columbia. Table 2 provides additional information on the characteristics of the 30 cases that remain outside the Family Court. 
The Superior Court also reported that the Family Court had closed 620 of the 3,255 transferred cases, or 19 percent, as shown in table 3. Of the 620 transferred cases the Family Court closed, 77 percent closed following reunification of the child with a parent or adoption, guardianship, or custody of the child by a designated family member or other individual. In most of the remaining transferred cases that had closed, the child had reached the age of majority, which is 21 years of age in the District of Columbia. In addition to handling transferred cases, the Family Court is responsible for the routine handling of all newly filed cases. For alleged cases of abuse and neglect, complainants file a petition with the Family Court requesting a review of the allegation. After the filing of the petition, the Family Court holds an initial hearing in which it hears and rules on the allegation. Following the initial hearing, the court may resolve the case through mediation or through a pretrial hearing. Depending on the course of action that is taken and its outcome, several different court proceedings may follow to achieve permanency for children, thereby terminating the court’s jurisdiction. Family Court abuse and neglect proceedings include several key activities, such as adjudication, disposition, and permanency hearings. ASFA requires that a permanency hearing be held within 12 months of a child’s placement in foster care. The objective of a permanency hearing is to establish a permanency goal for the child, such as adoption or reunification with a parent, and to establish a time for achieving the specified permanency goal. Figure 1 depicts the flow of abuse and neglect cases through the various case activities handled by the D.C. Family Court. Data provided by the court show that in the last 2 years there has been a decrease in the amount of time to begin an adjudication hearing for children in abuse and neglect cases. 
Figure 2 shows median times to begin hearings for children removed from their homes and for children not removed from their home. As required by District law, the court must begin the hearing within 105 days for children removed from their home and placed in foster care and within 45 days for children not removed from their home. Between 2001 and 2003, the median time to begin adjudication hearings in cases when a child was removed from home declined by 140 days to 28 days, or about 83 percent. Similarly, the decline in timeframes to begin the hearings was about as large in cases when children remained in their home. In these cases, median timeframes declined by about 90 percent during this same period to 12 days. In both cases, the Superior Court is beginning the hearings within the timeframes required by D.C. law. While the reduction in timeframes for these hearings began prior to the establishment of the Family Court, median days to begin hearings for children removed from their home increased immediately following the court’s establishment before declining again in more recent months. According to two magistrate judges, the increase in timeframes immediately following establishment of the Family Court for children removed from their homes was attributable to the complexity of cases initially transferred to it. Similarly, timeframes to begin disposition hearings, a proceeding that occurs after the adjudication hearing and prior to permanency hearings, declined between 2001 and 2003, as shown in figure 3. As required by District law, the court must begin disposition hearings within 105 days for children removed from their home and within 45 days for children not removed from their home. The median days to begin disposition hearings for children removed from their home declined by 202 days to 39 days, or about 84 percent, between 2001 and 2003. 
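The percentage declines cited above follow directly from the day counts, since the earlier median equals the reported drop plus the final median. A quick check using figures from the text (the function name is ours, for illustration):

```python
def percent_decline(drop_days: int, final_days: int) -> float:
    # The earlier median equals the final median plus the reported drop;
    # the percent decline is the drop relative to that earlier median.
    initial_days = final_days + drop_days
    return 100 * drop_days / initial_days

# Adjudication hearings, children removed from home: a 140-day drop to a
# 28-day median, i.e. from an earlier median of 168 days.
print(round(percent_decline(140, 28)))  # 83
```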
The median days to begin disposition hearings for children not removed from their home declined by 159 days to 42 days, or about 79 percent. Therefore, the Superior Court is also within the timeframes required by D.C. law for these hearings. While the decline in the timeframes for disposition hearings began prior to the Family Court, according to two magistrate judges we interviewed the time required to begin these hearings increased in the 7-month period following the establishment of the Family Court because of the complexity of these cases. Despite declines in timeframes to begin adjudication and disposition hearings, the Family Court has not yet achieved full compliance with ASFA’s requirement to hold permanency hearings within 12 months of a child’s placement in foster care. The percentage of cases with timely permanency hearings increased from 25 percent in March 2001 to 55 percent in September 2002, as shown in figure 4. These improvements result from the use of uniform court orders. However, other factors continue to impede the Family Court’s full achievement of ASFA compliance. Some D.C. Family Court judges have questioned the adequacy of ASFA’s timelines for permanency, citing barriers external to the court, which increase the time required to achieve permanency. These barriers include lengthy waits for housing, which might take up to a year, and the need for parents to receive mental health services or substance abuse treatment before they can reunite with the child. For example, from January through May 2003, Family Court judges reported that parental disabilities, including emotional impairments and treatment needs, most often impeded children’s reunification with their parents. In nearly half of these reported instances, the parent needed substance abuse treatment. Procedural impediments to achieving reunification included the lack of sufficient housing to fully accommodate the needs of the reunified family. 
Regarding adoption and guardianship, procedural impediments included the need to complete administrative requirements associated with placing children with adoptive families in locations other than the District of Columbia. Financial impediments to permanency included insufficient adoption or guardianship subsidies. Table 4 provides additional details on impediments to achieving permanency goals. Associate judges we interviewed cited additional factors that impeded the achievement of appropriate foster care placements and timely permanency goals. For example, one judge said that the District’s Youth Services Administration inappropriately placed a 16-year-old boy in the juvenile justice facility because CFSA had not previously petitioned a neglect case before the Family Court. As a result, the child experienced a less appropriate and more injurious placement in a juvenile justice facility than the child would have experienced had he been appropriately placed in foster care. In other cases, an associate judge has had to mediate disputes among District agencies that did not agree with court orders to pay for services for abused and neglected children, further complicating and delaying the process for providing needed services and achieving established permanency goals. To assist the Family Court in its management of abuse and neglect cases, the Family Court transition plan required magistrate judges to preside over abuse and neglect cases transferred from judges in other divisions of the Superior Court, and these judges absorbed a large number of those cases. In addition, magistrate judges, teamed with associate judges under the one family/one judge concept, had responsibility for assisting the Family Court in resolving all new abuse and neglect cases. 
Both associate and magistrate judges cited factors that have limited the court’s ability to fully implement the one family/one judge concept and achieve the potential efficiency and effectiveness that could have resulted. For example, the court’s identification of all cases involving the same child depends on access to complete, timely, and accurate data in IJIS. In addition, Family Court judges said that improvements in the timeliness of the court’s proceedings depend, in part, on the continuous assignment of the same CFSA caseworker to a case and sufficient support from an assigned assistant corporation counsel in the District’s Office of Corporation Counsel. Family Court judges said that the lack of consistent support from a designated CFSA caseworker and from assistant corporation counsels has in certain cases prolonged the time required to conduct court proceedings. In commenting on a draft of this report, the Superior Court indicated that the one family/one judge concept does not apply to all proceedings, and as a result multiple judges may preside over cases involving the same child and family. After consultations with Family Court stakeholders, the court chose to apply the concept to juvenile cases only after adjudication of the case. Therefore, in all instances, the associate or magistrate judge who handles the adjudication phase of a juvenile case differs from the one responsible for all other cases related to the same child and family. In addition, several judges and court officials told us that they do not have sufficient support personnel to allow the Family Court to manage its caseload more efficiently. For example, additional courtroom clerks and court aides could improve case flow and management in the Family Court. These personnel are needed to update automated data, prepare cases for the court, and process court documentation. 
Under contract with the Superior Court, Booz, Allen, and Hamilton analyzed the Superior Court’s staffing resources and needs; this evaluation found that the former Family Division, now designated as the Family Court, had the highest need for additional full-time positions to conduct its work. Specifically, the analysis found that the Family Court had 154 of the 175 full-time positions needed, or a shortfall of about 12 percent. Two branches—juvenile and neglect and domestic relations—had most of the identified shortfall in full-time positions. In commenting on a draft of this report, the Superior Court stated that the Family Court, subsequent to enactment of the D.C. Family Court Act, hired additional judges and support personnel in excess of the number identified as needed in the Booz, Allen, and Hamilton study to meet the needs of the newly established Family Court. However, several branch chiefs and supervisors we interviewed said the Family Court still needs additional support personnel to better manage its caseload. The Superior Court has decided to conduct strategic planning efforts and re-engineer business processes in the various divisions prior to making the commitment to hire additional support personnel. According to the Chief Judge of the Superior Court, intervening activities, such as the initial implementation of IJIS and anticipated changes in the procurement of permanent physical space for the Family Court, have necessitated a reassessment of how the court performs its work and the related impact of its operations on needed staffing. In September 2003, the Superior Court entered into another contract with Booz, Allen, and Hamilton to reassess resource needs in light of the implementation of the D.C. Family Court Act. The D.C. Courts, comprising all components of the District’s judiciary branch, has made progress in procuring permanent space for the Family Court, but not all Family Court operations will be consolidated under the current plan. 
To prepare for the new Family Court space, D.C. Courts designated and redesigned space for the Family Court, constructed interim chambers for the new magistrate judges and their staff, and relocated certain non-Family Court-related components to other buildings, among other actions. The first phase of the Family Court construction project, scheduled for completion in July 2004, will provide new judges’ chambers, a family waiting area, and many other components the court needs to serve the public. However, completion of the entire Family Court construction project, scheduled for late 2009, will require the timely completion of renovations in several court buildings located on the Judiciary Square Campus and coordination with several regulatory agencies. While many of the Family Court operations will be consolidated in the new space, several court functions will remain in other areas. The current Family Court construction plan is an alternative to a larger plan for which the D.C. Courts has requested $6 million for fiscal year 2005 to design Family Court space and $57 million for fiscal year 2006 to construct court facilities. In the longer term, D.C. Courts is pursuing this larger-scale plan in order to fully consolidate all Family Court and related operations in one location. D.C. Courts has designated the John Marshall (JM) level of the H. Carl Moultrie I Courthouse (Moultrie Courthouse) as the base for the new Family Court. The new court will consolidate many of the existing Family Court operations currently spread among the JM, C Street, and Indiana Avenue levels of the Moultrie Courthouse and provide new facilities to create greater efficiency in court operations and a more family-friendly environment. The Family Court construction project is part of the overall Judiciary Square Master Plan intended to provide for the current and long-term space needs of D.C. 
Courts located in buildings on the Judiciary Square Campus, including the Moultrie Courthouse. Figure 5 provides a depiction of the buildings on the Judiciary Square Campus. Consolidating Family Court operations primarily on the JM and the C Street levels of the Moultrie Courthouse is scheduled to begin in December 2003 and is estimated to be completed by 2009. The project will also provide space for some Family Court operations on the Indiana Avenue level. The timely completion of the project will depend on timely renovations and upgrades of existing buildings on the Judiciary Square Campus and coordination with multiple regulatory authorities, such as the National Capital Planning Commission. To prepare for the new Family Court, the courts completed a number of interim actions. For example, in March 2002, the courts completed construction of chambers for the full complement of new magistrate judges and their staff. Also, in October 2002, the courts completed renovations in Building B to provide temporary hearing rooms for 4 of the new magistrate judges and to update space for the Social Services Division, already located in Building B, which provides counseling, educational, and other services for families. In addition, in October 2003, the courts completed additional renovations to Building B to relocate the Landlord and Tenant and Small Claims Courts from the JM level. The first phase of the Family Court construction project, scheduled for completion in July 2004, will consolidate Family Court support services and provide additional courtrooms, hearing rooms, and judges’ chambers. In addition, the project will provide an expanded Mayor’s Liaison Office, which coordinates Family Court services for families and provides families with information on such services, and a new family waiting area, among other facilities. 
Completing the Family Court consolidation project, scheduled for 2009, will require the movement of several non-Family Court-related functions presently located on the JM and C Street levels to other levels of the Moultrie Courthouse or to other buildings on the Judiciary Square Campus. Table 5 provides a summary of the various actions required to complete the Family Court consolidation project and their impact on various facilities within the Judiciary Square Campus. For example, as shown in table 5, the Superior Court’s Information Technology Division, currently located on the C Street level, will be relocated to Building C to allow for further consolidation of various Social Service functions and other Family Court operations on that level. Because of the historic nature of Buildings A, B, C, and D, which will require significant repairs and renovations, the Superior Court must obtain necessary approvals for exterior modifications from various regulatory authorities, including the National Capital Planning Commission. In addition, some actions may require environmental assessments and their related formal review process. While the new Family Court space will consolidate many of the existing Family Court operations dispersed among the JM, C Street, and Indiana Avenue levels of the Moultrie Courthouse, some Family Court operations will not be included. As currently configured, the new Family Court space will consolidate 76 percent of the functions and associated personnel for the Family Court. Some of the Family Court operations that will remain outside the new space include the Juvenile Intake and Diagnostic Branch, which processes juveniles into the Family Court and assesses their character and needs, and some judges’ chambers. Appendix II provides additional details on the final configuration of the Family Court that the D.C. Courts plans to complete in 2009. 
The current Family Court space plan is an alternative to a larger Family Court space plan that would provide for greater consolidation of Family Court operations. The D.C. Courts has requested $57 million in its fiscal year 2006 capital budget to construct an addition to the C Street level of the Moultrie Courthouse to provide additional square footage to accommodate Family Court operations. If the D.C. Courts does not receive funding for the larger Family Court space plan, it will continue with the current alternative plan. The Superior Court and the District of Columbia are exchanging some data and making progress toward developing a broader capability to share data among their respective information systems. In August 2003, the Superior Court began using the Integrated Justice Information System (IJIS), which is intended to help the Superior Court better manage its caseload and share data with District agencies. The District has expanded and further evolved the Mayor’s plan to integrate the information systems of eight District agencies with the Superior Court. The expanded effort, called the Human Service Modernization Program, is expected to enable the exchange of data among the police department, social services agencies, and the court. While the District has made progress, it has not yet fully addressed or resolved several critical issues we reported in August 2002. The District is preparing plans and expects to begin developing a data sharing capability and data warehouses to enable data sharing among the Child and Family Services Agency, Department of Human Services’ Youth Services Administration, Department of Mental Health, and the Superior Court in 2004. According to the Program Manager, OCTO will work to resolve the issues we raised in our August 2002 report and incorporate the solutions into its plans. The Superior Court has been implementing IJIS to help manage its caseload and share data with District agencies. 
In August 2003, the Superior Court launched the first phase of IJIS using a commercially available case management system. The first phase of the implementation was rolled out to 300 court users in the Juvenile and Neglect Branch of the Family Court, as well as the Social Service Division and part of the Multi-Door Dispute Resolution Division of the Superior Court. In the next phase, planned for November 2003, the Superior Court plans to expand IJIS to the remaining components of the Family Court and some other court users. This would include implementing IJIS in the Family Court’s Domestic Relations, Mental Health and Mental Retardation, Paternity and Child Support, and Counsel for Child Abuse and Neglect branches and Superior Court’s Domestic Violence Division and additional users in the Multi-Door Dispute Resolution Division. Future phases involve the planned implementation of the system in the Superior Court’s Probate and Civil Divisions in 2004 and the Criminal Division in 2005. IJIS is intended to be Superior Court’s primary case and information management system. According to D.C. Courts’ Director of Information Technology, the implementation of IJIS provides new capabilities, such as the ability to schedule events, record results of proceedings and document participants, print orders, and create dockets in the courtroom. Superior Court employees also have the capability to search all related cases for individuals to determine what other issues the court should be aware of during proceedings. While the first phase of IJIS is being implemented and further adapted for its use, the court has exchanged data with District agencies using IJIS and the existing District of Columbia Justice Information System. This includes exchanges of data to help meet information needs until the final data exchange capability with District agencies is developed and implemented. 
These exchanges include sharing with CFSA and the Office of Corporation Counsel calendar information, which identifies the date, time, and location of scheduled court proceedings. Other data exchanges include general case information, drug testing orders and results, and placement recommendations. In addition, CFSA staff stationed at the Superior Court have been electronically scanning court orders directly into the agency’s FACES information system. In discussing data exchanges, the Director of Information Technology, D.C. Courts, noted that the court is becoming concerned about its ability to continue funding some of the interim data exchanges it has developed. The Director said the court will be meeting with the D.C. Chief Financial Officer to discuss how to share the funding required for these data exchanges. In the second phase of IJIS, the court will require District agencies to provide information for its new system. The court has been discussing its data requirements with District agencies and OCTO. According to the D.C. Courts’ Director of Information Technology, during these meetings, requirements are defined based on documents currently exchanged, users’ requirements for additional information, and an overall understanding of the business processes that each agency uses. The District of Columbia has been seeking to develop capabilities and evolve plans to integrate District agencies’ information systems with the Superior Court. While the ultimate form of integration has not yet been completely defined, integration over the next several years is expected to occur primarily through the exchange of data using new capabilities. OCTO has been developing a prototype to provide the capability to exchange data among District law enforcement and social services agencies and the Superior Court. 
This capability is expected to provide the interconnection of systems through enterprise application integration software and data warehouses, thus eliminating many of the current technical barriers to data exchange. Combined with a citywide Internet portal, OCTO officials expect that users in various District agencies and the Superior Court will be able to access data that they are authorized to view. According to the OCTO Program Manager, with the implementation of the enterprise application integration software, data will be (1) readily transformable into formats required by the Superior Court or any participating District agency and (2) available as required by the Superior Court. The planning and development of the prototype are part of a broader program to modernize the District’s human services agencies’ IT capabilities and improve business processes to better serve clients. OCTO plans to continue analyzing and designing the prototype through 2003 and begin developing full capabilities in 2004 for the District’s Child and Family Services Agency, Youth Services Administration, Department of Mental Health, Courts, and Office of the Corporation Counsel. OCTO officials expect that full data exchange capabilities for other agencies will be accomplished between 2004 and 2006. Figure 6 shows a simplified view of District agencies and the Superior Court exchanging data to meet their needs and fulfill the data-sharing mandate of the D.C. Family Court Act. The District has made progress on defining and designing a data exchange solution to meet the needs of District agencies and the Superior Court, and OCTO is preparing an overall program plan and detailed project plans to develop and implement the solution. The OCTO Program Manager expects to have the plans prepared by December 2003. 
According to the Deputy Mayor for Children, Youth, Families, and Elders, affected District agencies will have the opportunity to review, adjust, and subsequently affirm the detailed plans, interim milestones, decision points, and project phases prepared by OCTO for this development. It is expected that final plans for upcoming phases will be confirmed later in the spring of 2004. While the District is making progress toward exchanging data, it has not yet fully resolved several key issues we reported in August 2002. In that report, we stated that the Mayor’s plan contained useful information, but did not contain important elements that are critical to assessing the adequacy of the District’s strategy. These elements were: establishing project milestones for completing activities; defining how and to what extent the District will integrate the systems of the six specific offices covered by the Family Court Act and the two offices added by the Mayor; defining details on the type of data the District will be providing to the Family Court and how this will be achieved; and defining how the District will achieve the Mayor’s integration priorities. These elements are also necessary to plan, develop, acquire, and implement the software, hardware, and communications resources that are required to meet the information and information processing needs of the Family Court and participating District agencies. As noted below, the OCTO Program Manager said that the District would address these key elements and incorporate them into its plans. 
In discussing how the District is addressing these issues, the OCTO Program Manager provided the following information: Establishing project milestones for completing activities—The definition, scope, and structure of efforts to upgrade the health and human services agencies’ information systems broadened significantly during fiscal year 2003 to provide data exchanges necessary to meet the needs of participating agencies and the people who rely on them to provide support services. The overall program plan for the human services modernization project is being developed, and key project components have been defined and are expected to be detailed in project plans by December 2003. OCTO will establish milestones for activities, decision points, and project phases as it develops plans for the modernization project. Defining how and to what extent the District will integrate the systems of the six specific offices designated by the Family Court Act and the two offices added by the Mayor—OCTO, the Superior Court, and several District agencies are conducting joint requirements sessions to finalize detailed requirements on data and document exchanges and common process functions. Mutual agreement exists that all data exchanges from District agencies to the Superior Court will be accomplished through the enterprise application integration capability being designed by OCTO. From the Superior Court’s perspective, one gateway will exist for accessing data from and providing data to all District agencies. OCTO has documented its current understanding of data requirements for the Child and Family Services Agency, Youth Services Administration, and Superior Court. These requirements will be refined as OCTO proceeds with its integration efforts in 2004. OCTO intends to finalize these requirements and build the capability to meet them. These requirements and OCTO’s plans will define how and to what extent OCTO will integrate the systems of participating agencies. 
Defining details on the type of data the District will be providing to the Family Court and how this will be achieved—Initially, when resources and time constraints limited the capability for a full-fledged process, OCTO relied on the requirements gathering processes of the Superior Court’s IJIS team. Information technology staff and contractors of key agencies worked collaboratively with the Superior Court to begin requirements definition, with the court’s team taking the lead. With the emergence of the human services modernization program, the District’s process will be more aggressive in fulfilling its requirements definition needs. New analysts are being added to the modernization project team, and a defined project team has been established to manage and coordinate the multiagency, court-related integration efforts. As OCTO proceeds with the modernization efforts, it will identify and define the data that the District will provide to the Superior Court and how this will be achieved. Defining how the District will achieve the Mayor’s integration priorities—Regarding the calendar management, notification, and electronic document management priorities, the Child and Family Services Agency is receiving basic information from the Superior Court. The agency is also providing the Superior Court with inquiry-level access to basic information on active cases and scanning Family Court orders into FACES. In the second phase of IJIS, a major shift from paper to electronic business processes will be initiated among the Superior Court, CFSA, and Office of the Corporation Counsel. These priorities will also be addressed in the Human Services Modernization Program. Regarding the inquiry-level access of information and reporting priorities, OCTO has developed an initial prototype for a “common case view” that would enable authorized users to view key demographic information, elements of service plans, and service-related events across multiagency case management activities. 
This prototype will be used to identify the Superior Court’s requirements and user access restrictions. CFSA has worked with the Superior Court to match records and family member profiles to ensure accuracy of trend analysis and progress reporting, both for the Superior Court and the District government. The planned data warehouses are expected to facilitate reporting across and among agencies and support reporting for court-related needs. Presently this work is in its embryonic stage, and as the modernization program progresses, these priorities will be addressed in OCTO’s plans and activities. In addition, we previously reported that the effectiveness and ultimate success of the Mayor’s plan hinged on resolving critical issues and implementing disciplined processes. These critical issues were: confidentiality and privacy issues governed by laws and regulations; data accuracy, completeness, and timeliness problems that have hampered program management and operations; current legacy systems’ limitations; and human capital acquisition and management. Finally, we said another key to the effectiveness of the Mayor’s plan was developing and using disciplined processes in keeping with information technology management best practices. In the past, we reported that the District had not used disciplined practices and had difficulties developing, acquiring, and implementing new systems. Disciplined processes include the use of a life-cycle model, the development of an enterprise architecture, the use of adequate security measures, and the use of a well-developed business case that evaluates the expected returns against the cost of an investment. Resolving these critical issues is necessary to plan, develop, acquire, and implement the software, hardware, and communications resources that are required to meet the information and information processing needs of the Superior Court and participating District agencies. 
As noted below, the OCTO Program Manager said that the District would address these critical issues and incorporate them into its plans, except for human capital issues. According to the Program Manager, OCTO has sufficient capability to acquire people with the skills needed to accomplish the modernization. In discussing how the District is addressing these issues, the OCTO Program Manager provided the following information: Confidentiality and privacy issues governed by laws and regulations— Confidentiality and privacy issues have posed significant challenges to the District for many years and are recognized as one of the most complicated domains that remain to be fully addressed. The District is beginning a multifaceted process of determining program confidentiality requirements and how they must be addressed. This effort is drawing upon staff in both the Office of the Mayor and OCTO. OCTO has added to its team a nationally renowned technology lawyer with broad experience in privacy, security, and Freedom of Information Act and related issues, who will be playing a central role in determining both requirements and solutions. As the District resolves these issues, and agreements are reached, OCTO will incorporate the solutions into its plans and activities. In commenting on a draft of this report, the Deputy Mayor for Children, Youth, Families, and Elders said that the Children and Youth Program Coordinating Council has established a subcommittee to evaluate the data-sharing issues, including the relevant policies and laws governing that sharing. This subcommittee, comprising agency program personnel, policy directors, legal support teams—including Office of Corporation Counsel and outside counsel—and OCTO staff, will make recommendations to the full Council for legislative changes that may be necessary to support or allow some aspects of data sharing. 
Data accuracy, completeness, and timeliness problems that have hampered program management and operations—Data quality concerns are a high priority for the Mayor, and in turn, OCTO and its staff. Prior studies have documented too many data errors resulting from human error, inadequate business processes, inadequate controls and reviews, and insufficient computer-assisted mechanisms that can identify errors or inconsistencies. To correct these problems, OCTO is putting in place infrastructure and software to support individual agency efforts to improve data quality and reliability and strengthen their practices to maintain higher levels of data quality and reliability. Once the infrastructure is in place, OCTO will coordinate agency-specific efforts within the overall Human Services Modernization Program initiative. OCTO is putting together a comprehensive plan to perform a major data cleanup with the key health and social services agencies and will provide tools to help the agencies identify inconsistent data and potential data errors. OCTO will work with agencies to identify data stewards who will have ongoing responsibility for monitoring data accuracy. We note that some agencies with critical child welfare roles, such as CFSA, have data inaccuracies, as we previously reported. Current legacy systems’ limitations—Several issues related to legacy systems or certain agencies’ or departments’ business processes may hamper integration implementation. For example, some information systems do not reflect agencies’ business processes or support integration; some are outdated, difficult and costly to maintain, and difficult to integrate fully within a citywide integration infrastructure; some systems can support agencies’ business processes to some extent, but have severe security limitations and cannot support interagency business processes. 
OCTO plans to address these issues as part of its current efforts and incorporate the solutions into OCTO’s modernization plans and activities. Human capital acquisition and management—OCTO does not anticipate any issues with the acquisition or management of human capital. OCTO has the option of contracting for specific tasks, functions, deliverables, or system components; contracting for fulfillment of specific project tasks; or contracting for various combinations of functions. Alternately, the District can determine the roles or skills it requires for time-limited technical support and hire temporary employees to satisfy these needs. This flexibility enables OCTO to structure projects and programs most appropriately to fit the District’s needs. Developing and using disciplined processes in keeping with IT management best practices—Disciplined processes include the use of a life-cycle model, the development of an enterprise architecture, the use of adequate security measures, and the use of a well-developed business case that evaluates the expected returns against the cost of an investment. The District will apply a systems life-cycle model as it proceeds with its modernization efforts. The life-cycle model is based on the Project Management Institute’s project life-cycle methodologies and procedures. Also, OCTO is employing software engineering tools, risk management and mitigation methods, and systems development methodologies that are commonly used in the information technology industry. Use of the life-cycle model and other methods will be incorporated into the plans that OCTO is developing. Regarding the development of an enterprise architecture, OCTO said that it has made great strides in creating an enterprise architecture framework upon which the enterprise architecture can evolve. The District is committed to producing an enterprise architecture in an evolutionary manner and has designated an Enterprise Architect. 
The architect will be working with the modernization team to set technical standards and ensure the modernization is aligned with the enterprise architecture. As to the use of adequate security measures, the District’s approach to security in a multiagency setting requires controls down to the data-element level. The development of this framework is in its early stages. It will be gradually developed as interagency protocols are developed. These controls over data are in addition to the security the District has in place to protect its computing environment. A comprehensive security assessment is being planned as part of the modernization project. Regarding using a business case that evaluates the expected returns against the cost of an investment, the District’s Deputy Chief Technology Officer said the District will prepare a benefit-cost analysis in 2004 for the overall Human Services Modernization Program. The official added that more detailed benefit-cost analyses would be prepared for each project within the program to help in the selection of specific technical alternatives. We agree that such analyses are important to evaluating information technology investments as well as evaluating the relative cost of alternative solutions to meet business needs. Typically, these analyses are performed to evaluate alternatives before decisions are made, and the analyses are periodically updated to support decision-making as alternatives are considered during the course of a project. While the Superior Court and the District of Columbia have made progress in implementing the D.C. Family Court Act, several issues continue to affect the court’s progress in meeting all requirements of the Act. Several barriers, such as a lack of substance abuse services, hinder the court’s ability to more quickly process cases. 
While the Superior Court and the District have made progress in exchanging information and building a greater capability to perform this function, it remains paramount that their plans fully address the critical issues we previously reported and our prior recommendations. We received written comments from the Chief Judge of the D.C. Superior Court and the Deputy Mayor of the District of Columbia for Children, Youth, Families, and Elders. These comments are reprinted as appendixes III and IV, respectively. The Chief Judge agreed with our conclusion that the Superior Court has made progress in implementing the D.C. Family Court Act. In addition, the court cited several other areas in which it has made progress. These areas include development of a Family Court Self-Help Center for unrepresented individuals served by the Family Court, an expanded child protection mediation program, and a new Family Treatment Court for mothers with substance abuse problems. The Superior Court also provided additional information on the court’s compliance with ASFA, the role of magistrate judges, the hiring of support personnel, and procurement of permanent physical space, which we incorporated where appropriate. Although the court provided information on its level of ASFA compliance, we did not use this information because neither GAO nor CCE had verified the data. We used information reported by CCE because CCE verified automated case data with information contained in the paper case files. Regarding the acquisition of permanent physical space, the court commented that we had confused the D.C. Courts’ space plans with a contingency alternative, stating that a less costly contingency plan had been developed in the event that funding to expand the Moultrie Courthouse is not provided. However, our analysis of construction documents and discussions with the D.C. Courts’ Administrative Officer indicate that D.C.
Courts is currently following the alternative plan while it continues to pursue funding for the long-term addition to the Courthouse. The District government did not express agreement or disagreement with the contents of the report. The District did, however, offer clarification of the roles and responsibilities of the Office of the Deputy Mayor for Children, Youth, Families, and Elders and the Office of the Chief Technology Officer in implementing the Mayor’s plan to integrate the information systems of the District’s human services agencies and the Superior Court of the District of Columbia. We are sending copies of this report to the Office of Management and Budget, the Joint Committee on Judicial Administration in the District of Columbia, the Chief Judge of the Superior Court of the District of Columbia, the presiding judge of the Family Court of the Superior Court of the District of Columbia, and the Executive Director of the Judicial Nomination Commission. Copies of this report will also be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-8403. Other contacts and staff acknowledgments are listed in appendix V. This appendix discusses in more detail the scope and methodology for assessing the progress of the Family Court since its transition from the Family Division of the Superior Court, as mandated by the D.C. Family Court Act, to determine (1) the procedures used to make initial judicial appointments to the Family Court and the effect of qualification requirements on the length of time to make these appointments; (2) the timeliness of the Family Court in meeting established timeframes for transferring and resolving pending cases, and the impact of magistrate judges on the workload of judges and other court personnel; (3) the D.C.
Courts’ progress in procuring permanent physical space; and (4) the Superior Court and relevant District of Columbia agencies’ progress in sharing data from their computer systems. To get an overall perspective on the Family Court’s progress and applicable statutes, we reviewed past GAO reports, the Family Court Act, the Family Court Transition Plan and subsequent reports required by the Family Court Act, and applicable District of Columbia laws. Specifically, to respond to the first objective, we reviewed the Family Court Act and the D.C. Code for qualifications and tenure requirements for judges, application materials for the judge positions, and prescribed procedures for appointing associate judges. We also interviewed Superior Court officials involved in the recruitment and selection of magistrate judges, the Executive Director and the Chairperson of the Judicial Nomination Commission, and a Senate staff member regarding the confirmation process for associate judges. For objective two, we analyzed Family Court data on its timeliness in meeting required timeframes for transferring cases back to the Family Court from other divisions in the Superior Court and its timeliness in resolving abuse and neglect cases in accordance with timeframes established by the District of Columbia and federal Adoption and Safe Families Act requirements. We focused our review on abuse and neglect cases because of congressional interest and the former Family Division’s past problems in handling such cases. We analyzed the court’s performance in meeting timeframes to begin court proceedings leading up to permanency hearings. Specifically, we analyzed timeframes to begin adjudication hearings for suspected abuse and neglect cases and to begin disposition hearings to determine placement arrangements for children.
In addition, we analyzed the time required to initiate permanency hearings to establish a goal for the permanent placement of a child (e.g., reunification with parents or adoption) and a timeframe for achieving the goal. We relied on a verification of the accuracy of the Family Court’s data conducted by the Council for Court Excellence as part of its role in overseeing the Family Court implementation. In addition, we analyzed Family Court data on the barriers to finding permanent homes for children. We also interviewed five associate judges to determine the impact that magistrate judges had on their workload, and interviewed five magistrate judges to obtain information on their caseload assignments, other responsibilities, and their experiences in working with the Family Court. In addition, we interviewed 10 branch chiefs and supervisors to determine the impact of magistrate judges and reviewed related reports by Booz Allen Hamilton. For objective three, we obtained and reviewed documents on the Family Court’s space plans and the Judiciary Square Master Facilities Plan to determine how other buildings on the Judiciary Square Campus would be affected by the Family Court space. We also interviewed Superior Court officials, officials of the federal government’s General Services Administration, and the lead design architects for the new Family Court space to determine the timeframes for the Family Court construction project, the challenges of meeting those timeframes, and the court operations that would be consolidated in the new Family Court space. We also spoke with an official at the National Capital Planning Commission to obtain information on issues regarding the Judiciary Square Master Facilities Plan that could potentially interfere with the Family Court’s timeframe for acquiring permanent space. To respond to objective four, we reviewed documentation provided by Superior Court and District officials.
We interviewed officials in the Superior Court’s Information Technology Division, the Office of the Deputy Mayor for Children, Youth, Families and Elders, and the District of Columbia’s Office of the Chief Technology Officer, which is responsible for leading the District’s efforts to integrate the computer systems of relevant District agencies with the Superior Court’s system. In addition, we interviewed officials in all eight of the District agencies required by the D.C. Family Court Act or by the Mayor of the District of Columbia to exchange data with the Family Court, to obtain their perspectives on their data exchange efforts. The eight agencies were the Child and Family Services Agency, D.C. Public Schools, D.C. Housing Authority, the Office of the Corporation Counsel, the Metropolitan Police Department, the D.C. Department of Mental Health, the D.C. Department of Health, and the D.C. Department of Human Services. In addition, to gain an overall perspective on court practices in other jurisdictions, we interviewed judges in family courts in Honolulu, Hawaii; Louisville, Kentucky; and Cincinnati, Ohio, by telephone. We chose these court jurisdictions because they served populations similar to the D.C. Family Court’s and because of their experience in managing family court operations. In addition, we interviewed court experts with the National Council of Juvenile and Family Court Judges, the National Center for State Courts, the Council for Court Excellence, and the American Bar Association to gain a perspective on court best practice standards. We conducted our work from April through November 2003 in accordance with generally accepted government auditing standards. The following architectural drawings depict the final configuration of the D.C. Family Court. D.C. Courts plans to complete procurement of its permanent physical space, configured on multiple floors of the Moultrie Courthouse, in 2009.
The following individuals also made important contributions to this report: Steve Berke, Richard Burkard, Karen Burke, Mary Crenshaw, Patrick diBattista, Linda Elmore, Nila Garces-Osorio, David G. Gill, Joel Grossman, James Rebbe, and Norma Samuel.
D.C. Child and Family Services: Better Policy Implementation and Documentation of Related Activities Would Help Improve Performance. GAO-03-646. Washington, D.C.: May 27, 2003.
D.C. Child and Family Services: Key Issues Affecting the Management of Its Foster Care Cases. GAO-03-758T. Washington, D.C.: May 16, 2003.
District of Columbia: Issues Associated with the Child and Family Services Agency’s Performance and Policies. GAO-03-611T. Washington, D.C.: April 2, 2003.
District of Columbia: More Details Needed on Plans to Integrate Computer Systems With the Family Court and Use Federal Funds. GAO-02-948. Washington, D.C.: August 7, 2002.
Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.
D.C. Family Court: Progress Made Toward Planned Transition and Interagency Coordination, but Some Challenges Remain. GAO-02-797T. Washington, D.C.: June 5, 2002.
D.C. Family Court: Additional Actions Should Be Taken to Fully Implement Its Transition. GAO-02-584. Washington, D.C.: May 6, 2002.
D.C. Family Court: Progress Made Toward Planned Transition, but Some Challenges Remain. GAO-02-660T. Washington, D.C.: April 24, 2002.
D.C. Courts: Disciplined Processes Critical to Successful System Acquisition. GAO-02-316. Washington, D.C.: February 28, 2002.
District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children’s Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000.
Foster Care: Status of the District of Columbia’s Child Welfare System Reform Efforts. GAO/T-HEHS-00-109. Washington, D.C.: May 5, 2000.
Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999.
The D.C. Family Court Act (P.L. 107-114) mandated that GAO examine the performance of the D.C. Family Court. GAO addressed the following objectives: (1) What procedures were used to make judicial appointments to the Family Court and what effect did qualification requirements have on appointment timeframes? (2) How timely was the Family Court in meeting established timeframes for transferring and resolving abuse and neglect cases, and what impact did magistrate judges have on the workload of judges and other personnel? (3) What progress has the D.C. Courts made in procuring permanent space? (4) What progress have the Superior Court and District agencies made in sharing data from their computer systems? To address these objectives, GAO analyzed court data on its timeliness in resolving cases; reviewed the Family Court Act, applicable District laws, and reports required by the act; reviewed documents regarding the Family Court's progress in acquiring permanent space and those related to sharing data from the computer systems of the Superior Court and the District; and interviewed relevant District, Superior Court, and Family Court officials. In commenting on this report, the Superior Court agreed with our conclusions and cited additional progress. The Deputy Mayor for Children, Youth, Families, and Elders clarified the roles and responsibilities of various District offices. The Superior Court and the District of Columbia used established procedures to appoint magistrate and associate judges to the Family Court, but issues related to qualification requirements and other factors delayed some appointments. One nominee expressed some reluctance about meeting Family Court training requirements. A second nominee was found to have had delinquent tax filing issues a few years prior to his nomination.
The Senate committee charged with approving the nominees determined that these issues were adequately resolved, but chose to defer their confirmation until other Superior Court nominees were approved. The Family Court met its statutory deadlines for transferring cases into the court from other Superior Court divisions and closed 620, or 19 percent, of these cases. The court has also decreased the timeframes for resolving abuse and neglect matters, and magistrate judges have played a key role in handling cases. Several factors, however, such as shortages of substance abuse treatment services, posed barriers to achieving Family Court goals. To accommodate the operations of the Family Court, D.C. Courts--composed of all components of the District's judicial branch--has made progress in procuring permanent space for the Family Court. This new space, expected to be complete in late 2009, will consolidate 76 percent of the Family Court functions and associated personnel. The Superior Court and the District of Columbia have made progress in exchanging data from their respective information systems. In August 2003, the Superior Court implemented the Integrated Justice Information System, which is used to manage its cases and exchange data with other agencies. Although the District has developed a model to enable the exchange of data between various District agencies and the court, it has not fully resolved several critical issues we reported in August 2002. The District plans to address these issues as it incorporates solutions into the plans it is developing to modernize District agency computer systems.
As I noted, known demographic trends and rising health care costs are major drivers of the nation’s large and growing structural deficits. The nation cannot ignore this fiscal pressure—it is not a matter of whether the nation deals with the fiscal gap, but how and when. GAO’s long-term budget simulations illustrate the magnitude of this fiscal challenge. Figures 1 and 2 show these simulations under two different sets of assumptions. Figure 1 uses the CBO January 2005 baseline through 2015. As required by law, that baseline assumes no changes in current law, that discretionary spending grows with inflation through 2015, and that all tax cuts currently scheduled to expire are permitted to expire. In figure 2, two assumptions about that first 10 years are changed: (1) discretionary spending grows with the economy rather than with inflation and (2) all tax cuts currently scheduled to expire are made permanent. In both simulations discretionary spending is assumed to grow with the economy after 2015 and revenue is held constant as a share of gross domestic product (GDP) at the 2015 level. Also in both simulations, long-term Social Security and Medicare spending are based on the 2005 trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. Long-term Medicaid spending is based on CBO’s December 2003 long-term projections under midrange assumptions. As these simulations illustrate, absent policy changes on the spending and/or revenue side of the budget, the growth in spending on federal retirement and health entitlements will encumber an escalating share of the government’s resources. Indeed, when we assume that recent tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay little more than interest on the federal debt.
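The mechanics of such a long-term simulation can be illustrated with a toy model. Every number below (initial shares of GDP, growth and interest rates, the entitlement "drift") is a hypothetical placeholder rather than actual CBO baseline or trustees' data; the sketch only shows how holding revenue roughly flat while entitlement spending drifts upward compounds through interest costs.

```python
# Toy long-term fiscal simulation: all parameter values are hypothetical
# illustrations, not actual CBO baseline or trustees' projections.

def simulate(horizon=35,
             revenue_share=0.18,    # revenue held constant as a share of GDP
             spending_share=0.20,   # initial noninterest spending / GDP
             spending_drift=0.002,  # entitlement growth, in shares of GDP per year
             debt_share=0.35,       # initial debt held by the public / GDP
             interest_rate=0.05,    # rate paid on the debt
             gdp_growth=0.02):      # annual GDP growth
    """Return the yearly path of debt as a share of GDP."""
    path = []
    for _ in range(horizon):
        spending_share += spending_drift
        interest_share = interest_rate * debt_share
        deficit_share = spending_share + interest_share - revenue_share
        # New debt accumulates; dividing by (1 + growth) restates it
        # relative to the now-larger economy.
        debt_share = (debt_share + deficit_share) / (1 + gdp_growth)
        path.append(debt_share)
    return path

# Scenario 1 (tax cuts expire): higher revenue share.
expire = simulate(revenue_share=0.20)
# Scenario 2 (tax cuts permanent, discretionary spending grows with
# the economy): lower revenue, faster spending drift.
permanent = simulate(revenue_share=0.18, spending_drift=0.003)

print(f"debt/GDP after 35 years: expire={expire[-1]:.2f}, "
      f"permanent={permanent[-1]:.2f}")
```

Because the deficit stays positive in both scenarios and interest compounds on a growing debt stock, the debt-to-GDP ratio climbs without bound in either case, only faster when revenue is lower and spending drifts faster, which mirrors the report's point that neither lever alone eliminates the imbalance.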
Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. Although federal tax policies will likely be part of any debate about our fiscal future, making no changes to Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require at least a doubling of federal taxes in the future—and that seems both unrealistic and inappropriate. These challenges would be difficult enough if all we had to do were fund existing commitments. But as the nation continues to change in fundamental ways, a wide range of emerging needs and demands can be expected to compete for a share of the budget pie. Whether in national security, transportation, education, or public health, a growing population will generate new claims for federal actions on both the spending and tax sides of the budget. Although demographic shifts and rising health care costs drive the long-term fiscal outlook, they are not the only forces at work that require the federal government to rethink its role and entire approach to policy design, priorities, and management. Other important forces are working to reshape American society, our place in the world, and the role of the federal government. These include evolving defense and homeland security policies, increasing global interdependence, and advances in science and technology. In addition, the federal government increasingly relies on new networks and partnerships to achieve critical results and develop public policy, often including multiple federal agencies, domestic and international non- or quasi-government organizations, for-profit and not-for-profit contractors, and state and local governments. If government is to effectively address these trends, it cannot accept all of its existing programs, policies, and activities as “givens.” Many of our programs were designed decades ago to address earlier challenges.
Outmoded commitments and operations constitute an encumbrance on the future that can erode the capacity of the nation to better align its government with the needs and demands of a changing world and society. Accordingly, reexamining the base of all major existing federal spending and tax programs, policies, and activities by reviewing their results and testing their continued relevance and relative priority for our changing society is an important step in the process of assuring fiscal responsibility and facilitating national renewal. A periodic reexamination offers the prospect of addressing emerging needs by weeding out programs and policies that are redundant, outdated, or ineffective. Those programs and policies that remain relevant could be updated and modernized by improving their targeting and efficiency through such actions as redesigning allocation and cost-sharing provisions, consolidating facilities and programs, and streamlining and reengineering operations and processes. The tax policies and programs financing the federal budget can also be reviewed with an eye toward both the overall level of revenues that should be raised as well as the mix of taxes that are used. We recognize that taking a hard look at existing programs and carefully reconsidering their goals and financing are challenging tasks. Reforming programs and activities leads to winners and losers, notwithstanding demonstrated shortfalls in performance and design. Moreover, given the wide range of programs and issues covered, the process of rethinking government programs and activities may take a generation to unfold. We are convinced, however, that reexamining the base offers compelling opportunities to both redress our current and projected fiscal imbalance while better positioning government to meet the new challenges and opportunities of this new century. 
In this regard, the management and performance reforms enacted by Congress in the past 15 years have provided new tools to gain insight into the financial, program, and management performance of federal agencies and activities. The information being produced as a result can provide a strong basis to support the needed review, reassessment, and reprioritization process. While this kind of oversight and reexamination is never easy, it is helped by the availability of credible performance information focusing on the outcomes achieved with budgetary resources and other tools. Performance budgeting can help enhance the government’s capacity to assess competing claims in the budget by arming budgetary decision makers with better information on the results of individual programs as well as of entire portfolios of tools and programs addressing common outcomes. To facilitate application of performance budgeting in reexamination, it is useful to understand the current landscape. Going forward, decision makers need a road map—grounded in lessons learned from past initiatives—that defines what successful performance budgeting would look like and identifies the key elements and potential pitfalls on the critical path to success. Central to this is an understanding of what is meant by success in performance budgeting and the key factors that influence that success. Performance budgeting efforts are not new at the federal level. In the 1990s, Congress and the executive branch drew on lessons learned from 50 years of efforts to link resources to results to lay out a statutory and management framework that provides the foundation for strengthening government performance and accountability. With GPRA as its centerpiece, these reforms also laid the foundation for performance budgeting by establishing infrastructures in the agencies to improve the supply of information on performance and costs.
GPRA is designed to inform congressional and executive decision making by providing objective information on the effectiveness and efficiency of federal programs and spending. A key purpose of GPRA is to create closer and clearer links between the process of allocating scarce resources and the expected results to be achieved with those resources. Importantly, GPRA requires both a connection to the structures used in congressional budget presentations and consultation between the executive and legislative branches on agency strategic plans. Because these requirements are grounded in statute, this gives Congress an oversight stake in GPRA's success. Over a decade after its enactment, GPRA has succeeded in expanding the supply of performance information and institutionalizing a culture of performance as well as providing a solid foundation for more recent budget and performance initiatives. In part, this success can be attributed to the fact that GPRA melds the best features, and avoids the worst, of its predecessors. Building on GPRA’s foundation, the current administration has made the integration of performance and budget information one of five governmentwide management priorities under its PMA. PART is central to the Administration’s budget and performance integration initiative. OMB describes PART as a diagnostic tool meant to provide a consistent approach to assessing federal programs as part of the executive budget formulation process. It applies 25 questions to all “programs” under four broad topics: (1) program purpose and design, (2) strategic planning, (3) program management, and (4) program results (i.e., whether a program is meeting its long-term and annual goals) as well as additional questions that are specific to one of seven mechanisms or approaches used to deliver the program. 
Drawing on available performance and evaluation information, the PART questionnaire attempts to determine the strengths and weaknesses of federal programs with a particular focus on individual program results and improving outcome measures. PART asks, for example, whether a program’s long-term goals are specific, ambitious, and focused on outcomes, and whether annual goals demonstrate progress toward achieving long-term goals. It is designed to be evidence-based, drawing on a wide array of information, including authorizing legislation, GPRA strategic plans and performance plans and reports, financial statements, inspector general and GAO reports, and independent program evaluations. Since the fiscal year 2004 budget cycle, OMB has applied PART to 607 programs (about 60 percent of the federal budget) and given each program one of four overall ratings: (1) “effective,” (2) “moderately effective,” (3) “adequate,” or (4) “ineffective” based on program design, strategic planning, management, and results. A fifth rating, “results not demonstrated,” was given—independent of a program’s numerical score—if OMB decided that a program’s performance information, performance measures, or both were insufficient or inadequate. During the next 2 years, the Administration plans to assess all remaining executive branch programs with limited exceptions. As I testified before this subcommittee in April, PMA and its related initiatives, including PART, demonstrate the Administration’s commitment to improving federal management and performance. By calling attention to successes and needed improvements, the focus that these initiatives bring is certainly a step in the right direction, and our work shows that progress has been made in several important areas over the past several years. However, it is not clear that PART has had any significant impact on congressional authorization, appropriations, and oversight activities to date.
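As a rough sketch of how a weighted questionnaire like PART could roll up into the overall rating labels described above: the four section weights and the score cutoffs below are illustrative assumptions for this sketch, not OMB's published methodology.

```python
# Illustrative roll-up of PART-style section scores into an overall rating.
# The weights and cutoffs here are hypothetical; OMB's actual formula may differ.

WEIGHTS = {
    "purpose_design": 0.20,
    "strategic_planning": 0.10,
    "program_management": 0.20,
    "program_results": 0.50,  # results weighted most heavily in this sketch
}

def overall_rating(section_scores, measures_adequate=True):
    """Map 0-100 section scores to one of the five PART rating labels."""
    if not measures_adequate:
        # "results not demonstrated" is assigned independent of the
        # numerical score, as the text notes.
        return "results not demonstrated"
    total = sum(WEIGHTS[k] * section_scores[k] for k in WEIGHTS)
    if total >= 85:
        return "effective"
    if total >= 70:
        return "moderately effective"
    if total >= 50:
        return "adequate"
    return "ineffective"

scores = {"purpose_design": 90, "strategic_planning": 80,
          "program_management": 85, "program_results": 75}
print(overall_rating(scores))  # weighted total here is 80.5
print(overall_rating(scores, measures_adequate=False))
```

The separate `measures_adequate` flag captures the report's observation that the fifth rating bypasses the numerical score entirely whenever the underlying performance information is judged insufficient.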
In order for such efforts to hold appeal beyond the executive branch, developing credible performance information and garnering congressional buy-in on what to measure and how to present this information are critical. Otherwise, as some congressional subcommittees have noted, PART is unlikely to play a major role in the authorization, appropriations, and oversight processes. Prior initiatives have left us with some lessons about how to build a sustainable approach to linking resources to results. Before I discuss those critical factors, let me touch briefly on the importance of realistic expectations. I say this because previous management reforms have been doomed by inflated and unrealistic expectations. Performance budgeting can do a great deal: it can help policymakers address important questions such as whether programs are contributing to their stated goals, are well-coordinated with related initiatives at the federal level or elsewhere, and are targeted to the intended beneficiaries. However, it should not be expected to provide the answers to all resource allocation questions in some automatic or formula-driven process. Performance problems may well prompt budget cuts, program consolidations, or eliminations, but they may also inspire enhanced investments and reforms in program design and management if the program is deemed to be of sufficiently high priority to the nation. Conversely, even a program that is found to be exceeding its performance expectations can be a candidate for budgetary cuts if it is a lower priority than other competing claims in the process. The determination of priorities is a function of competing values and interests that may be informed by performance information but also reflects other factors, such as the overall budget situation, the state of the economy, security needs, equity considerations, unmet societal needs, and the appropriate role of the federal government in addressing any such needs.
Accordingly, we found that while PART scores for fiscal year 2004 were generally positively related to the Administration’s proposed funding changes in discretionary programs, the scores did not automatically determine funding changes. That is, for some programs rated “effective” or “moderately effective” OMB recommended funding decreases, while for several programs judged to be “ineffective” OMB recommended additional funding in the President’s budget request with which to implement changes. As we have noted, success in performance budgeting should not be defined only by its impact on funding decisions but also on the extent to which it helps inform Congress and executive branch policy decisions and improve program management. In this regard, for the fiscal year 2004 PART assessments we reported that over 80 percent of the PART recommendations focused on improving program management, assessment, and design; less than 20 percent related to funding. We also reported that OMB’s ability to use PART to identify and address future program improvements and measure progress—a major purpose of PART—is predicated on its ability to oversee the implementation of PART recommendations. At the request of the Chairman of the House Subcommittee on Government Management, Finance, and Accountability, Committee on Government Reform, we are currently conducting a review of (1) OMB's and agencies' perspectives on the effects PART recommendations are having on agency operations and results and issues encountered in responding to PART recommendations; (2) OMB's leadership and direction in ensuring an integrated, complementary relationship between PART and GPRA, including how OMB is assessing performance when multiple programs or agencies are involved in meeting goals and objectives; and (3) steps OMB has taken to involve Congress in the PART process. Let me now turn to three factors we believe are critical to sustaining successful performance budgeting over time: 1. 
building a supply of credible performance information, 2. encouraging demand for that information and its use in congressional processes by garnering stakeholder buy-in, and 3. taking a comprehensive and crosscutting approach to assessing related programs and policies. The credibility of performance information, including related cost data, and the ability of federal agencies to produce credible evaluations of their programs’ effectiveness are key to the success of performance budgeting. As I testified before this subcommittee in April, this type of information is critical for effective performance measurement to support decisions in areas ranging from program efficiency and effectiveness to sourcing and contract management. To be effective, this information must not only be timely and reliable, but also both useful and used. Agencies are expected to implement integrated financial and performance management systems that routinely produce information that is (1) timely—to measure and affect performance, (2) useful—to make more informed operational and investing decisions, and (3) reliable—to ensure consistent and comparable trend analysis over time and to facilitate better performance measurement and decision making. Producing timely, useful, and reliable information is critical for achieving the goals that Congress established in GPRA, the Chief Financial Officers (CFO) Act of 1990, and other federal financial management reform legislation. Unfortunately, as our work on PART and GPRA implementation shows, the credibility of performance data has been a long-standing weakness. Likewise, our work has noted limitations in the quality of agency evaluation information and in agency capacity to produce rigorous evaluations of program effectiveness. We have previously reported that agencies have had difficulty assessing many program outcomes that are not quickly achieved or readily observed and contributions to outcomes that are only partly influenced by federal funds. 
Furthermore, our work has shown that few agencies deployed the rigorous research methods required to attribute changes in underlying outcomes to program activities. Our 2003 review of agencies’ evaluation capacity identified four main elements that can be used to develop and improve evaluation efforts. They are (1) an evaluation culture, (2) data quality, (3) analytic expertise, and (4) collaborative partnerships. OMB, through its development and use of PART, has provided agencies with a powerful incentive for improving data quality and availability. Agencies may make greater investments in improving their capacity to produce and procure quality information if agency program managers perceive that program performance and evaluation data will be used to make actual resource decisions throughout the resource allocation process and can help them get better results. Improvements in the quality of performance data and the capacity of federal agencies to perform program evaluations will require sustained commitment and investment of resources. Over the longer term, failing to discover and correct performance problems can be much more costly. More importantly, it is critical that budgetary investments in this area be viewed as part of a broader initiative to improve the accountability and management capacity of federal agencies and programs. Federal performance and accountability reforms have given much attention to increasing the supply of performance information over the past several decades. However, improving the supply of performance information is in and of itself insufficient to sustain performance management and achieve real improvements in management and program results. Rather, it needs to be accompanied by a demand for and use of that information by decision makers and managers alike. Key stakeholder outreach and involvement is critical to building demand and, therefore, success in performance budgeting. 
Lack of consensus by a community of interested parties on goals and measures and the way that they are presented can detract from the credibility of performance information and, subsequently, its use. Fifty years of past executive branch efforts to link resources with results have shown that any successful effort must involve Congress as a full partner. We have previously reported that past performance budgeting initiatives faltered in large part because they intentionally attempted to develop performance plans and measures in isolation from the congressional authorization, appropriations, and oversight processes. While congressional buy-in is critical to sustain any major management initiative, it is especially important for performance budgeting given Congress’s constitutional role in setting national priorities and allocating the resources to achieve them. Obtaining buy-in on goals and measures from a community of interested parties is critical to facilitating use of performance information in resource allocation decisions. PART was designed for and is used in the executive branch budget preparation and review process; as such, the goals and measures used in PART must meet OMB’s needs. However, the current statutory framework for strategic planning and reporting is GPRA—a broader process involving the development of strategic and performance goals and objectives to be reported in strategic and annual plans. OMB’s desire to collect performance data that better align with budget decision units meant that the fiscal year 2004 PART process became a parallel, competing structure to the GPRA framework. Although OMB acknowledges that GPRA was the starting point for PART, the emphasis is shifting. Over time, as the performance measures developed for PART are used in the executive budget process, these measures may come to drive agencies’ strategic planning processes. Opportunities exist to strengthen PART’s integration with the broader GPRA planning process.
Some tension between the level of stakeholder involvement in the internal deliberations surrounding the development of PART measures and the broader consultations more common to the GPRA strategic planning process is inevitable. Compared to the relatively open-ended GPRA process, any budget formulation process is likely to seem closed. However, if PART is to be accepted as other than one element in the development of the President’s budget proposal, congressional understanding and acceptance of the tool and its analysis will be critical. As part of the executive branch budget formulation process, PART must clearly serve the President’s interests. However, measures developed solely by the executive branch for the purposes of executive budget formulation may discourage their use in other processes, such as internal agency management and the congressional budget process, especially if measures that serve these other processes are eliminated through the PART process. PART’s focus on outcome measures may ignore stakeholders’ needs for other types of measures, such as output and workload information. Our recent work examining performance budgeting efforts at both the state and federal levels revealed that appropriations committees consider workload and output measures important for making resource allocation decisions. Workload and output measures lend themselves to the budget process because workload measures, in combination with cost-per-unit information, can be used to help develop appropriation levels and legislators can more easily relate output information to a funding level to help define or support a desired level of service. Like PART, GPRA states a preference for outcome measures. However, in practice, GPRA also recognizes the need to develop a range of measures, including output and process measures.
Since different stakeholders have different needs and no one set of goals and measures can serve all purposes, PART can and should complement GPRA but should not replace it. Moreover, as we have previously reported, several appropriations subcommittees have cited the need to link PART with congressional oversight. For example, the House Report accompanying the Transportation and Treasury Appropriations Bill for fiscal year 2004 included a statement in support of PART, but noted that the Administration’s efforts must be linked with the oversight of Congress to maximize the utility of the PART process, and that if the Administration treats as privileged or confidential the details of its rating process, it is less likely that Congress will use those results in deciding which programs to fund. Moreover, the subcommittee said it expects OMB to involve the House and Senate Committees on Appropriations in the development of the PART ratings at all stages in the process. In our January 2004 report on PART, we suggested steps for both OMB and Congress to take to strengthen the dialogue between executive branch officials and key congressional stakeholders, and OMB generally agreed. We recommended that OMB reach out to key congressional committees early in the PART selection process to gain insight about which program areas and performance issues congressional officials consider warrant PART review. Engaging Congress early in the process may help target reviews with an eye toward those areas most likely to be on the agenda of Congress, thereby better ensuring the use of performance assessments in resource allocation processes throughout government. The importance of getting buy-in for successful performance budgeting can be seen in the experience of OMB’s recent efforts to restructure budget accounts. 
While OMB staff and agency officials credited budget restructuring with supporting results-oriented management, the budget changes did not meet the needs of some congressional appropriations committees. While congressional appropriations subcommittee staff expressed general support for budget and performance integration, they objected to changes that substituted rather than supplemented information traditionally used for appropriations and oversight purposes. As we said in our February 2005 report on this issue, the greatest challenge of budget restructuring may be discovering ways to reflect both the broader planning perspective that can add value to budget deliberations and foster accountability in ways that Congress considers appropriate for meeting its authorizing, appropriations, and oversight objectives. Going forward, infusing a performance perspective into budget decisions may only be achieved when the underlying information becomes more credible, accepted, and used by all major decision makers. Thus, Congress must be considered a full partner in any efforts to infuse a performance budget perspective into budget structure and budget deliberations. In due course, once the goals and underlying data become more credible and are used by Congress, budget restructuring may become a more compelling tool to advance budget and performance integration. While existing performance budgeting initiatives provide a foundation for a baseline review of federal policies, programs, functions, and activities, several changes are in order to support the type of reexamination needed. For example, PART focuses on individual programs, but key outcome-oriented performance goals—ranging from low-income housing to food safety to counterterrorism—are addressed by a wide range of discretionary, entitlement, tax, and regulatory approaches that cut across a number of agencies.
While PART’s program-by-program approach fits with OMB’s agency-by-agency budget reviews, it is not well suited to addressing crosscutting issues or to looking at broad program areas in which several programs address a common goal. The evaluation of programs in isolation may be revealing, but a broader perspective is necessary for an effective overall reexamination effort. It is often critical to understand how each program fits with a broader portfolio of tools and strategies—such as regulations, direct loans, and tax expenditures—to accomplish federal missions and performance goals. Such an analysis is necessary to capture whether a program complements and supports other related programs, whether it is duplicative and redundant, or whether it actually works at cross-purposes to other initiatives. OMB reported on a few crosscutting PART assessments in the fiscal year 2006 budget and plans to conduct additional crosscutting reviews in 2005. However, we would urge a more comprehensive and consistent approach to evaluating all programs relevant to common goals. Such an approach would require assessing the performance of all programs related to a particular goal—including tax expenditures and regulatory programs—using a common framework. Our federal tax system includes hundreds of billions of dollars of annual expenditures—the same order of magnitude as total discretionary spending. Yet relatively little is known about the effectiveness of tax incentives in achieving the objectives intended by Congress. PART, OMB’s current framework for assessing the performance of federal programs, has not been applied to tax expenditures. Assessing complete portfolios of tools related to key outcome-oriented goals is absolutely critical to the type of reexamination needed. The governmentwide performance plan required by GPRA could help address this issue. GPRA requires the President to include in his annual budget submission a federal government performance plan. 
Congress intended that this plan provide a “single cohesive picture of the annual performance goals for the fiscal year.” The governmentwide performance plan could help Congress and the executive branch address critical federal performance and management issues, including redundancy and other inefficiencies in how we do business. It could also provide a framework for any restructuring efforts. Unfortunately, this provision has not been fully implemented. Instead, OMB has used the President’s budget to present high-level information about agencies and certain program performance issues. The agency-by-agency focus of the budget does not provide the integrated perspective of government performance envisioned by GPRA. If the governmentwide performance plan were fully implemented, it could also provide a framework for congressional oversight and other activities. In that regard, we have also suggested that Congress consider the need to develop a more systematic vehicle for communicating its top performance concerns and priorities; develop a more structured oversight agenda to prompt a more coordinated congressional perspective on crosscutting performance issues; and use this agenda to inform its authorization, appropriations, and oversight processes. One possible approach would involve developing a congressional performance resolution identifying the key oversight and performance goals that Congress wishes to set for its own committees and for the government as a whole. Such a resolution could be developed by modifying the current congressional budget resolution, which is already organized by budget function. Initially, this may involve collecting the “views and estimates” of authorization and appropriations committees on priority performance issues for programs under their jurisdiction and working with such crosscutting committees as this committee, the House Committee on Government Reform, and the House Committee on Rules. 
In addition, we have previously recommended that Congress consider amending GPRA to require the President to develop a governmentwide strategic plan to provide a framework to identify long-term goals and strategies to address issues that cut across federal agencies. A strategic plan for the federal government, supported by key national outcome-based indicators to assess the government’s performance, position, and progress, could be a valuable tool for governmentwide reexamination of existing programs, as well as proposals for new programs. Developing a strategic plan can help clarify priorities and unify stakeholders in the pursuit of shared goals. Therefore, developing a strategic plan for the federal government would be an important first step in articulating the role, goals, and objectives of the federal government. If fully developed, a governmentwide strategic plan can potentially provide a cohesive perspective on the long-term goals of the federal government and provide a much-needed basis for fully integrating, rather than merely coordinating, a wide array of federal activities. The development of a set of key national indicators could be used as a basis to inform the development of governmentwide strategic and annual performance plans. The indicators could also link to and provide information to support outcome-oriented goals and objectives in agency-level strategic and annual performance plans. Successful strategic planning requires the involvement of key stakeholders. Thus, it could serve as a mechanism for building consensus. Further, it could provide a vehicle for the President to articulate long-term goals and a road map for achieving them. In addition, a strategic plan can provide a more comprehensive framework for considering organizational changes and making resource decisions. 
The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. In addition to the serious long-term fiscal challenges facing the nation, a number of overarching trends, such as defense and homeland security policies, increasing global interdependence, and advances in science and technology, drive the need to reconsider the proper role for the federal government in the 21st century, including what it does, how it does it, who does it, and how it gets financed. This will mean bringing a variety of tools and approaches to bear. In our February 2005 report on 21st century challenges, we outline a number of approaches that could facilitate a reexamination effort. Today, I’ve discussed several of these, as well as some additional steps that I believe are necessary for an effective reexamination effort. Much is at stake in the development of a collaborative performance budgeting process. This is an opportune time for the executive branch and Congress to consider and discuss how agencies and committees can best take advantage of and leverage the new information and perspectives coming from the reform agenda under way in the executive branch. Through PMA and its related initiatives, including PART, the Administration has taken important steps in the right direction by calling attention to successes and needed improvements in federal management and performance. Some program improvements can come solely through executive branch action, but for PART to meet its full potential the assessments it generates must also be meaningful to and used by Congress and other stakeholders. 
Successful integration of inherently separate but interrelated strategic planning and performance budgeting processes is predicated on (1) ensuring that the growing supply of performance information is credible, useful, reliable, and used, (2) increasing the demand for this information by developing goals and measures relevant to the large and diverse community of stakeholders in the federal budget and planning processes, and (3) taking a comprehensive and crosscutting approach. It will only be through the continued attention of the executive branch and Congress that progress can be sustained and, more importantly, accelerated. This effort can both strengthen the budget process itself and provide a valuable tool to facilitate a fundamental reexamination of the base of government. We recognize that this process will not be easy. Given the wide range of programs and issues covered, the process of rethinking the full range of federal government programs, policies, and activities could take a generation or more to complete. Regardless of the specific combination of reexamination approaches adopted, success will require not only the factors listed above but also sustained leadership throughout the many stages of the policy process. In addition, for comprehensive reexamination of government programs and policies, clear and transparent processes for engaging the broader public in the debate are also needed. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or the other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Paul L. Posner at (202) 512-9573 or posnerp@gao.gov. Individuals making key contributions to this testimony include Jacqueline Nowicki, Tiffany Tanner, and Benjamin Licht. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of its work to improve the management and performance of the federal government, GAO monitors progress and continuing challenges in performance budgeting and the Administration's related initiatives, such as the Program Assessment Rating Tool (PART). In light of the nation's long-term fiscal imbalance and other emerging 21st century challenges, we have also reported that performance budgeting can help facilitate a needed reexamination of what the federal government does, how it does it, who does it, and how it is financed in the future. GAO remains committed to working with Congress and the Administration to help address these important and complex issues. The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. A number of overarching trends--including the nation's long-term fiscal imbalance--drive the need to reexamine what the federal government does, how it does it, who does it, and how it gets financed. This will mean bringing a variety of tools and approaches to bear on the situation. Performance budgeting holds promise as a means for facilitating a reexamination effort. It can help enhance the government's capacity to assess competing claims for federal dollars by arming decision makers with better information both on the results of individual programs as well as on entire portfolios of tools and programs addressing common goals. However, it is important to remember that in a political process, performance information should be one, but will not be the only, factor in decision making. Existing performance budgeting efforts, such as PART, provide a means for facilitating a baseline review of certain federal policies, programs, functions, and activities. 
Successful application of these initiatives in this reexamination process rests on building a supply of credible and reliable performance information, encouraging demand for that information by garnering congressional buy-in on what is measured and how it is presented, and developing a comprehensive and crosscutting approach to assessing the performance of all major federal programs and policies encompassing spending, tax expenditures, and regulatory actions. Through the President's Management Agenda and its related initiatives, including PART, the Administration has taken important steps in the right direction by calling attention to successes and needed improvements in federal management and performance. However, it is not clear that PART has had any significant impact on authorization, appropriations, and oversight activities to date. It will only be through the continued attention of the executive branch and Congress that progress can be accelerated and sustained. Such an effort can strengthen the budget process itself and provide a valuable tool to facilitate a fundamental reexamination of the base of government. We recognize that this process will not be easy. Furthermore, given the wide range of programs and issues covered, the process of rethinking government programs and activities could take a generation or more to complete.
The enactment of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 dramatically altered the nation’s system to provide assistance to the poor. The 1996 act replaced the existing entitlement program for poor families with block grants to the states to provide temporary assistance for needy families under the TANF Program. Also, under the TANF Program, states provide cash assistance to needy families with children and provide parents with job preparation, work, and support services, including transportation benefits. The 1996 act gave states flexibility in designing their programs to best provide those benefits and services. HHS’s Administration for Children and Families manages the TANF Program and has provided about $16.5 billion annually for states to use to assist needy families to become self-sufficient, including about $800 million annually for transportation benefits. In addition, Labor’s Employment and Training Administration administers programs authorized under WIA, with about $4 billion in fiscal year 2002 appropriations to provide individuals with job training and placement services. The WIA-sponsored programs also provide transportation services to take their clients to program-supported services, such as job training and placement. The TANF- and WIA-sponsored transportation efforts focus on their program clients, while the Job Access Program attempts to improve transportation for low-income people in general. With the enactment of TEA-21, DOT became a sponsor of welfare-to-work initiatives. The Job Access Program addresses the transportation aspect of welfare reform by assisting low-income people in traveling to work and other employment-related activities. Many low-income people and welfare recipients do not have access to cars, and existing public transportation systems cannot always bridge the gap between where low-income people live and where jobs are located.
In addition, many entry-level jobs require shift work in evenings or on weekends, when public transportation services are either limited or unavailable. When the Job Access Program was established, $750 million was authorized from fiscal years 1999 through 2003 for the Program. Appropriations have totaled $375 million through fiscal year 2002, with $75 million appropriated in each of the fiscal years 1999 and 2000, and $100 million and $125 million appropriated for fiscal years 2001 and 2002, respectively. The Job Access Program was established to close gaps in transportation services for low-income people in places where and at times when such transportation was not available. The program addressed these gaps by funding, through grants, new transportation and related services and expanding existing services to help low-income people access employment opportunities and related support services. TEA-21 identified a variety of factors for DOT to consider in funding Job Access projects, such as the need for Job Access services as evidenced by the percentage of the population in the area receiving welfare benefits; the demonstrated collaboration between the grantee and other stakeholders, such as other transportation and human service agencies; and the extent to which an applicant identified long-term financing strategies that would support the Job Access services after the end of the grant. Job Access grantees are required to provide at least 50 percent matching funds from other sources, which may include federal sources of funds available for transportation services, such as the TANF or WIA programs. DOT has consistently used two goals that it synthesized from TEA-21 as the primary criteria for evaluating, selecting, and funding Job Access projects to be funded through program grants. 
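The appropriation figures above can be cross-checked with a short, illustrative calculation. The dollar amounts come from the text; the `max_project_budget` helper and the assumption that the 50 percent match is measured against total project cost are ours, not part of any official GAO analysis.

```python
# Illustrative tally of the Job Access Program funding figures cited above.
# Dollar amounts (in millions) come from the testimony; this script is only
# a sanity check, not an official GAO calculation.
appropriations = {1999: 75, 2000: 75, 2001: 100, 2002: 125}

total = sum(appropriations.values())
print(f"Appropriated through FY2002: ${total} million")  # $375 million

authorized = 750  # authorized for fiscal years 1999 through 2003
print(f"Share of authorization appropriated: {total / authorized:.0%}")  # 50%

# Grantees must provide at least 50 percent matching funds. Assuming the
# match is measured against total project cost (our interpretation), a
# federal Job Access grant can cover at most half of a project's budget.
def max_project_budget(grant_millions: float) -> float:
    return grant_millions * 2

print(max_project_budget(1.0))  # a $1 million grant supports at most $2 million
```

As the tally shows, appropriations through fiscal year 2002 amounted to half of the five-year authorization, consistent with the program having one authorized fiscal year (2003) remaining.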
Those goals are that funded Job Access projects and services should (1) provide transportation and related services in urban, suburban, and rural areas to assist low-income individuals, including welfare recipients, with access to employment and related services, such as child care and training, and (2) increase collaboration among such parties as transportation providers, human service agencies, employers, and others in designing, funding, and delivering those transportation services. In selecting Job Access projects, DOT also considered the extent to which the projects would be financially sustainable after the end of Job Access Program funding. DOT has not reported to the Congress on the results of an evaluation of the Job Access Program, as TEA-21 required. DOT therefore is missing an important opportunity to provide information that could be useful as the Congress considers whether to reauthorize the program in 2003. FTA Program officials are not certain about when the report will be submitted to the Congress and have not established a date for doing so. DOT has delayed completion of its evaluation of the Job Access Program, and the date that it will be submitted to the Congress is uncertain. TEA-21 required that DOT evaluate the Job Access Program and submit a report to the Congress by June 2000. FTA officials stated several times, both to us and to the Congress, their intention to complete and submit the required evaluation report. DOT’s delays in issuing the report have cost it an important opportunity to inform the Congress about the effectiveness of the Job Access Program as the Congress begins its debate on the program’s reauthorization. In addition, as shown below, we have repeatedly reported on and emphasized the need for DOT to evaluate the effectiveness of the Job Access Program.
In May 1998, before the program was enacted into law, we reported that DOT lacked specific information for assessing how a Job Access Program would improve mobility for low-income workers, and we recommended that DOT establish specific objectives. In December 1998, we reported that DOT was in the process of establishing an evaluation plan for the program. In November 1999, we noted that DOT had not yet completed a plan for evaluating the program, although we had recommended that it do so. In December 2000, we reported that the evaluation plan had been completed and that for the purposes of reporting under the Government Performance and Results Act of 1993, DOT had established a goal of serving 4,050 new employment sites in fiscal year 2000, and 8,050 in fiscal year 2001. On April 17, 2002, before the Subcommittee on Highways and Transit, House Committee on Transportation and Infrastructure, we testified that DOT had not yet prepared the required evaluation report and had no definite date for submitting the study. At that hearing, DOT officials stated that the report would be completed and sent to the Congress by June 2002. Throughout our study, FTA program officials discussed the reasons for the delays in issuing the report to the Congress. FTA program officials explained that to meet the requirement that they submit an evaluative report to the Congress by June 2000, they asked the grantees to submit data regarding the employment sites served by Job Access projects as well as the numbers of employers and entry-level jobs at those sites. They explained that they found that only about two-fifths of the data they obtained from grantees proved to be useful, because the rest of the data were inconsistently or inaccurately reported. By the summer of 2001, DOT officials decided that the data were out-of-date. They decided to wait for new data to be reported to them and to redraft the report to the Congress using the new data. 
As of the end of our review, FTA program officials continued to be unsure of the date the evaluative report will be submitted to the Congress. In November 2002, they said that they had completed their draft report and that the draft was being reviewed by the Office of the Secretary of Transportation for approval before the report could be sent to the Office of Management and Budget for its approval. FTA Program officials did not provide us with an estimated date for submitting the draft report to the Office of Management and Budget and the final report to the Congress. When we testified on April 17, 2002, DOT had planned to use only employment sites as a measure of program effectiveness. We testified that this measure presents only a partial picture of program effectiveness in meeting program goals for the following reasons: First, employment sites attempt to measure whether the Job Access Program establishes effective transportation services that help low-income people reach jobs—only one of the program goals. However, employment sites do not address the other program goal of whether projects were designed and implemented in a collaborative fashion involving the grantee and stakeholders. In addition, employment sites do not address the selection criterion of whether Job Access projects can be financially sustained after the end of program funding. Second, the use of employment sites does not fully capture whether the Job Access Program effectively addresses the program goal of providing transportation-related services to low-income people. Employment sites do not capture such information as the number of jobs available at a site or the number of Job Access beneficiaries using a Job Access service over a period of time. Grantees that responded to our survey reported that they are using additional indicators of and data on the performance of Job Access services.
These grantees reported that, for internal reporting purposes, they collect a variety of data that can indicate the effectiveness of Job Access services. These survey results are shown in table 1. In addition to the measures listed in the table, experts we contacted suggested that DOT consider such measures as (1) the number of new and expanded transportation services (including data on service frequency, hours, and miles); (2) the level of collaboration achieved; and (3) the beneficiaries’ views of the effectiveness of Job Access services. In 1998, DOT funded a study that also identified many of these same measures for evaluating prospective Job Access projects. After we discussed with FTA program officials their plans to use only employment sites as performance measures, they stated that they planned to issue an evaluative report to the Congress that would contain information in addition to employment sites. In September 2002, we requested that FTA program officials provide a draft of the report for our review or a description of the evaluative methodology. In response, on October 4, 2002, they provided us with a memorandum that listed the contents of the report they proposed for the evaluative report to the Congress, including a list of the performance measures they proposed to use. According to FTA program officials and this document, the evaluative report would contain the results of a study by the University of Illinois, including surveys of passengers who were riding Job Access vehicles and studies of Job Access grantees. FTA program officials told us that they hoped to profile the services provided by the Job Access Program, including data on the geographic distribution of the services, the types of transportation-related services provided, the costs of the services, and the cost per ride. 
They also proposed an assessment of the program that would include such indicators as the number of employment sites, the number of jobs, and the job support services made available by Job Access projects. Other proposed measures include Job Access project ridership and user characteristics, such as users’ age, income, car ownership, driver’s license status, and work history, and information about users’ assessments of the importance of the Job Access service. According to the measures that FTA program officials listed, their report might address the second objective of the program by conveying information on Job Access planning partnerships between transportation and human service providers as well as community representatives and employers. The report also might address the financial partnerships established to fund Job Access services, such as the partnerships involving transportation providers, human service providers, and private and not-for-profit organizations. Notwithstanding the information given to us by FTA program officials about the proposed contents for their report to the Congress, we continue to believe that the contents of the final report are uncertain, including whether the report would evaluate the program against both program goals and the selection criterion, for the following reasons: First, FTA’s list of the performance indicators that it proposed for its report to the Congress did not specify how FTA would collect these additional performance data—an important consideration given the results of FTA’s earlier efforts to collect performance data from the Job Access grantees. Second, the document did not contain sufficient information for us to comment on the adequacy of the report that FTA program officials propose to submit to the Congress or the rigor of the proposed evaluative methodology. For example, the document did not specify how the data would be used to address the goals and the selection criterion of the Job Access Program. 
Third, as previously stated, FTA program officials must still submit the draft through a review process. The reviewing parties may not approve the contents of the report as proposed by FTA program officials. In awarding over $355 million in grants in 42 states through fiscal year 2002, the Job Access Program funded such services as extending existing bus routes to serve low-income populations and implementing services that provide information to clients about available transportation services and their use. Moreover, the program has increased planning, financial, and service delivery collaboration among local transportation providers, human service and job placement agencies, employers, and others in providing access to employment and employment support services. However, the ability of many Job Access projects to be financially sustainable after the end of program assistance (a criterion FTA considered in the selection of Job Access projects) is uncertain. In addition, some states have not used WIA funds as Job Access matching funds, because specific guidance on the use of WIA funds has not been issued. Through Job Access grants, the program served a broad range of geographic areas, including large and medium-size cities as well as small towns and rural areas. Most grantees—about two-thirds of them—are traditional transit providers, such as metropolitan transit authorities or bus companies. The remaining grantees, such as local human service agencies, local housing agencies, and faith- and community-based organizations, do not provide transit services as a primary activity. As shown in table 2, Job Access grantees used a variety of approaches to provide transportation services that assist low-income people to access job opportunities. Many Job Access projects involved expanding existing transit resources, such as bus routes. 
On the basis of our analysis of project documentation, about 51 percent of the 181 grantees selected in fiscal year 1999 modified an existing fixed transit route by adding new areas served or by enhancing the frequency of the service, while 43 percent added entirely new bus routes to serve the needs of low-income people. For example, the Santa Rosa, California, transit agency started a new route that provides bus service from a low-income neighborhood to employment locations on the other side of town and to training centers en route. According to transit officials, this service will eventually be incorporated into the existing transit network once their Job Access grant ends. According to our analysis, grantees used a variety of transportation modes—in particular, vans, buses, or rail—to provide those transportation services for low-income people. Forty-one percent of the Job Access grantees used vans to serve low-income people. For example, because some low-income people faced problems getting to and from work during late hours, the Washington Metropolitan Area Transit Authority (WMATA) began a demand-responsive shuttle van service that operated 24 hours a day, 7 days a week for those needing transportation during late evening and early morning hours. In addition, 14 percent of the grantees used buses or rail to provide Job Access services, while 9 percent used carpools or ridesharing, and 4 percent used taxis. About one-third of the grantees provided information to help low-income people better use existing transportation resources to get to employment and related support services. Specifically, 31 percent of the grantees employed an information coordinator or information brokerage center to provide information on how to use existing transit facilities and services for travel to work, training, child care, and other support services. 
For example, since fiscal year 1999, WMATA has received about $3.2 million in Job Access funds and, among other things, created the Washington Regional Call Center, which provides a central location that eligible low-income people can call to get exact trip information. Under this same grant, Montgomery County, Maryland, used Job Access funds to provide transit information by creating a Web page for human service employment centers to use to help their clients find ways to get to work. In another example, the Kentucky Transportation Cabinet received a $2.5 million grant in fiscal year 2000 and established a centralized brokerage system to help low-income people use demand-responsive service in rural areas. Some grantees have provided innovative services for the specialized needs of low-income people or to serve special populations, as the following examples demonstrate: The Good News Garage—a community-based nonprofit association based in Burlington, Vermont—used $277,935 in Job Access funding in 2000 for a service called CommuteShare. The Good News Garage obtains, repairs, and provides used vehicles to economically disadvantaged people. The CommuteShare Program made some repaired vehicles available for carpools and demand-responsive transportation to take low-income people to and from work. According to Good News Garage officials, about 75 percent of the TANF recipients who receive cars provided by the project eventually leave TANF and become economically self-sufficient. About 190 people have participated in the project, with about 25 participating at one time. Project Renewal, a rehabilitation center for homeless men and women located in New York City, used Job Access funding of $799,337 to implement its Suburban Jobs project. Project Renewal identifies and secures job opportunities in suburban areas around New York City and places formerly homeless New Yorkers in unsubsidized employment. 
According to the project’s administrator, Suburban Jobs directs vans daily to five worksites, where employers offer at least $6.50 per hour to each participant. Project Renewal’s housing facilities as well as other nonprofit employment programs refer qualified candidates for Suburban Jobs. Project Renewal identifies appropriate employment opportunities, prepares clients for interviews, supplements public transportation through its own van service to the suburban jobsites, and provides counseling to project beneficiaries on their way to and from work. (See app. III for more information about the projects we visited.) The Job Access Program has met its goal of increasing planning, financial, and service delivery collaboration among local transportation providers, human service and job placement agencies, employers, and others in providing access to employment and employment support services. Individual Job Access grantees and welfare reform and transportation experts we contacted stated that the Job Access Program brought together transit and human service agencies that have not widely collaborated in the past. According to our survey of grantees selected in fiscal years 1999 and 2000, almost 80 percent of the 152 grantees that responded indicated that the Job Access Program increased cooperation with other transit agencies, and 88 percent indicated that the program increased cooperation with human service agencies. In addition, all but one of the nine transportation and welfare reform experts we contacted stated that this significant increase in collaboration at the grantee level was the most successful result of the Job Access Program. One expert noted that the Job Access requirement for matching funds further encouraged grantees to approach state and local agencies that administer TANF funds to use those funds as part of a project’s matching funds. 
About 58 percent of the grantees that responded to our survey indicated they used TANF funds as part of their required matching funds. On the basis of our survey and visits to Job Access grantees, coordination between grantees and state and local stakeholders to plan and implement Job Access services occurred in varied forms. In some cases, transit agencies consulted with human service agencies to design new transportation services for low-income people. In other cases, coordination included simple referrals of low-income clients from human service agencies to the Job Access grantee for information about transportation services, such as vanpools, bus routes, and demand-responsive van services. Housing authorities also collaborated with transit agency grantees to transport low-income people from public housing to jobs, training, and/or child care. In addition, transit agency grantees often partnered with local human service agencies and local workforce investment boards by sending representatives to job fairs and one-stop job placement and training facilities to train low-income people to use the transit system to commute to work. Each of the 14 grantees we visited cited increased cooperation as a program benefit, although they described varying degrees of difficulty in achieving such cooperation. Officials of state transportation and human service agencies we contacted said that applying for the Job Access grant made transit agencies aware of the need to tailor transportation services to low-income persons. Human service agency officials also said their involvement with the Job Access grant increased their awareness of the need to consider low-income persons’ transportation needs when implementing human service programs. The Capital District Transit Authority in Albany, New York, credited its Job Access project with encouraging it to develop new working relationships. 
Transit Authority officials stated that information from those agencies helped the authority redesign its bus routes to provide service that was more responsive to the needs of low-income people. WMATA officials also credited the Job Access Program with enabling them to take the lead, as the region’s largest transit agency, in coordinating Job Access services with smaller, regional transit service providers. In Louisville, Kentucky, the Transit Authority of River City coordinated with 43 different private, public, and nonprofit agencies in developing its Job Access project. The Job Access project received its matching funds from the City of Jeffersontown, Kentucky; United Parcel Service; and Kentuckiana Works—the Workforce Investment Board sponsored by Labor. The New Mexico State Highway and Transportation Department and the University of New Mexico developed several databases of publicly funded vehicles, TANF households by zip code, and jobsites to help local agencies plan transportation services for low-income people. On the other hand, Fort Worth Transit Authority officials cited the administrative burden of obtaining funds from other federal programs as their reason for being reluctant to seek matching funds from other partners. DOT agreed that the use of WIA funds as a match for Job Access grants needs to be clarified, and it plans to continue its efforts to collaborate with Labor to issue new guidance to states. Currently, it is not clear to grantees or to the state agencies that administer Labor programs that WIA funds can be used as matching funds for Job Access grants, in part because Labor, which administers WIA programs, and DOT have not issued written guidance indicating that WIA funds can be used for this purpose. 
Labor, DOT, and trade association officials we contacted agreed that existing guidelines on the use of WIA funds indicate that those funds can be used for a variety of purposes but are ambiguous on whether those funds can be used to pay for transportation services. As previously mentioned, applicants for Job Access grants must obtain at least 50 percent matching funds from other sources. Some grantees used WIA funds as Job Access project matching funds, while others did not. DOT and Labor officials are working to issue guidelines about using WIA funds for Job Access purposes. Labor issued an internal e-mail stating that WIA funds could be used as matching funds for Job Access projects; however, Labor did not disseminate this information outside the department to the state and local agencies that provide the WIA-funded services. FTA program officials told us that they are currently working with Labor to issue clarification about the use of WIA funds and have sponsored an effort by a CTAA working group for this purpose. FTA officials said that the working group queried Labor’s Employment and Training Administration about the use of WIA funds. Once answers are received, they may be published on a federal Web site, according to FTA program officials. According to experts we contacted, as well as CTAA, DOT, and Labor officials, clarification of federal guidelines could help states understand that federal funds, such as WIA funds, can be used as part of the match. According to these officials, some states, such as New York, have interpreted federal guidelines as not permitting the use of WIA funds as Job Access project matching funds. This interpretation has precluded grantees in those states from using WIA funds as a source for obtaining the necessary match for a Job Access grant. 
Using federal funds, such as WIA and TANF funds, as matching funds can be advantageous for Job Access grantees because federal funds may be more predictable and stable than nonfederal matching funds. According to the Job Access Program coordinator, federal matching funds, such as TANF and WIA funds, have this advantage because they come from formula programs that provide a predictable funding stream to states and localities, allowing funding to be maintained without disruption. Also, making more sources of funds available as a match for Job Access grants would provide additional options to grantees and improve their ability to sustain their projects. One of our previous surveys of Job Access grantees indicated that soliciting, finding, and maintaining matching funds was difficult for many grantees. For example, 34 percent of the grantees selected in fiscal year 1999 that responded to our 2000 survey reported that FTA’s lengthy grant approval process caused problems with the availability of their projects’ matching funds, and seven projects (about 4 percent of the Job Access projects) were withdrawn after losing their matching funds. The ability of many Job Access projects to be financially sustainable after the end of Job Access assistance is questionable. DOT selected Job Access projects by considering, among other factors, the ability of projects to achieve financial sustainability after the end of Job Access Program funding. More specifically, in evaluating applications for Job Access projects, FTA program officials assessed the extent to which a prospective grantee identified long-term financing strategies to support the Job Access services after the end of Job Access funding. However, FTA program officials consider financial sustainability to be secondary to other program goals. 
The results of our survey of grantees selected in fiscal years 1999 and 2000 indicate that many Job Access projects would probably be discontinued after the end of DOT funding, and many other projects would face uncertain prospects for continuation. Specifically, about 41 percent of the respondents to our survey reported that they would have to decrease the scope of their services or discontinue services altogether once their Job Access funding ends. Another 47 percent of the grantees responded that they were uncertain about their ability to continue their services. The remaining 12 percent reported that they would continue their projects at the same or expanded levels after the end of their Job Access funding. One expert explained that many Job Access services are more costly than the services for the general transit clientele; grantees would likely continue operating the Job Access services only as long as federal funding covered the associated costs. Because DOT has not evaluated the Job Access Program and reported the findings to the Congress as required by law, the department is missing an opportunity to provide important information to the Congress on a timely basis on the effectiveness of the program. FTA program officials have not provided us with a specific date for issuing the report because the draft must still be reviewed and approved by the Office of the Secretary of Transportation and the Office of Management and Budget before release to the Congress. In addition, the usefulness of the report is also in doubt: If the report contains information only on employment sites, then it would address only the first program goal of providing transportation services to low-income people while ignoring the other goal of promoting collaboration in the design, financing, and delivery of those services and the criterion of ensuring that Job Access projects are financially sustainable after the end of program funding. 
Finally, while the law and guidelines allow the use of other federal funds to match Job Access grants, neither DOT nor Labor has provided written guidance clarifying the eligibility of funds from Labor’s WIA programs for those purposes. As a result, some states will not allow grantees to use WIA funds to match Job Access grants. We recommend that the Secretary of Transportation take the following actions: Report to the Congress, as required by TEA-21, on the results of the evaluation of the Job Access Program. Include in the report to the Congress an evaluative methodology that examines the Job Access Program’s effectiveness in meeting its goals of (1) establishing transportation-related services that help low-income individuals, including welfare recipients, reach jobs and employment support services, such as child care and training, and (2) increasing planning, financial, and service delivery collaboration among local transportation providers, human services agencies, and others in providing access to employment and employment support services. The report should also examine the financial sustainability of Job Access projects after the end of Job Access Program funding. In conjunction with the Department of Labor, issue guidance to states providing clarification on the use of Workforce Investment Act funds as matching funds for Job Access projects. We provided DOT with a draft of this report for review and comment. We met with DOT and FTA program officials, who provided us with comments on our draft report. The officials generally agreed with most aspects of our report. They stated that our survey of Job Access grantees provides interesting, unique, and useful data, worthy of greater emphasis in our report. 
Nevertheless, we continue to believe that it is important to emphasize both our survey and DOT’s progress in its evaluation report because DOT risks not having the report available to the Congress in time to assist in making decisions about reauthorizing the program. With regard to our first recommendation, agency officials stated that the Job Access Program evaluation, required by TEA-21, has been drafted and is being processed through the department; however, the officials were not sure when the report would be issued. With regard to our second recommendation, the officials said that the evaluation report would fulfill the department’s statutory requirement and address most of the elements specified in the recommendation. With regard to our third recommendation, the officials indicated that DOT has been working closely with Labor to clarify issues and provide guidance related to using Labor’s WIA funds as matching funds for Job Access projects. As appropriate, we revised our report to, among other things, provide updated information on the status of DOT’s evaluative report to the Congress and DOT’s efforts to coordinate with Labor to clarify the use of WIA funds as Job Access matching funds. We are sending copies of this report to the cognizant congressional committees; the Secretary of Transportation; the Administrator, Federal Transit Administration; the Secretary of Labor; the Secretary of Health and Human Services; and other interested parties. We will make copies available to others on request, and the report will be available at no charge on GAO’s Web site at www.gao.gov. If you have any questions about this report, please call me at (202) 512-2834 or e-mail me at siggerudk@gao.gov. Key contributors to this report are listed in appendix V. The Transportation Equity Act for the 21st Century (TEA-21) requires that we report on the implementation of the Job Access and Reverse Commute (Job Access) Program. 
To date, we have issued a report on transportation and welfare reform efforts in May 1998, before the program was established, and five other reports on the program: in December 1998, November 1999, December 2000, August 2001, and December 2001. In May 1998, we reported that the proposed Job Access Program would aid the national welfare reform effort by, among other things, providing additional resources to transport welfare recipients to work. We recommended that the Department of Transportation (DOT) (1) establish specific objectives, performance criteria, and goals for measuring the program’s progress; (2) require grantees to coordinate transportation strategies with local job placement and other social service agencies; and (3) work with other federal agencies to coordinate welfare-to-work activities. TEA-21 reflected these recommendations and required appropriate action by DOT. Our December 1998 report was the first to be completed in response to the TEA-21 mandate that we periodically review and report on the implementation of the Job Access Program. We reported on DOT’s preliminary steps and strategy for implementing the Job Access Program, noting that DOT’s overall plan for implementing the program included distributing grant funds to as many areas throughout the United States as possible, subject to grant funding limits of $1 million for large urban areas and $150,000 for rural areas. DOT announced that it would use several criteria for selecting projects to fund, including a project’s effectiveness in serving a demonstrated regional need; the degree of local coordination with other regional stakeholders demonstrated by the prospective grantee in designing and identifying funding for a project; and the project’s financial plans and sustainability after the end of Job Access funding. 
An application’s compliance with each of these factors would be weighted, and DOT said that it would also award bonus points for innovative approaches to providing Job Access services. DOT also considered the geographic dispersion of projects in making award decisions. We noted that DOT made important efforts in attempting to establish communication channels with various federal welfare reform agencies through its role in a policy council that involved the White House and other agencies in formulating interagency policy decisions about the Job Access Program. DOT also formulated “Joint Guidance” with the Department of Health and Human Services (HHS) and the Department of Labor (Labor) on how the Temporary Assistance for Needy Families (TANF) Program and Welfare-to-Work Program funds could be used as matching funds to help pay for Job Access projects. Regarding evaluation of the Job Access Program, DOT initially established four types of data it would collect from grantees in assessing the performance of Job Access grants and the Job Access Program: (1) the number of new and expanded transportation services (including data on service frequency, hours, and miles); (2) the number of jobs made accessible by the Job Access project; (3) the number of people using the new service; and (4) the level of collaboration achieved. We agreed that these were good measures for monitoring Job Access projects, but DOT still needed to measure the program’s overall success by establishing programwide goals or benchmarks against which the cumulative data on new routes, new system users, and newly accessible jobs could be compared. In November 1999, we reported on the implementation of the program in fiscal year 1999, its first year. We found that DOT had implemented our second and third recommendations in carrying out TEA-21. 
Specifically, DOT had required grantees to coordinate transportation strategies with local job placement and other social service agencies and had worked with other federal agencies to coordinate welfare-to-work activities. DOT also had taken preliminary steps to implement our first recommendation that it establish specific objectives, performance criteria, and goals for measuring the program’s progress. However, we also found that DOT’s process for selecting Job Access grant proposals was not consistent in fiscal year 1999, and the basis for some selections was unclear. Our December 2000 report examined DOT’s implementation of the program in fiscal year 2000. We found that DOT had taken steps to improve its process for selecting Job Access proposals. For example, to promote greater consistency in the evaluation and selection of grantees, DOT developed a standard format for reviewing Job Access proposals and provided more detailed guidance to its reviewers. Almost 90 percent of the fiscal year 1999 Job Access grantees that responded to our survey were satisfied with the goals and intent of the program. However, 51 percent said that satisfying various standard FTA grant requirements took too long—about 9 months, on average. As a result, about one-third of respondents reported experiencing problems in obtaining matching funds. In addition, seven projects were withdrawn (about 4 percent of Job Access projects) for various reasons, including, in one case, the loss of matching funds. In that report, we also noted that DOT had implemented our recommendation that it develop specific objectives, performance criteria, and measurable goals for its Job Access Program evaluation. DOT developed a goal to increase new employment sites by 4,050 in fiscal year 2000 and by 8,050 in fiscal year 2001, and it had requested specific data from the grantees. 
Our August 2001 report provided our preliminary observations on (1) DOT’s proposal to use a formula for allocating grant funds to the states, (2) the status of obligations for the Job Access Program, and (3) DOT’s plans for reporting on the program to the Congress. First, DOT had proposed a change to the Job Access Program beginning in fiscal year 2002, under which it would allocate funding to the states via a formula, instead of to individual grantees. DOT proposed this change in response to language in the conference reports accompanying DOT’s appropriations acts for fiscal years 2000 and 2001 that designated Job Access funding for specific states, localities, and organizations. Second, as of August 7, 2001, DOT had obligated 94 percent of the funds for fiscal year 1999, 67 percent of the funds for fiscal year 2000, and 20 percent of the funds for fiscal year 2001. Third, DOT officials had missed the June 2000 deadline for a status report to the Congress but expected to report instead in September 2001. Our December 2001 report primarily addressed DOT’s response to language in conference reports that accompanied its fiscal year 2000 and fiscal year 2001 appropriations statutes. The conference reports designated specific grantees that were to receive Job Access funding; these grants involved up to three-quarters of the appropriated funding for the Job Access Program in a fiscal year. DOT elected to award grants to the designated parties in a noncompetitive fashion; however, in doing so, it was not in compliance with the provisions of the authorizing legislation—TEA-21—because the act called for a competitive grant selection process. To address this finding, we recommended that DOT implement a competitive selection process for all prospective grantees, including those that were designated by language in conference reports. 
As a result of our recommendation, DOT announced that it would implement a competitive selection process for all grantees—congressionally designated and otherwise. On April 17, 2002, we testified on the Job Access Program before the Subcommittee on Highways and Transit, House Committee on Transportation and Infrastructure. We emphasized the need for DOT to evaluate the program as directed by TEA-21. We noted that, at the time of our testimony, DOT had no estimated date for issuing the required report. Further, we stated that DOT’s use of employment sites as the sole measure of program success neither addresses key aspects of the program nor specifically relates to DOT’s criteria for selecting Job Access grantees. For its first objective, this report examines the status of DOT’s efforts to evaluate the Job Access Program and report to the Congress. For its second objective, the report discusses our findings about the Job Access Program’s efforts to (1) provide transportation and related services to allow low-income people to reach employment and related opportunities; (2) increase collaboration in the design, financing, and delivery of the services of Job Access projects; and (3) foster the financial sustainability of the services delivered by Job Access projects after program funding terminates. In responding to our first objective, we contacted FTA program officials to discuss and document their efforts to evaluate the program and to issue a report to the Congress. We monitored FTA’s plans to evaluate the program, including its proposal to use employment sites as a performance measure, and we queried program officials about the reasons for the delay in issuing the report to the Congress and their plans for expediting completion of the evaluation. 
In addition, through our discussions with program officials, transportation and welfare reform experts, and national associations, we identified prospective measures of program success and discussed the availability and appropriateness of those measures for an evaluation of the Job Access Program. In responding to our second objective, we examined the services delivered by Job Access projects in helping low-income people access jobs and job-related services. Specifically, we followed up on our previous findings, observations, and recommendations from our reports; reviewed the agency’s ongoing efforts to solicit, evaluate, and select Job Access grantees in fiscal year 2002; and examined DOT’s ongoing implementation of existing grants and projects. Our November 1999 report contained an analysis of project data regarding the transportation-related services delivered by all 181 projects selected in fiscal year 1999. Those projects constitute over 80 percent of the projects that are still operating today. We used this information to supplement our discussion of the types of services funded through Job Access grants. As part of the work for our second objective, we assessed whether the Job Access Program was increasing collaboration in the design, financing, and delivery of the services of Job Access projects—a program goal. We addressed the issue of collaborative project design, financing, and delivery in our survey of 173 Job Access grantees selected in fiscal years 1999 and 2000 that are still implementing Job Access projects. 
We also examined how the implementation of individual Job Access projects has been integrated into the transportation and human service efforts of states and local communities by observing the interactions between grantees, metropolitan planning organizations, transit agencies, and human service agencies, such as those in the Albany, New York, area; the Washington, D.C., metropolitan area; the Dallas-Fort Worth, Texas, area; the San Francisco Bay area; and the Louisville, Kentucky, area. Meeting our second objective also required that we assess whether the Job Access Program was meeting a criterion for FTA’s selection of Job Access projects—whether the projects could achieve financial sustainability of their services after program funding terminates. Our previous work on the Job Access Program showed that many projects might not be sustained if their Job Access funding terminated; therefore, our survey of Job Access grantees included questions about the likelihood of Job Access projects retaining their matching funds and continuing to operate. We also inquired about the prospects for projects’ financial sustainability with the grantees we selected for site visits and discussed financial sustainability with Job Access Program officials, national associations, and welfare reform and transportation experts. Finally, as part of our second objective, we examined the use of federal funds from other programs as matching funds for Job Access projects. Job Access Program regulations require that grantees obtain at least 50 percent of their project funding from non-DOT sources, which may include funding from federal sources such as the TANF Program and the Workforce Investment Act (WIA)-sponsored programs of Labor. 
We therefore reviewed policies affecting coordination and cost-sharing in federal programs, including Office of Management and Budget Circular A-87, and we contacted DOT, HHS, and Labor officials about their efforts to refine the interagency “Joint Guidance” regarding matching funds for Job Access Program grants. In addition, we selected and utilized three broad methodologies that addressed both objectives of our study: performing detailed reviews of selected, ongoing Job Access projects at different locations; surveying all Job Access grantees selected during fiscal years 1999 and 2000; and consulting with nine welfare reform and transportation experts. We performed detailed reviews of selected, ongoing Job Access projects at different locations. We selected these projects to represent the geographic dispersion of Job Access projects across the United States. In addition, we selected projects that served the different sizes of areas prescribed by the Federal Transit Administration’s (FTA) Job Access administrative requirements: large urban areas, medium-size urban areas, and rural areas/small cities. These grantees provided different types of Job Access service delivery methods (e.g., carpools, fixed bus and van routes, demand-responsive transportation, and trip information and assistance). We visited the following:

Grantees serving large cities:
1. Project Renewal (not-for-profit, community-based organization, New York City).
2. Washington Metropolitan Area Transit Authority (transit agency, Washington, D.C.).
3. Maryland Transit Administration (statewide transit agency, Baltimore, Maryland).
4. Fort Worth Transportation Authority (transit agency, Fort Worth, Texas).

Grantees serving medium-size cities:
1. Capital District Transportation Authority (transit agency, Albany, New York).
2. Santa Rosa City Department of Transit and Parking (transit agency, Santa Rosa, California).
3. Transit Authority of River City (transit agency, Louisville, Kentucky).
4. City of Albuquerque Transit (transit agency, Albuquerque, New Mexico).

Grantees serving small cities and rural areas:
1. Good News Garage (community-based, not-for-profit organization, Burlington, Vermont).
2. New Mexico State Highway and Transportation Department (state DOT, Albuquerque, New Mexico).
3. Las Vegas Housing Department (public housing agency, Las Vegas, New Mexico).
4. California DOT (CALTRANS, state DOT, Sacramento, California).
5. Kentucky Transportation Cabinet (state transportation agency, Frankfort, Kentucky).
6. Alliance for Children and Families (not-for-profit organization, based in Milwaukee, Wisconsin).

At each location, we examined how Job Access services were delivered, how the design and delivery of Job Access services were coordinated with those of other transportation services and human service agencies in the area, and whether the grantees could financially sustain their services if Job Access funding terminated. The grant recipients that we visited included state and regional agencies that distributed Job Access funds to subgrantees and that made substantial efforts to coordinate those services to avoid duplicating ongoing transportation services that serve low-income people, including welfare recipients. We conducted a mail survey of all 173 Job Access grantees that were funded in fiscal year 1999 or 2000. (See app. IV for the survey results.) We did not survey the grantees selected in fiscal year 2001 because they had not had enough time to begin implementing their Job Access projects. Our survey addressed issues pertaining to the grantees’ implementation of their projects, including costs, ridership, collaboration with other agencies, their financial ability to sustain services in the absence of Job Access funding, and their views on the usefulness of the program in addressing the transportation needs of low-income individuals.
With a response rate of about 88 percent (152 respondents), our survey results can be generalized to the universe of all grantees funded in fiscal years 1999 and 2000. We consulted nine experts from academia, federal and state transportation and welfare programs, and national associations with backgrounds in the fields of welfare reform and transportation. They provided information and views on such matters as the strategy DOT used to implement the Job Access Program, the role of the Job Access Program in the national welfare reform effort, the overall effectiveness of the Program in serving low-income people, ways that the program could be improved, the sustainability of Job Access projects, and ways in which DOT could evaluate the program as required by TEA-21. We selected these experts on the basis of our review of transit and welfare reform literature and referrals from DOT, HHS, Labor, and national associations, such as the American Public Transportation Association. Our work was performed from January 2002 through October 2002 in accordance with generally accepted government auditing standards. To address the objectives of our review, we visited ongoing Job Access projects at different locations. At each location, we examined how Job Access services were delivered, how the design and delivery of Job Access services were coordinated with those of other transportation services and human service agencies in the area, and whether these projects could financially sustain their services if Job Access funding terminated. We selected these projects to represent the geographic dispersion of Job Access projects across the United States as well as the different sizes of areas prescribed by FTA’s Job Access administrative requirements: large urban areas, medium-size urban areas, and rural areas/small cities. The projects we visited, as well as their locations, the services they delivered, and the kinds of matching funds used, are summarized in table 3.
Following table 3, we provide more detailed information on each project. The information contained in this text is based on interviews with project officials as well as project-specific documentation, including program and budget information. Ways to Work, a subsidiary of the Alliance for Children and Families, provides low-income people with loans of various sizes, ranging from $750 for car repairs up to $3,000 for the purchase of a used car. For its Job Access project, Ways to Work implemented a carpool project. Low-income people who participate in Ways to Work volunteer to be in a carpool project with other participants. Ways to Work then coordinates the pool on the basis of home location and jobsite. While approximately three-fourths of borrowers received government aid at the time of their loan application, their use of public assistance dropped by 40 percent within 2 years, and less than 1 percent of borrowers have become "new" users of public assistance since receiving their loans. Ways to Work officials stated that internal studies show that borrowers average a 20 percent increase in household income. Currently, the Job Access project in Alabama is the only ongoing effort under this grant. These officials told us that Ways to Work is also applying for Job Access grants in other locations, such as New Philadelphia, Canton, and Akron, Ohio. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. Ways to Work has had different results with state DOTs. According to the president of Ways to Work, the relationship with the Alabama DOT was positive but slow. The Alabama DOT assisted Ways to Work by coordinating on its behalf with other state and local resources, such as human service agencies. However, according to Ways to Work’s president, this coordination was slowed by Alabama’s cumbersome budget process.
As a result, implementation of the project was delayed. Local participants in the Job Access project did not have the cash flow to advance the project, but the funding was obtained from the Alabama DOT. Ways to Work would reduce its carpool project if Job Access funds were no longer available, according to its president. Job Access funding expanded and strengthened the carpool project and Ways to Work overall. Ways to Work, however, has been in operation since 1984 and received most of its funding from foundations, banks, and other private funding sources; thus, it could find alternative funding sources to maintain the project. Moreover, as people repay the loans, Ways to Work can reuse the money, so that successful lending would help stretch funding out over many years. However, the carpool budget would be reduced, thereby reducing the scope of the project. In May 2001, the California Department of Transportation (CALTRANS) established the Agriculture Industry Transportation Services (AITS) pilot project in response to a series of accidents involving farm labor vehicles in the San Joaquin Valley—specifically, accidents that killed 14 farm workers. Many workers had been using unlicensed and uninsured van services that charged passengers about $6 to $10 per day. The AITS pilot project was designed to improve access to safe public transportation for farm workers and their communities by providing expanded or new transit service in Fresno, Kern, Kings, and Tulare Counties. The project has two components: the Kings County component, which also encompasses Fresno and Tulare Counties, and the Kern County component. Service started in May 2002. The Kings County component involves purchasing 134 15-passenger vans. Residents in each of the targeted communities are trained to safely operate the vanpool vehicles. The operators of these vehicles both drive the vans and work at the agricultural fields and nearby packing facilities.
Vanpool fare for the pilot project is $50 per person, per month. In addition, the Kings County component involves purchasing 12 28-passenger buses. Residents of the community operate the buses between the communities and nearby agricultural employment centers. Bus fare is $3 per person, per day, and service frequency varies (from 4 to 7 days a week), depending upon seasonal demand for labor. An average of 26 people per day are currently riding the first bus in operation in Kings County. The combined van and bus service costs each person about $5 per day. The Kern County component of the AITS Job Access project is an expansion of a fixed route bus service. Kern Regional Transit expanded existing portions of the transit system. Previously, service consisted of one fixed route bus serving the Lamont/Weedpatch communities, one demand-responsive bus for those communities, and an intercity commuter bus linking Lamont with the Bakersfield area. Expansion of service under this Job Access project consists of a second intercity bus operating in the communities of Arvin, Weedpatch, and Lamont 6 days a week, with limited service provided on Sundays. An additional bus was placed into service for the Lamont/Weedpatch communities, providing improved service for residents who required transit services to jobsites. Because the pilot project began operating in May 2002, data compilation and reporting on the project’s success have not been completed. CALTRANS officials plan to collect data on the number of agricultural workers who use the service to measure the success of the program. The following table describes the Job Access Program funding for this project and the sources of matching funds and amounts. CALTRANS was awarded $4.5 million for the AITS pilot project. According to CALTRANS officials, the $4.5 million was to be matched by funds from the State Public Transportation Account, derived from fuel tax revenue.
However, the state did not fund the request for the additional $500,000 from the State Public Transportation Account to match the funding awarded in January 2001. Therefore, CALTRANS will use only $4 million of the total $4.5 million of Job Access funds. CALTRANS officials fostered collaboration with other agencies by conducting statewide workshops to explain the Job Access effort. They invited the regional transportation agencies, metropolitan planning organizations, and associated agencies to these workshops. According to CALTRANS officials, coordination efforts have faced some challenges, especially because of incompatible tracking systems. CALTRANS has been able to track only the total number of passengers, whereas the California Office of Health and Human Services is required to track the ridership of each individual TANF client. As a result, the Office of Health and Human Services has had difficulties providing TANF funds for the required Job Access match because it could not account for the number of TANF clients who specifically used the mass transit system. CALTRANS officials said that the vans would continue to operate without Job Access funds. The fares paid by the passengers of the Kings County services are used for insurance, maintenance, fuel, vehicle replacement, overhead, and drivers’ salaries. However, the bus service in Kern County would need to be funded with contributions from local governments or other organizations to continue operation in the absence of Job Access funding. The Capital District Transportation Authority provides fixed route bus and van transit service as well as individualized trip planning and information brokering. The transit authority’s Job Access funds are used to expand the hours of operation of its suburban services, primarily in Albany, Rensselaer, and Schenectady Counties. The extension allowed the transit authority to operate late night service as well as weekend service.
As a result of its Job Access project, transit authority officials said they identified and filled gaps in service by developing a system that provides a transportation solution for TANF clients who had difficulties getting to and from work. According to transit authority officials, the specific projects being funded by Job Access are not traditional fixed route bus services, so services are contracted to companies that operate vans in all three counties. These services are paid for on a cost-per-trip basis. Taxis are also used to take some of the grantee’s Job Access clients to and from work. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. Transit authority officials said that the Job Access Program has helped improve coordination between transit and human service agencies. According to grantee officials, as part of the Job Access grant process, the transit authority established a relationship with the New York State Department of Labor, which created a link to local human service agencies and made transportation more accessible for TANF clients while broadening the authority’s client focus. However, coordinating the implementation of the Job Access project between the grantee and various stakeholders has been complicated by differences in the parties’ reporting requirements and by the volume of data that must be collected. To conform with TANF reporting requirements, the grantee has had to collect data it does not normally gather on the number of TANF recipients who use the Job Access service. Grantee officials explained that they had difficulties tracking ridership and determining whether their passengers are TANF clients. In addition, according to these officials, private and nonprofit organizations operate their own transportation vans and have services that overlap with Capital District Transportation Authority services in some areas.
Some of these other agencies operating in the community include the following: the Department of Aging Markets, the Association for Retarded Citizens, the Veterans Administration, and the Office of Mental Retardation and Developmental Disabilities. Transit authority officials stated that better coordination among these services could result in a more efficient transportation network. Capital District Transportation Authority officials said that their agency’s sources of revenue are limited and that, without Job Access funding, they would be unable to continue the services started under the Program. The officials stated that the need to provide transportation during weekends and second and third shifts has required heavier subsidies. However, many counties are feeling a budget squeeze, resulting in less funding being available to contribute to the match necessary to obtain FTA Job Access funding. The Albuquerque Transit Department has seven Job Access projects that include demand-responsive rides for work, job training, or transportation emergencies; subsidized vanpools; reduced price bus passes; a free 1-day bus pass available for job-training trips; a free 6-month bus pass for social service agency staff who volunteer to be travel trainers for their clients; and a mobility manager service that teaches people how to use bus schedules, ride buses, and use other transit services. The demand-responsive services are available to anyone at or below 150 percent of the poverty level. Participants are offered 120 round-trips within 2 years to their jobs, job-related training, and child care required for their jobs and/or job-related training. Participants can use the services for these designated trips only if they lack (1) a local bus stop within a quarter-mile of their home or destination, (2) a local bus service that duplicates the route in less than 90 minutes, and (3) a local bus service that is available to their destination.
Albuquerque Transit officials estimate that on-demand van services cost about $17.50 per ride; eligible participants pay 75 cents per trip. Albuquerque Transit officials stated that, on the basis of qualitative measures, their project is successful. The agency has tried to measure the success of the Program by obtaining community feedback. This feedback has indicated that the communities like the projects and feel that the services were long overdue, according to transit officials. Although it does not have exact numbers, the grantee claims that the project is helping to reduce the welfare rolls and that overall ridership is increasing. In the first 3 months of 2002, the Job Access project had 120 riders, according to Albuquerque Transit officials. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. Albuquerque Transit has had mixed results coordinating with other agencies. The project successfully involved about 95 organizations in the Job Access project. The project has obtained matching funds from the City of Albuquerque, the New Mexico Human Services Department, and the University of New Mexico Career Works. Coordination efforts, however, have faced some challenges. The biggest barriers to coordinating human service activities with transit services stemmed from different agency cultures for complying with service standards for clients, according to transit officials. They said transportation providers do not speak the same language as human service providers because the two have different missions and philosophies. The officials said that compliance with all of the requirements to acquire matching funds is also a barrier because human service agencies usually provide funds only under certain conditions, such as not paying for nonclients.
Consequently, the transit agency has spent considerable financial resources tracking the number of TANF clients using Job Access project services. According to transit officials, Albuquerque’s project requires some sort of public assistance and support to exist. Without Job Access funding, they would no longer be able to provide the services created under the project. The grantee officials are unsure of what they would do if the funds stopped or the Job Access Program was not reauthorized. Furthermore, they do not expect the state to fill the funding void—New Mexico is one of four states that provide no state funds for public transit. Santa Rosa, California, is a rapidly growing metropolitan area that is approximately 55 miles north of San Francisco, in Sonoma County. Santa Rosa’s City Department of Transit and Parking (Santa Rosa CityBus) operates 16 bus routes, most emanating from the transit center in the downtown area. Santa Rosa CityBus’s Job Access project established a new public transit route, Route 15–Stony Point Road, in August 1999. This new transit route serves the highest concentration of TANF recipients in Santa Rosa, and it links job seekers with multiple job opportunity worksites (e.g., light industry and telecommunications) and human service agencies. According to Santa Rosa CityBus officials, Route 15 has decreased the travel time of route users because it eliminates unnecessary transfers through the downtown area. The cross-town route, extending over 15 miles, requires about 1 hour to make a round-trip and uses two buses. The service is available to the general public and all passengers pay the same fare. However, the local health and human services agency (SonomaWorks) purchases monthly bus passes at the regular price and provides them at no cost to TANF participants. Transit officials estimate that the route will service about 132,000 people in fiscal year 2002. 
The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. Santa Rosa CityBus officials worked closely with the Metropolitan Transportation Commission (MTC), the area’s metropolitan planning organization, to develop and acquire funding for Route 15. MTC worked on a plan that identified the regional shortfalls of existing transportation services in terms of areas covered and times served. MTC also helped gather the background information and produced Geographic Information Systems maps, which plotted the TANF population that needed to be served in the City of Santa Rosa. This allowed Santa Rosa and transportation commission officials to identify the service gaps in Santa Rosa and the areas where the highest concentration of TANF recipients lived. Because of this collaboration, Santa Rosa CityBus was able to provide a route linking low-income people to the resources they need, such as jobs, child care, and health care. The Job Access Program has also improved collaboration between Santa Rosa CityBus and local health and human service agency officials. SonomaWorks officials, with the assistance of Santa Rosa CityBus staff, trained local health and human services caseworkers to better inform their clients about all services being provided by Santa Rosa CityBus—specifically, Route 15. However, Santa Rosa officials did not discuss using TANF funds as a match with SonomaWorks. Santa Rosa officials stated that they were aware that the Job Access Program allowed for a federal-to-federal funds match, but they chose not to pursue the possibility of using TANF funds because they did not face any difficulties in raising the matching funds. Currently, there is no plan to discontinue service in the absence of Job Access funding. Santa Rosa CityBus officials stated that even if the Job Access Program were discontinued, Route 15 would continue to operate.
They added that it would be virtually impossible to discontinue any established transit line because the transit users in the community depend on these services. The goal of Santa Rosa CityBus was to use Job Access funding to assist in the establishment of the route. Officials expect the route to be self-sustaining without Job Access funds. The Fort Worth Transportation Authority (FWTA) is the primary public transportation system for the city of Fort Worth, Texas. With the use of the Job Access grant, FWTA implemented a vanpool for the city’s outlying areas and contracted with a taxi company to provide demand-responsive service to its clients within the Fort Worth area. Officials at FWTA identified their target population as TANF recipients and people with incomes at or below 150 percent of the poverty level. They stated that their project has resulted in individuals finding jobs and maintaining employment. They said that they provided transportation services for 6 months, in the belief that a person who is employed for that period of time has an increased chance of being hired again. As a result of the Job Access project, FWTA officials said they have been able to help some people transition from welfare to work by providing them with transportation to and from work, daycare, and other services. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. FWTA officials have been reluctant to consult and coordinate with other stakeholders, such as other transit agencies and human service agencies, to receive assistance in operating their Job Access project because of their negative experience in obtaining matching funds from other stakeholders. They explained that they experienced a significant administrative burden in complying with the data collection and reporting requirements imposed by those stakeholders.
Specifically, in the early days of the project, when seeking financial contributions from other partners, they encountered problems in receiving TANF and other matching funds because of the significant reporting requirements imposed by those funding sources. Although FWTA officials received the required matching funds, they concluded that they would rather have their partners provide noncash contributions, since those contributions would not entail any administrative requirements. FWTA officials are now reluctant to request any operating assistance from other stakeholders. FWTA officials said that their project could not be continued without Job Access funds and would require additional assistance and support to continue. FWTA spends about $11.50 per ride on its demand-responsive taxi service; the cost per trip averages about $17.50 because of the shared-ride nature of each family trip. These costs are comparable to those of taxi rides provided on a zone basis. For rides on its fixed route services, FWTA assumes almost all of the $20 per ride cost of providing the transit services, charging an average of only 50 cents per trip. According to FWTA officials, their agency has no dedicated funding stream for the Job Access services, and the fares they collect are not enough to continue their Job Access project in the absence of FTA funding. CommuteShare is the Job Access project component of the Good News Garage—a nonprofit association that repairs used vehicles and provides them to economically disadvantaged applicants. The Good News Garage donates some of its vehicles to CommuteShare for one carpool service and four demand-responsive services. Under the carpool service, a driver keeps the vehicle and provides rides to three other participants. The demand-responsive service has an assigned volunteer driver take people to work upon request. CommuteShare services are free and available to any person whose household income is less than 225 percent of the federal poverty level.
Individuals receiving case-managed services have free access to the vehicle for 6 months. After that, there is a sliding-scale fee based on income. Carpool group members split the cost of fuel and parking, while demand-responsive passengers each pay a $1 fuel contribution. One-way rides cost CommuteShare roughly $16—this includes all operating expenses, such as fuel, insurance, and repairs. About 190 people have participated in the program, with 25 people participating at any given time. The following table describes the Job Access Program funding for this project as well as the source of matching funds and the amounts. Good News Garage and CommuteShare have enjoyed strong coordination with other agencies, according to project administrators. Lutheran Social Services, the New England-based nonprofit association that created the Good News Garage, aligned the Garage with the Vermont Department of Prevention Assistance, Transition, and Health Access (PATH)—the state TANF clearinghouse—and the Vermont Department of Employment and Training. Most Good News Garage and CommuteShare referrals come from PATH. The Good News Garage also receives referrals from local battered women’s shelters. It used PATH state funds to satisfy its matching funds requirement. According to a PATH official, CommuteShare and PATH have coordinated effectively in the overall welfare-to-work effort. PATH helps Good News Garage and CommuteShare clients pay for repairs or get to work if their car is not working. The support lasts 1 year and is not continued thereafter. The PATH official added that positive experiences with demand-responsive service have resulted in plans to expand such projects. PATH wants to have at least one car in each of Vermont’s 12 districts for demand-responsive service. CommuteShare officials are not sure if the project can maintain operations in the absence of Job Access funding.
Because of a tight budget cycle, Vermont may not be able to supplement the required match—making a loss of Job Access funding critical to the project’s sustainability. According to project administrators, the project has some support from the private sector but needs strong public funding to maintain services. Grantee officials said that CommuteShare appears to be a successful and innovative Job Access project but may have problems sustaining itself after the end of Job Access funding. The Kentucky Transportation Cabinet’s Human Service Transportation Delivery project involved consolidating transportation services previously provided by various state governmental agencies to transport Medicaid and low-income people to job interviews, job training, employment, and child care facilities. According to cabinet officials, services were consolidated because the previous transportation delivery process was fragmented, increasingly costly, and vulnerable to fraud and abuse. Kentucky’s welfare reform initiative was expected to double transportation needs for TANF recipients. In addition, transportation services were not easily accessible in some rural areas. For example, in an 11-county region in Southeast Kentucky, an average of 32.5 percent of the households were living in poverty, while an estimated 46,977 people over the age of 60 and 13,570 households did not have access to an automobile. As a result, the cabinet began a statewide demand-responsive service program. Seniors and low-income passengers needing transportation could contact 1 of the 14 regional transportation brokers within 72 hours of their trip. Although the cabinet targets low-income people, the project is open to the public. The service costs 50 cents to $1 for low-income individuals and the general public. The Kentucky Cabinet for Families and Children pays for TANF recipients’ fares.
According to cabinet officials, the Job Access project is efficient and a major improvement over past welfare reform efforts. The brokerage system resulted in more people taking more trips at less cost. As of June 2002, the project had provided 549,914 trips for TANF recipients and 330,596 trips for Medicaid participants. Revenue projections indicate that reduced expenditures under the Job Access project will result in Medicaid savings of $3 million annually. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. The Kentucky Transportation Cabinet coordinated with the Cabinet for Family and Children Services, local communities, human service agencies, transit departments, and private transportation operators to implement its Job Access project. According to grantee officials, the Transportation Cabinet also relied heavily on local community support. Local areas provided 40 percent of the necessary match and were excited that the needed services were starting, according to a cabinet official. The project also required coordination among the 14 different transportation brokers. One such broker, the Kentucky River Foothills, estimates that 95 percent of the clients it transports have incomes at or below 150 percent of the official federal poverty threshold, and 65 to 70 percent have incomes below the official federal poverty level. According to Kentucky River Foothills officials, the biggest obstacle to coordination is convincing localities to invest in public transportation. Kentucky Transportation Cabinet officials estimate that about half of the cabinet's services could be sustained in the absence of Job Access funding. The services most likely to survive would be those with the strongest community and employment ties. For example, in one region, a chicken factory depends on low-income labor. 
The factory would most likely support the Job Access service to keep its workers. The Las Vegas Housing Department's Job Access project—a subgrantee of the New Mexico State Highway and Transportation Department—is a continuation of a welfare-to-work project run by Highlands University. According to the housing director, the university's project was not working very well because it lacked an effective transportation component. When the Job Access project began, the Las Vegas housing director took on the responsibilities of transportation coordinator and organized a new welfare-to-work project. The new project leveraged funds from a variety of sources, including the Department of Housing and Urban Development (HUD) and DOT, and had an existing clientele. The city's Housing Department also provided the city Transportation Department with a facility to enhance their coordination. The Las Vegas project is open to the public and provides demand-responsive van service during the traditional workweek. Although the project is targeted to those who are low-income, anyone can use the service. Las Vegas has a sliding scale for the service: the general public pays the general fare (about $1.50); those who are below 30 percent of the county median income level—60 percent of participants meet this criterion—pay 75 cents (after applying through the local TANF office at Highlands University); and residents of the housing department can use the service for free. The van service requires 24-hour notice to schedule rides and can be used to travel to work, child care, and retail locations as well as for other purposes. According to the Las Vegas housing director, the Las Vegas Housing Department's project is a success; ridership has doubled in 3 years and housing participants have improved their lives. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. 
The housing director was also able to use HUD's Drug Elimination grant funds to help fund the project because the services included taking teens to after-school activities. The grantee did not use DOT funds for rural transportation as matching funds for the project, but it did use them to purchase vans for the service. According to the housing director, the project is an excellent example of coordination among local, state, and federal agencies. Highlands University is the human service provider and receives welfare-to-work money from Labor's Welfare-to-Work Program. The state's Department of Labor runs the TANF Program and also provides matching funds. The state Department of Labor has increased its number of vans from three to five, and the department's director said he is starting to receive interest from other rural communities looking to replicate the project. According to project officials, the project cannot exist without public support, particularly Job Access funds. The housing director argued that continued federal funding of the Job Access project is needed because clients must have flexible and free or inexpensive service, or else they will purchase a car. However, buying an old car makes it harder for clients to develop financial independence because the costs of maintaining such a car are burdensome. The director added that Job Access and other welfare-to-work services fail because people who purchase a car do not have enough remaining funds to pay for non-work-related automobile trips and for such necessities as child care, food, and clothing. The Maryland Transit Administration's (MTA) project serves as a broker for transportation funds throughout the state. Much of its Job Access funding is applied toward demand-responsive services, but several subprojects serve existing public service routes. 
MTA solicits subprojects for its Job Access project by mailing applications, advertisements, and guidelines to Maryland localities. MTA uses performance indicators and standards to determine both awards and award amounts. The following table describes Job Access subproject funding for this project as well as the sources of matching funds and the amounts. According to MTA officials, the Job Access Program has encouraged greater collaboration and coordination between the transportation agency and human service organizations at the state and local levels. MTA officials said that through an executive order, the Governor established the State Coordinating Committee for Human Services Transportation to encourage state agencies to identify needs and develop strategies to ensure the coordination of human services transportation. This committee facilitated MTA's ability to market the Job Access project to other state agencies. In addition, MTA mapped out all transportation projects across the state and extended its outreach efforts to the local level. MTA officials credited the Job Access Program with helping to formulate and standardize coordination between transit and social service agencies in providing transportation services to low-income people. Officials at MTA also stressed the importance of the state administration's support for public transit, which has facilitated support for the Job Access Program among other state and local agencies. According to MTA officials, several factors contributed to their success. These officials said that Maryland's existing transportation services (1) do not serve areas that are too rural, remote, or small; (2) are supported by the state legislature; and (3) are supported by the public. Moreover, MTA created its own set of guidelines to help sustain its programs. 
MTA officials added that state legislation required that at least 25 percent of the required match toward FTA transportation funds—including Job Access grants—be paid by the state. These officials said that this legislation also requires that a portion of the matching funding be automatically included in the state's transportation budget and provided a total of $503 million over a 6-year period. They said they were not sure if they would be able to maintain all of the services they have started without continued Job Access funding. Under its Job Access grant, Project Renewal—a rehabilitation center for homeless men and women—operates a Suburban Jobs Program that places formerly homeless New Yorkers in unsubsidized employment by identifying and securing job opportunities in suburban areas around New York City. Suburban Jobs vans travel daily to five worksites, carrying an average of about 150 people each day, and the program has an average employment retention rate of 81 percent. At Montclair State University—one of the project's five worksites—participants account for up to 75 percent of the university's nonfaculty staffing, according to university personnel. Each position at the university has a training element and promotional opportunity. Project Renewal's housing facilities as well as other nonprofit employment programs refer qualified candidates for Suburban Jobs. All candidates are screened to ensure that they have undergone vocational education and job readiness training. Project Renewal then identifies appropriate employment opportunities, prepares clients for interviews, and supplements public transportation through its own van service to the suburban jobsites. Vans are necessary because public transportation, even in a transit-rich city like New York, was not designed for reverse commutes during nontraditional work shifts, according to project officials. 
Personal counseling is provided to Suburban Jobs beneficiaries while they are being transported to and from the jobsite. Including capital and operating expenditures, the rides cost Project Renewal about $15 per person. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. Project Renewal has worked closely with its metropolitan planning organization—the New York Metropolitan Transportation Council—and has coordinated with other agencies. The Metropolitan Transportation Council supports Project Renewal by providing guidance for improving and starting new routes. For its required Job Access matching funds, Project Renewal used HUD funds. Project Renewal also partners with dozens of agencies and does not ask them to help pay for the transportation costs of Suburban Jobs beneficiaries. According to project officials, Suburban Jobs would likely not exist without Job Access funding. However, in the absence of DOT funding, the managers of the project would attempt to continue the services currently funded by DOT by soliciting greater contributions from employers. For example, Montclair State University currently contributes about $300 monthly for the service, and other employers might be persuaded to contribute as well. The New Mexico State Highway and Transportation Department developed a statewide "Transportation Toolkit" to coordinate welfare-to-work resources and to administer rural Job Access services. The Toolkit contains several databases, including an inventory of vehicles that were purchased through publicly funded programs and a listing of TANF households by zip code. The Toolkit helps agencies prepare TANF adults for employment: a TANF adult is referred, as needed, to appropriate resources, which may be in different geographic locations. 
These resources include counseling for substance abuse, mental illness, and domestic violence; classes in parenting, life skills, and job preparation; and programs to improve literacy or to obtain a general equivalency diploma. The State Highway and Transportation Department also solicits and awards Job Access grants to rural areas. New Mexico has 22 state Job Access projects, 18 of which combine resources from DOT's funds for rural transportation. Most of the Job Access projects are demand-responsive, which makes tracking the number of rides easier. Participants call 24 hours in advance to request a trip. These services are available to the general public as well as to low-income people and TANF recipients. New Mexico promotes the services to the general public through local advertisements and to targeted clients through referrals from local human service agencies. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. According to the Highway and Transportation Department's Chief of Programs, the Toolkit has resulted in remarkable coordination of transportation resources. Responsible state and local agencies, such as the highway, labor, and social development offices, use the Toolkit to determine the most effective transportation mode for transporting low-income and TANF people to employment. The Toolkit helps localities determine (1) where TANF recipients reside, (2) the inventory of vehicles that were purchased through publicly funded programs, and (3) where jobs are located for the entire state. The Chief of Programs stated that the Toolkit is intended to avoid duplication of services and helps localities determine whether the vehicles that were purchased through publicly funded programs are available for welfare-to-work use. In addition, the Job Access projects utilized matching funds from the New Mexico Department of Labor and Human Service Department. 
The State Highway and Transportation Department believes the projects can exist without Job Access funding. New Mexico uses self-sustainability as a selection criterion in determining grantees; the state Highway and Transportation Department has been in constant discussion with subgrantees about finding ways to fund their projects without Job Access money. Highway and Transportation Department officials said they believe that the Human Service Department will continue to fund the Job Access service even if the federal Job Access Program is not reauthorized by the Congress. Through its Job Access project, the Transit Authority of River City (TARC) offers a variety of services to low-income people:

- The Night Owl bus offers demand-responsive transportation for $1.50 each way to those who live and work in Louisville's Jefferson County.
- A Flex-Route deviates from a fixed-path bus route whenever a person living near the route needs to access the bus service.
- The Job Hunter bus provides free, demand-responsive service for transporting potential employees to interviews and career development opportunities. The Job Hunter bus has transported over 3,500 people since 1999.
- In coordination with the United Parcel Service, two bus routes transport students and low-income workers to the United Parcel Service's worldwide hub in Louisville.
- A demand-responsive rideshare service transports disabled workers.
- Three fixed route bus services operate—one for teenagers seeking jobs and two for taking employees to Blue Grass Industrial Park, a large employment site.
- A Bikes on Board project placed bike racks on 208 buses, allowing people to travel from the end of the bus route to their place of employment and back.

The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. TARC collaborated with 43 different private, public, and nonprofit agencies in developing its Job Access project. 
TARC also has a kiosk located in the Workforce Investment Board One-Stop Center, which provides unemployed persons with job placement and training services. TARC has integrated the Job Access services into its general services. TARC officials said they would evaluate the efficiency of all their routes and cut the least efficient if the agency lost its Job Access funding. Currently, however, 80 percent of their ridership is in their top five routes—none of which are Job Access services. Thus, services funded through the Job Access project may be among those that could be eliminated. The Washington Metropolitan Area Transit Authority (WMATA) provides fixed route bus and rail service for Washington, D.C., and surrounding areas. Under its Job Access grant, WMATA provides three types of services: (1) a trip brokerage service, (2) improved access to fixed bus routes, and (3) a demand-responsive van service. WMATA found that some clients faced problems getting to and from work during nontraditional work hours—for example, late-night or early morning hours. Because they lack transportation, these employees have a difficult time maintaining employment. As a result, under its Job Access grant, WMATA began a trip brokerage system in 1999 that allows its clients to reserve transportation services for odd hours and in areas that are underserved by traditional public transit services. According to WMATA officials, the program has been able to serve thousands of individuals every month. WMATA's demand-responsive component incurs a cost of about $46 per trip, with no cost to the client. In addition to the trip brokerage system, WMATA operates a fixed-route component of its Job Access Program. While the fixed route service costs an average of $41.68 per ride, passengers pay a fare of roughly $1.75 (both Job Access clients and the general public pay this amount). 
With just over 330 trips per month on fixed bus routes, WMATA's Job Access project has been able to provide service to about 9,500 individuals. WMATA also established a one-stop information center—the Washington Regional Call Center—that allows people to access exact trip information for various locations. The following table describes the Job Access Program funding for this project as well as the sources of matching funds and the amounts. According to WMATA officials, the Job Access Program increased their coordination with human service agencies and others in their service area because it was designed to address problems related to low-income people getting transportation services. As the most prevalent transit provider for the Washington metropolitan area, WMATA, along with the Metropolitan Washington Council of Governments' transportation planning organization, was the regional catalyst for bringing transportation and social services together to provide Job Access services. According to WMATA officials, WMATA has been able to leverage its status as the region's primary transit provider to encourage the involvement of other regional transit systems as well as human service agencies across two states, Washington, D.C., and multiple local jurisdictions. In addition, WMATA has been able to use the information and outreach component of its Job Access project to promote the use of transit programs throughout the region. WMATA has been able to direct welfare clients to more timely and efficient public transit routes to and from work, thereby enabling individuals to get to work on time and keep their jobs. Consequently, WMATA officials noted that employers have begun to take notice of the WMATA Job Access project because it has produced a dependable form of transportation for their employees. WMATA is currently working with the Washington, D.C., Board of Trade and Chamber of Commerce to encourage more public and private collaborations in serving low-income populations. 
WMATA officials said they would not be able to sustain services started with Job Access funding if they did not continue to receive grants. They believe that theirs is a model program, but they have not been able to encourage sufficient private sector involvement in the program to replace Job Access funds.

The General Accounting Office (GAO) is an agency of the Congress that performs studies of federal programs. The Transportation Equity Act for the 21st Century (TEA-21) has mandated GAO to periodically examine how the Department of Transportation's Federal Transit Administration (FTA) is implementing the Access to Jobs Program (Job Access/Reverse Commute Program) and the funding grantees have received under this program. Please review the label above and respond to the questions in this questionnaire as they relate to the project named. The responses we receive will be used in our report to the Congress on this program. If you have a question about the information on the label or it is incorrect, please call one of the GAO contact persons listed below. In addition, when responding to these questions, please coordinate with the members of your staff as well as any sub-recipients or sub-grantees, as appropriate. If you have any questions, please call Frank Taliaferro. Please provide the following information for the person we should contact if we have any questions. GAO Control #: (N=152)

General Information About Your Project

1. What is the name and address of the grant recipient for this Job Access project? (N=152)

2. Which of the following would best characterize your organization? (Please check one.) (N=151) Response choices included state human service department; regional transit agency; local transit agency; local government human service office; other local government (city/county); nonprofit human service organization; and other (please specify).

3. How does this Job Access project (your organization or sub-grantee) primarily provide transportation services (including mobility manager services) to program participants? (Please check one.) (N=145) Response choices included hires companies to provide services such as bus, van, or taxi; provides vouchers to participants; uses a combination of direct transportation service, contractors, or vouchers; and other (please specify).

4. Currently, about how many passengers per month does your Job Access project serve? (Enter number; if none, enter '0'.) (N=135) (Range: 0 – 257,856 passengers; Median: 1,880 passengers; Mean: 11,079 passengers)

5. [A table asking, for each type of service the project provides (fixed route including rail, flexible route, and demand-responsive service), about the fares charged and the cost of each one-way trip. Reported fares generally ranged from $0 to $4.00; reported costs per trip ranged up to $187.00 for fixed route service, $74.00 for flexible route service, and $52.34 for demand-responsive service. The number of respondents for items 5.4 and 5.6 was too small to provide meaningful statistics.]

6. Of the passengers that your Job Access transportation project serves, about what percent could be described as the following? (Enter percent; if none, enter '0'.) (N=137) Reported categories included Welfare-to-Work participants (Department of Labor) (N=93; Range: 0 – 60%; Median: 4%); other low income (below 150% of poverty) (Range: 0 – 100%; Median: 30%); and other project participants, for example, Medicaid recipients and senior citizens (N=89; Range: 0 – 100%; Median: 0%).

7. How are Job Access project fare box receipts used? (Please check all that apply.) (N=142) Response choices included included as part of the match for Job Access; used to support operating costs of Job Access services; returned to the organization's general fund; other (please specify); and not applicable – do not have a fare box.

8. About what percent of your project's Job Access funding is used for a mobility or trip manager, or broker services? (Enter percentage; if none, enter '0'.) (Median: 0%)

9. As of June 30, 2002, under your Job Access grant(s), for how many months have these transportation services been provided? (Please enter number of months.) (N=135) (Range: 0 – 58 months; Median: 24 months; Mean: 23 months)

10. How are employers involved with either the funding or implementation of this Job Access project? (Please check all that apply.) (N=139) Response choices included employers provide some or all of the matching funds for the service; employers provide additional funding beyond the Job Access project match; employers make in-kind contributions such as vehicles or maintenance; employers pay fares for employees; employers supplement the service by providing emergency rides for special circumstances (for example, when employees must leave early for family emergencies); employers have adjusted work schedules to accommodate the operating limitations of the Job Access service; employers provide vans or shuttle buses to take workers from the end of the transit service to the workplace; other (please identify); and no employers are involved with the implementation of this project (go to question 13).

11. About how many employers are involved with this Job Access project? (Enter number; if none, enter '0'.) (N=93) (Range: 0 – 3,000 employers; Median: 3 employers; Mean: 54 employers)

12. Which specific employers (public or private) in your area are significantly involved with helping TANF recipients or low-income people get to work? (Please identify the employers; use additional sheets if necessary.) (N=152)

13. In addition to your Job Access services, what other transportation services in your area are provided by other human service or transit agencies? (Please check all that apply.) (N=147) Response choices included none (go to question 14); fixed route bus or van service; trains or light rail; flexible route bus or vanpools; demand response vanpools; Medicaid, Medicare, Head Start, or similar programs; and other (please identify).

14. What was the role of the metropolitan planning organization (MPO) in planning and approving your Job Access project? (Please check all that apply.) (N=145) Response choices included not applicable – this project is in a rural area not subject to an MPO (go to question 15); provided data for project plan; reviewed plans for project; coordinated and facilitated the creation of the plan; provided funding for this project that was included in the match; created a process and selected projects; helped prepare/write grant application; and other (please explain).

15. What was the role of the state DOT in planning and approving your Job Access project? (Please check all that apply.) (N=145) Response choices included not applicable – this project is in an urban area, subject to an MPO (go to question 16); provided data for project plan; reviewed plans for project; coordinated and facilitated the creation of the plan; provided funding for the project that was included in the match; created a process and selected projects; helped prepare/write grant application; and other (please explain).

16. How satisfied are you that the Job Access project has enabled your organization to help people get to work? (Please check one.) (N=138) Response choices ranged from satisfied to dissatisfied, including neither satisfied nor dissatisfied.

17. How has your participation in the Job Access program affected coordination and collaboration with social service organizations in your service area? (Please check one.) (N=143) (The level of coordination and collaboration has…)

18. Please briefly explain below the reason for your response in question 17. (Please use additional sheets if necessary.) (N=126 comments)

19. How has your participation in the Job Access program affected coordination and collaboration with other transportation or transit organizations in your service area? (Please check one.) (N=142) One response choice was not applicable—we are the only transit provider.

20. Please briefly explain below the reason for your response in question 19. (Please use additional sheets if necessary.) (N=100 comments)

21. Please describe any difficulties your project experienced, if any, in coordinating and collaborating with transportation, transit, or social service organizations. (Please use additional sheets if necessary.) No difficulties (go to question 23). (N=87 checked the no-difficulties box; N=43 comments)

22. Please describe how the difficulties identified in question 21 were overcome, if at all. (Please use additional sheets if necessary.) Difficulties identified in question 21 were not overcome. (N=19 checked this box; N=28 comments)

Funding for the Job Access Project

23. In addition to your Job Access grant, what other sources of funds were used to fund your organization's Job Access transportation services? (Please check all that apply.) (N=144) Response choices included state transportation funds; other state funds; local government funds; local transit operator funds; employer donations or contributions; and other (please specify).

24. What was the total funding for all years and from all sources for this Job Access project? (Please enter total dollar amount.) (N=132) (Range: $1,275 – $68,586,800; Median: $1,022,509; Mean: $2,821,652)

25. About what percent of your total project funding came from this Job Access grant? (Please enter percent.) (N=133) (Range: 3 – 100%; Median: 50%)

26. As of June 30, 2002, have this project's Job Access funds from FTA grants been fully depleted? (Please check one.) (N=144) Response choices: yes (go to question 28); no (continue).

27. Consider the period after June 30, 2002. At the current rate of expenditures, for how many months can your organization continue the transportation services that were started under the Job Access program, without getting more funds? (Please enter the number of months.) (N=83) (Range: 0 – 102 months; Median: 9 months; Mean: 13 months)

28. Has your organization applied to receive a Job Access project grant to fund the period after June 30, 2002? (Please check one.) (N=145) Response choices: yes; no (please explain the reasons below). (N=12 comments)

29. If your Job Access project funding should end, will your organization continue to provide transportation services that were previously provided through the Job Access grant? (Please check one.) (N=142) Response choices: yes, with expanded services; yes, at same level of services; yes, at a reduced level of services; uncertain, will completely discontinue services if other sources of funds are not found; and no, will discontinue services (go to question 31).

30. If your Job Access project funding ends, what sources of funds does your organization expect to use to pay for continued operations of the services that were started or expanded under the program? (Please check all that apply.) (N=126) Response choices included private nonprofit organization donations; employer donations or contributions; and other (please specify).

31. What was your organization's expectation for the funding cycle for your Job Access project? (Please check one.) (N=143) Response choices: funding would be available on a continuing basis; funding would be available for a limited time; funding was for a one-time-only grant; and other (please specify).

32. Which of the following does your organization currently use to measure the success of your Job Access project? (Please check all that apply and, if necessary, explain how you measured success in the comments section.) (N=145) Response choices included number of passengers; number of employers that low-income people can access with the Job Access service; number of jobs that low-income people can access with the Job Access service; additional hours during the day that the project was able to provide service; transportation service can be sustained without continued Job Access funding; and other (please specify).

33. Overall, based on the above measures used by this Job Access project, how successful has this project been? (Please check one.) (N=130)

34. The Congress is currently considering the reauthorization of many transportation programs, including the Job Access program. Consider the individual services that are funded through your Job Access project when answering the questions below. a. If federal funding for the Job Access program were no longer available for your project, how and to what extent would these individual services be sustained, if at all? (Please use the back of this sheet or additional sheets if needed.) (N=129 comments) b. If federal funding for the Job Access program were no longer available for your project, how and to what extent would these individual projects' resources, clients, and services be affected, if at all? (Please use the back of this sheet or additional sheets if needed.) (N=124 comments)

35. Please provide below any additional comments that you have about the implementation of the Job Access program, your project, the transportation needs of low-income people and people moving from welfare to work, or any issues raised by questions contained in this questionnaire. (Please use the back of this sheet or additional sheets if needed.) (N=72 comments)

Thank you for your help.

In addition, Sam Abbas, Ernie Hazera, JayEtta Hecker, Landis Lindsey, Susan Michal-Smith, LuAnn Moy, Josephine Perez, and Frank Taliaferro made key contributions to this report.

The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
Pursuant to the Transportation Equity Act for the 21st Century (TEA-21), GAO periodically reports on the implementation of the Job Access and Reverse Commute (Job Access) program. The program is designed to assist low-income people in accessing employment opportunities. This report examines the Department of Transportation's (DOT) efforts to evaluate the program and report the results to the Congress. GAO also examined (1) transportation and related services provided by the program; (2) whether the program fosters collaboration between Job Access grantees and others in the design, financing, and delivery of those services; and (3) whether Job Access services would be financially sustainable after the end of Job Access funding. Since 1999, DOT has awarded over $355 million for 352 Job Access grants in 42 states to help low-income people get to job opportunities and job support services, such as training and child care. Job Access grantees used various approaches to provide transportation for this purpose, such as expanding existing bus service, adding new areas to be served by an existing fixed transit route, or enhancing the frequency of the service. The program has met its goal of encouraging collaboration among transportation, human service, and other community-based agencies in Job Access service design, implementation, and financing. However, most of the program's services are not financially sustainable. For example, 12 percent of Job Access grantees indicated that they could continue their services after the end of program funding, while 41 percent reported they would likely terminate or decrease services, and 47 percent were uncertain about their ability to continue those services. DOT has not evaluated the Job Access program or reported to the Congress, as TEA-21 requires. The department therefore is missing an opportunity to provide timely information to the Congress that could assist it in deciding whether to reauthorize the program in 2003. 
GAO has several concerns about DOT's plans to evaluate the Job Access program. For its evaluation, DOT initially planned to use one performance measure--employment sites served. However, using a methodology that is based on this measure would yield limited information because it only partially addresses the program's goal of providing transportation to low-income people and does not address other program goals and criteria. Federal Transit Administration (FTA) program officials informed GAO that they also plan to use other performance measures, but they did not provide sufficient detail for GAO to comment on the quality of their evaluation. Moreover, the final report's date of issuance and its contents are uncertain because the report has yet to be reviewed and approved by the Office of the Secretary of Transportation and the Office of Management and Budget. DOT officials did not provide GAO with an estimated date for submitting the report to the Congress.
We assessed MDA’s progress made during fiscal year 2003 toward its Block 2004 program goals by reviewing the progress of individual BMDS elements, because MDA program goals are ultimately derived from element-level efforts. We selected seven elements for our review on the basis of congressional interest and because they account for about 70 to 75 percent of the cumulative research and development funds MDA budgeted for fiscal years 2002 through 2009. We compared each element’s actual cost, completed activities, demonstrated performance, and test results with their internal fiscal year 2003 cost, schedule, performance, and testing goals. To assess progress toward program schedule goals, we examined, for each element, prime contractor Cost Performance Reports, the Defense Contract Management Agency’s analyses of these reports, System Element Reviews, and other agency documents to determine whether key activities scheduled for the fiscal year were accomplished as planned. We also developed a data collection instrument to gather additional, detailed information on completed program activities, including tests, design reviews, prime contracts, and estimates of element performance. Because MDA allocates a large percentage of its budget to fund prime contractors that develop system elements, and because MDA’s cost goal did not apply to fiscal year 2003 expenditures, we limited our review of cost- related matters to assessments of prime contractor cost performance. To make these assessments, we applied earned value analysis techniques to data captured in contractor Cost Performance Reports. We compared the cost of work completed with the budgeted costs for scheduled work for the fiscal year 2003 period. Results were presented in graphical form to determine fiscal year 2003 trends. We also used data from the reports to project the likely costs at the completion of prime contracts through established earned value formulas. 
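The earned value formulas referred to above are the standard ones applied to Cost Performance Report data. The sketch below illustrates them in Python; the variable names and dollar figures are purely hypothetical and are not drawn from any actual contractor report.

```python
# Sketch of standard earned value indicators as applied to Cost
# Performance Report data. All figures here are illustrative.

def earned_value_metrics(bcws, bcwp, acwp, bac):
    """Compute common earned value indicators.

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed (actual cost)
    bac:  budget at completion for the contract
    """
    return {
        "cost_variance": bcwp - acwp,      # negative -> cost overrun
        "schedule_variance": bcwp - bcws,  # negative -> behind schedule
        "cpi": bcwp / acwp,                # cost performance index
        "spi": bcwp / bcws,                # schedule performance index
        "eac": bac * acwp / bcwp,          # estimate at completion (BAC / CPI)
    }

# Hypothetical contract: $100M of work scheduled to date, $90M earned,
# $120M actually spent, against a $500M budget at completion.
m = earned_value_metrics(bcws=100.0, bcwp=90.0, acwp=120.0, bac=500.0)
print(f"cost variance: {m['cost_variance']:+.1f}M")
print(f"projected cost at completion: {m['eac']:.1f}M")
```

The estimate at completion shown here is the simple CPI-based projection; analysts often compute several such projections and report a range.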
We also analyzed data related to system effectiveness provided by MDA, focusing on the Ground-based Midcourse Defense and Aegis Ballistic Missile Defense elements—the weapon components of the Block 2004 defensive capability. We supplemented this information by holding discussions with, and attending overview briefings presented by, various program office officials. Furthermore, we interviewed officials within DOD’s office of the Director, Operational Test and Evaluation, to learn more about the adequacy of element test programs and the operational capability demonstrated by them to date. As we reviewed documents and held discussions with agency officials, we looked for evidence of key cost, schedule, and technical risks. We identified key risks as those for which we found evidence of problems or significant uncertainties that could negatively affect MDA’s ability to develop, demonstrate, and field a militarily useful capability within schedule and cost estimates. During our review, we observed shortcomings in how MDA defines its goals that could make oversight by external decision makers more difficult. To pursue this matter, we examined how MDA reported its goals by reviewing MDA budget submission statements that were submitted for fiscal years 2004 and 2005. In addition, to gain insight into the formulation of the goals, we held numerous discussions with MDA officials and reviewed acquisition documents such as MDA’s Integrated Master Plan, Integrated Program Plan, and System Integration Strategy. 
Our work was primarily performed at MDA headquarters, Arlington, Virginia; Aegis Ballistic Missile Defense Program Office, Arlington, Virginia; Airborne Laser Program Office, Albuquerque, New Mexico; Command, Control, Battle Management, and Communications Program Office, Arlington, Virginia; Ground-based Midcourse Defense Program Office, Arlington, Virginia; Kinetic Energy Interceptors Program Office, Arlington, Virginia; Space Tracking and Surveillance System Program Office, Los Angeles, California; and the Theater High Altitude Area Defense Project Office, Huntsville, Alabama. We also visited the office of the Director, Operational Test and Evaluation, Arlington, Virginia. We conducted our review from June 2003 through April 2004 in accordance with generally accepted government auditing standards. MDA has the mission to develop and field a Ballistic Missile Defense System capable of defeating ballistic missiles of all ranges in all phases of flight. In particular, the system is intended to defend the U.S. homeland against intercontinental ballistic missile (ICBM) attacks and to protect deployed U.S. armed forces, which are operating in or near hostile territories, against short- and medium-range ballistic missiles. Additionally, the BMDS is to evolve into a system that is capable of defending friends and allies of the United States. Figure 1 depicts the three phases of a missile’s flight during which the BMDS is designed to engage it. Much of the operational capability of the Block 2004 BMDS results from capabilities developed in legacy programs. These include the GMD, Aegis BMD, and Patriot elements. Existing space-based sensors would also be available, including Defense Support Program satellites, for the early warning of missile launches. The Block 2004 BMDS can be viewed as a collection of semi-autonomous missile defense systems interconnected and coordinated through the Command, Control, Battle Management, and Communications (C2BMC) element. 
Functional pieces of system elements, such as radars or interceptors, are referred to as “components.” Block 2004 program goals involve developmental activities of five MDA elements: Aegis BMD, ABL, C2BMC, GMD, and Theater High Altitude Area Defense (THAAD). As indicated above, three of these five elements— GMD, Aegis BMD, and C2BMC—comprise the Block 2004 defensive capability that is currently being fielded. MDA is also funding the development of two other elements—Space Tracking and Surveillance System (STSS) and Kinetic Energy Interceptors (KEI)—but these elements are part of future blocks of the MDA missile defense program. Table 1 provides a brief description of these seven elements. More complete descriptions of these elements are provided in the appendixes of this report. During Block 2006, MDA will focus on fielding additional hardware and enhancing the performance of the BMDS. For example, MDA plans to field additional GMD interceptors at Fort Greely, add new radars that can be deployed overseas, and incorporate enhanced battle management capabilities into the C2BMC element. For Blocks 2008 and 2010, MDA plans to augment the Block 2006 capability with boost phase capabilities being developed in the ABL and KEI programs. Additionally, MDA plans to field the THAAD element for protecting deployed U.S. forces against short- and medium-range ballistic missiles. According to MDA officials, the integrated BMDS offers more than simply the deployment of individual, autonomous elements. A synergy results from information sharing and enhanced command and control, yielding a layered defense with multiple shot opportunities. This preserves interceptor inventory and increases the opportunities to engage ballistic missiles. MDA developed overarching goals for the development and fielding of the Block 2004 BMDS. 
The goals describe the composition of Block 2004; provide the costs and schedule associated with its development, testing, and fielding; and summarize its performance capabilities. As part of MDA’s Statement of Goals, MDA also identified and scheduled a number of events that must be completed by individual program elements in 2004 and 2005 if the goals are to be achieved. At the core of MDA’s Block 2004 program goals is the continued development and testing of ABL, Aegis BMD, C2BMC, GMD, and THAAD. These goals are referred to as “Block 2004 Development Goals” and identify the developmental areas MDA is funding during the Block 2004 time frame, that is, during calendar years 2004 and 2005. MDA also established a complementary set of goals—referred to as Block 2004 “Operational Alert Configuration” Goals—in response to the President’s December 2002 direction to begin fielding a ballistic missile defense capability. These fielding goals build directly upon the development goals and identify the operational missile defense capability that MDA expects to deliver by the end of December 2005. The Block 2004 cost goal covers budgeted costs for development and fielding during calendar years 2004-2005. When MDA submitted its fiscal year 2004 budget in February 2003, MDA declared that its Block 2004 cost goal was $6.24 billion. However, MDA recently revised its Block 2004 cost goal with the submission of its fiscal year 2005 budget in February 2004. The revision reflects updated developmental costs and an update to the additional costs associated with the initial fielding. MDA’s Block 2004 cost goal is now $7.36 billion. The missile defense capability of Block 2004 is primarily one for defending the United States against long-range ballistic missile attacks. As summarized in table 2, it is built around the GMD element, augmented by Aegis BMD radars, and integrated by the C2BMC element. 
The Block 2004 BMDS also contains the Patriot PAC-3 element for point defense of deployed U.S. armed forces against short- and medium-range ballistic missiles. Because MDA no longer has funding or management responsibility over Patriot, an assessment of progress made by the Army in fiscal year 2003 toward delivering the listed capability was not included in this review. Patriot-specific goals are, therefore, not listed in the table. In this section, we summarize our assessment of MDA’s progress in fiscal year 2003 toward achieving Block 2004 program goals. Key risks associated with developing and fielding system elements are summarized, as well. Detailed evaluations of element progress and risks are given in the appendixes of this report. MDA identified a number of events that must be completed to meet Block 2004 program goals. These activities, which are part of MDA’s program goals, are ultimately derived from element-level efforts and, in general, have completion dates in calendar years 2004 or 2005 to coincide with the start of defensive operations. Progress made toward achieving Block 2004 goals, relative to these defining events, is summarized in tables 3 through 6. MDA reports that performance indicators associated with Block 2004 elements are generally on track for meeting expectations. MDA’s methodology, which relies on analyses, models, and simulations rather than system-level testing, leads it to predict with confidence that the September 2004 defensive capability will provide full coverage of the United States against limited attacks from Northeast Asia. However, testing in 2003 did little to demonstrate the predicted effectiveness of the system’s capability, as an integrated system, to defeat ballistic missiles. Without sufficient test data to anchor MDA’s analyses, models, and simulations, the predicted effectiveness of the system will remain largely unproven when IDO is available in September 2004. 
As discussed below, the uncertainty stems from a lack of system-level testing—using production-representative hardware under operationally realistic conditions—of the Aegis BMD and GMD elements and the highly scripted nature of developmental tests to date. The GMD program, which comprises the largest portion of the Block 2004 defensive capability, has demonstrated the capability to intercept target warheads in flight tests over the past 5 years. In fact, the program has achieved five successful intercepts out of eight attempts. However, because of range limitations, these flight tests were developmental in nature and, accordingly, engagement conditions were repetitive and scripted. Furthermore, as noted in our recent reports on missile defense, none of the GMD components of the defensive capability have been flight tested in their fielded configuration (i.e., with production-representative hardware). For example, the GMD interceptor—booster and kill vehicle—will not be tested in its Block 2004 configuration until the next intercept attempt, which the GMD program office plans to conduct in the fourth quarter of fiscal year 2004. This intercept attempt will also test, for the first time, battle management software that will be part of the September 2004 defensive capability. Finally, MDA does not plan to demonstrate the operation of the critical GMD radar, called Cobra Dane, in flight tests before fielding IDO. Similarly, the Aegis BMD program has demonstrated the capability to intercept a non-separating target through its successes in four out of five attempts. These successes are noteworthy, given the difficulty of achieving hit-to-kill intercepts. In his fiscal year 2002 report, DOD’s Director, Operational Test and Evaluation (DOT&E) noted the successes but pointed out that the flight tests were developmental in nature and neither operationally realistic nor intended to be so. 
Test scenarios and target “presentation” were simple compared with those expected to be encountered during an operational engagement. While MDA is increasing the operational realism of its developmental flight tests—e.g., the Aegis Ballistic Missile Defense element employed an operational crew during its December 2003 intercept attempt—tests completed to date are highly scripted. We used contractor Cost Performance Reports to assess the prime contractors’ progress toward MDA’s cost and schedule goals during fiscal year 2003. The government routinely uses such reports to independently evaluate these aspects of the prime contractors’ performance. Generally, the reports detail deviations in cost and schedule relative to expectations established under the contract. Contractors refer to deviations as “variances.” Positive variances—activities costing less or completed ahead of schedule—are generally considered as good news and negative variances—activities costing more or falling behind schedule—as bad news. We addressed cost performance at the element level because the agency does not generate a single, overarching cost performance report for its contracts. Our detailed findings are presented in the element appendixes of the report. As shown in table 7, the Aegis BMD, C2BMC, STSS, and THAAD prime contractors performed work in fiscal year 2003 at or near budgeted costs. However, work completed in the ABL and GMD programs cost more than budgeted. The ABL prime contractor overran its budgeted cost by approximately $242 million, and the GMD prime contractor’s work cost about $138 million more than expected. Our analysis of fiscal year 2003 activities indicates that there are key risks associated with developing and fielding BMDS elements. Key risks are those for which we found evidence of problems or significant uncertainties that could negatively affect MDA’s ability to develop, demonstrate, and field a militarily useful capability within schedule and cost estimates. 
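The variance sign convention used in these Cost Performance Report assessments can be sketched as follows. The ABL and GMD figures are the approximate overruns cited above; the zero entries are placeholders for contractors that performed at or near budgeted cost, not reported values.

```python
# Fiscal year 2003 cost variances (budgeted cost of work performed minus
# actual cost), in millions of dollars, using the sign convention in the
# text: positive variances are generally good news, negative are bad news.
# ABL and GMD are the approximate overruns cited in this report; the
# other entries are illustrative placeholders.

fy2003_cost_variance_millions = {
    "ABL": -242.0,   # work cost about $242 million more than budgeted
    "GMD": -138.0,   # work cost about $138 million more than budgeted
    "Aegis BMD": 0.0,
    "C2BMC": 0.0,
    "STSS": 0.0,
    "THAAD": 0.0,
}

def assessment(variance_millions):
    """Map a cost variance to the report's good news / bad news reading."""
    if variance_millions > 0:
        return "under budget"
    if variance_millions < 0:
        return "over budget"
    return "at budgeted cost"

for element, cv in fy2003_cost_variance_millions.items():
    print(f"{element:<10} {cv:+8.1f}M  {assessment(cv)}")
```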
Key risks associated with BMDS elements expected to be fielded during Block 2004—Aegis BMD, GMD, and C2BMC—are exacerbated by the tight schedule to meet the September 2004 date for IDO. Element-specific risks are summarized below. A more complete discussion of these risks can be found in the appendixes of this report. ABL. The complexity and magnitude of integration activities to deliver a working system for the shoot-down demonstration have been substantially underestimated. Accordingly, the program continues to be at risk for additional cost growth and schedule slips. We also found that the uncertainty regarding the element’s ability to control environmental vibration on the laser beam—jitter—is a serious performance risk for the Block 2004 aircraft. Furthermore, we note that weight distribution across the airplane may be a key risk for future blocks. Aegis BMD. The program office is under a tight deadline to complete the development and testing of long-range surveillance and tracking software by the September 2004 date for IDO. By September, this software will not have been field-tested, and hence, its performance will be uncertain. However, program officials acknowledged that the greatest performance risk to the Aegis BMD program pertains to its interceptor’s divert system, the subsystem that generates “divert pulses” to control the orientation and direction of the interceptor’s kill vehicle. Program officials do not expect to implement any design changes to the divert system for the first set of five missiles being procured. Even with a reduced divert capability, program officials affirm that the missile’s performance is adequate for Block 2004 threats. Finally, there are also questions about the contractor’s readiness to produce interceptors. C2BMC. The C2BMC is tracking and mitigating key BMDS-specific risks pertaining to the fielding of the initial capability by September 2004 and the Block 2004 defensive capability by December 2005. 
Notably, development of the C2BMC element is proceeding concurrently with the development of other BMDS elements, and changes in one element’s design—especially in how that element interfaces with the C2BMC element—could cause temporary incompatibilities during Block 2004 integration that could delay fielding. In addition, the BMDS concept of operations continues to evolve, leading to uncertainties about how the C2BMC element will be operated. Finally, uncertainty regarding the reliability of communications links with the Aegis BMD element threatens to degrade overall system performance. GMD. The GMD program faces significant testing and performance risks that are magnified by the tight schedule to meet the September 2004 date for IDO. Specifically, delays in flight testing—caused by delays in GMD interceptor development and delivery—have left the program with only limited opportunities before IDO to demonstrate the performance of fielded components and to resolve any problems uncovered during flight testing. In addition, uncertainty with the readiness of interceptor production could prevent MDA from meeting its program goal of fielding 20 interceptors by December 2005. Finally, an unresolved technical issue with the kill vehicle adds uncertainty to element performance. KEI. From discussions with program officials, we found that KEI software costs could be underestimated, putting the program at risk for cost growth. The program office also acknowledges that it faces challenges in developing the first operational boost phase intercept capability that employs hit-to-kill concepts. STSS. 
The STSS program is on track for completing activities leading to the launch of the two demonstration satellites in 2007, provided that unforeseen problems do not arise during the process of (1) testing, assembling, and integrating hardware components of the satellites, which have been in storage for 4 years, and (2) developing software and integrating software and hardware—areas that historically have been responsible for negatively affecting a program’s schedule. THAAD. The THAAD program office is on track to develop, demonstrate, and field the Block 2008 THAAD element within schedule and cost estimates, provided that the contractor performs as efficiently as it has in the past. One risk area that covers the entire BMDS for Block 2004 (and future blocks) is whether the capabilities being developed and fielded will work as intended. As discussed above, testing to date has done little to demonstrate system effectiveness, because production-representative hardware is still being developed and has yet to be flight tested. Furthermore, tests to date have been developmental in nature and, accordingly, engagement conditions were repetitive and scripted. In the future, MDA is taking a number of actions to increase testing complexity and realism. However, it has no plans to conduct operational testing on the IDO or Block 2004 configurations being fielded. An operational test assesses the effectiveness of the system against the known threat and its suitability in an environment that mimics expected use. U.S. law requires that such tests be carried out on major defense acquisition programs under the oversight and with the approval of DOT&E. The law requires that DOT&E report test results to the Secretary of Defense and congressional defense committees before a full-rate production decision is made. As the principal operational test and evaluation official within DOD, DOT&E is independent of program offices and reports directly to the Secretary. 
In establishing MDA, the Secretary of Defense specified that when a decision is made to transition a block configuration to a military service for procurement and operations, an operational test agent would be designated. The Secretary specified further that an operational test and evaluation would be conducted at the end of the transition stage. In fielding IDO and the Block 2004 configuration, no decision is being made to transition the block configuration to a service. Thus, no operational test agent is being designated and no operational test and evaluation is planned. Furthermore, the fielding of IDO and the Block 2004 configuration is not connected to a full-rate production decision that would clearly trigger statutory operational testing requirements. MDA plans to incorporate both developmental and operational test requirements in integrated flight tests. It will also conduct operational assessments that involve the warfighter. Nonetheless, because these tests are scripted by MDA, they do not provide the opportunity for an independent assessment of how the equipment and its operators will function under unscripted, unforeseen conditions. An independent and objective assessment would, instead, involve having an independent operational test agent plan and manage tests that demonstrate operational effectiveness and suitability and having DOT&E approve the test plans and report its assessment of the test results to the Secretary and Congress. Such independent, operationally realistic testing of a missile defense capability being fielded for operational purposes, which meets the statutory definition of “operational test and evaluation,” would not be considered a developmental test and evaluation for which DOT&E is precluded from being assigned responsibility. MDA revised its program goals in February 2004 to reflect that the first BMDS block—Block 2004—will cost $1.12 billion more but consist of fewer fielded components than originally planned. 
Despite these revisions, we observed shortcomings in how MDA defines its goals. Specifically, the goals do not provide a reliable and complete baseline for accountability purposes and investment decision making because they can vary year to year, do not include life-cycle costs, and are based on assumptions about performance not explicitly stated. MDA’s program goals can vary from year to year. The Block 2004 cost goal of $7.36 billion is actually a budget allocation for program activities associated with the block's development and fielding. The flexibility available in its acquisition strategy allows MDA to request additional funding for the second year of a block or defer or cancel program activities if the budget allocation is not sufficient to deliver the BMDS as planned. Because the budget (i.e., the cost goal) and program content are subject to change over the 2-year block period, the goal cannot serve as a reliable baseline for measuring cost, schedule, and performance status over time. A comparison of MDA’s fiscal year 2004 and 2005 budget submissions illustrates how the cost goal and the program content can vary from year to year. In fiscal year 2004, MDA’s cost goal for Block 2004 was $6.24 billion. When MDA submitted its fiscal year 2005 budget, the Block 2004 cost goal had increased to $7.36 billion. Additionally, Aegis BMD interceptor inventory decreased from 20 to 9, the number of Aegis BMD destroyers upgraded for the long-range surveillance and tracking mission decreased from 15 to 10, and the potential operational use of ABL and the sea-based X-band radar as sensors is no longer part of Block 2004. The 2004 and 2005 budget submissions also presented changes in cost estimates for Blocks 2006, 2008, and 2010. Estimated costs for Block 2006 increased by $4.73 billion, which is largely attributed to an increase in planned GMD funding by $2.23 billion for fiscal years 2005 through 2007. 
Estimated costs for Block 2008 decreased by $8.33 billion, from $16.27 billion to $7.93 billion. The decrease results largely from MDA’s deferring KEI development to future blocks, which alone reduces estimated KEI costs for Block 2008 by $7.23 billion. Finally, estimated costs for Block 2010 increased by approximately $3.42 billion, of which $2.89 billion for the KEI program contributes to the increase. MDA program officials acknowledged the increase in the Block 2004 cost goal but indicated that it should be seen as an adjustment resulting from internal realignments of funds over the fiscal years 2004-2009 Future Years Defense Plan. For example, as noted above, a significant portion of funds originally allocated to Block 2008 was redistributed to Blocks 2004, 2006, and 2010. Overall, between its 2004 and 2005 budget submissions, MDA’s fiscal years 2004-2009 budget increased by about $3.23 billion, an increase of 6.5 percent. Program officials also noted that MDA’s budget increase is the direct result of additional funds being planned for fielding, as opposed to an increase in funding for research and development. While such flexibility is commonly seen with concept and technology development efforts, the Secretary of a military department is required by law to establish cost, schedule, and performance baselines for major defense acquisition programs entering the System Development and Demonstration (SDD) phase of the acquisition cycle. The program manager is required to report deviations from established baselines to senior DOD management. The baseline description also forms the basis of regular reporting to Congress on the status of the program through the Selected Acquisition Reports, including significant cost overruns. In establishing MDA in January 2002, the Secretary of Defense directed that BMDS elements enter the standard acquisition process at the Production and Deployment phase, which follows SDD. 
MDA has not addressed whether, when, or how the BMDS, its block configurations, or its program elements will enter SDD—the typical initiation of an acquisition program. Accordingly, the agency has not established baseline descriptions for its block configurations that can be used to reliably measure the progress of the BMDS during development and for consistently reporting to Congress and senior DOD management on the cost, schedule, and performance status of the program. Congressional decision makers have traditionally used Selected Acquisition Reports to oversee the acquisition of weapon systems programs. Accordingly, MDA produces a Selected Acquisition Report annually, but because the missile defense program is not treated as being in the SDD phase of acquisition, reporting is limited. Programs that have not begun the SDD phase are not required to report life-cycle cost estimates, including all costs for procurement, military construction, and operations and maintenance, in the Selected Acquisition Report. Life-cycle cost estimates are important because an investment in a weapon system has ramifications beyond developing and procuring an inventory. Once operational, the system requires resources to ensure its continued operation, maintenance, and sustainment. For example, operators and maintenance personnel must be available to keep the system on alert and ready to perform its mission. Such costs—which MDA refers to as “operations and sustainment” costs—have been under review by MDA since 2003. Original MDA estimates for operations and sustainment costs across the Future Years Defense Plan (fiscal years 2004-2009) ranged from $1.9 billion to $3.5 billion. However, during the fall of 2003, MDA worked with the military services to better define requirements, which lowered the estimates while still maintaining acceptable levels of readiness and alert. 
Since there is no precedent for estimating what the actual contractor logistical services costs might be, MDA agreed to fund the GMD contractor for these costs for fiscal years 2005 and 2006 and begin aggregating actual costs. MDA estimates that contractor logistical services will cost approximately $105 million in fiscal year 2005. We note, in addition, that Congress expressed specific interest in obtaining life-cycle cost information for missile defense programs entering Engineering and Manufacturing Development (EMD), otherwise known as SDD. Specifically, Congress required MDA, with its statement of goals, to provide an annual program plan for each missile defense program that enters EMD. Section 232(b) of the act further specified that each program plan is to include a funding profile (estimates of significant research and development, procurement, and construction costs), together with the estimated total life-cycle costs of the program. During the period covered by our review, MDA did not provide any program plans detailing life-cycle costs. MDA officials told us that the agency is working to better define its operations and sustainment costs and include total life-cycle costs in future Selected Acquisition Reports to Congress. They recognized that an understanding of total life-cycle costs for elements being fielded would help the military services plan their future budgets for procurement and operations and sustainment. However, MDA has not committed to when those reports would include total life-cycle costs. BMDS performance goals are based on assumptions regarding the system’s capability against threats under a variety of engagement conditions. However, critical assumptions used in establishing these goals—such as the type and number of decoys—are not clearly explained. Without knowing these implicit assumptions, an understanding of the operational capability of the fielded system is incomplete.
As defined in table 8, MDA utilizes three performance metrics—probability of engagement success, defended area, and launch area denied—for measuring the capability of the Block 2004 BMDS to engage and negate ballistic missile attacks. MDA assigned values to its performance metrics to communicate the defensive capability of the Block 2004 system against ballistic missile attacks but did not explain the assumptions underlying those values. For example, although the probability of engagement success is affected by adversary parameters—trajectory, decoys, and warhead type—as well as the performance and orchestration of the defense elements, we found that these factors are not explicitly defined and provided in MDA’s Statement of Goals. Because threat characteristics such as countermeasure sophistication and warhead dynamics all factor into the determination of the performance metrics, knowledge of these assumptions is vital to understanding the true capability of the system. MDA’s new acquisition strategy for acquiring ballistic missile defenses is designed to give MDA greater flexibility so it can, for example, more easily develop and introduce new technologies to address evolving threats. However, having such flexibility does not diminish the importance of ensuring accountability over the substantial investments in missile defense. In exercising their oversight and funding responsibilities, decision makers in Congress and DOD would benefit from having more information about the expected performance and costs of the BMDS. Although MDA is executing a test program that aims, over time, to make its tests more complex and realistic, the agency has no plans to incorporate unscripted conditions found in operational testing. 
If independent, operationally realistic testing of block configurations being fielded were conducted and DOT&E approved, assessed, and reported on this testing, decision makers in Congress and DOD would have greater assurance that the fielded BMDS is an effective system when considering further investments in the system. With its statutorily based independence, DOT&E is in the best position to determine whether a weapon system can be trusted to work as intended when placed in the hands of the warfighter and to report operational test results objectively. We recognize that MDA may not have time before fielding IDO or Block 2004 to plan and carry out such testing. However, the agency should have the opportunity to conduct operationally realistic testing of the Block 2004 configuration, once it is fielded. Notwithstanding that interceptor inventory is being procured, operations and sustainment costs are being funded, and the IDO system is nearing the time when it will be fielded, MDA has not treated the development and deployment of this capability as an acquisition program (i.e., one that has entered the SDD phase) subject to reporting program status (from the baseline) and life-cycle cost information that Congress traditionally receives for its oversight responsibilities. Accordingly, accountability would be strengthened if MDA provided Congress with the program status and life-cycle cost information that is typically associated with SDD status. Such actions would also help the military services with their future budgeting for procurement and operations and sustainment costs. MDA officials told us that the agency is working toward including life-cycle cost information in these reports. Follow-through is needed. Another means for MDA to strengthen accountability is through an improved definition of BMDS program goals and explanation of changes using the current reporting mechanisms.
The Selected Acquisition Reports and MDA budget submissions would be much more useful for oversight and investment decision making if program goals for block configurations being fielded reflect program baselines that do not vary year-to-year; year-to-year changes in estimates are fully explained; full life-cycle costs for block configurations being fielded are presented; and assumptions behind performance goals are explicitly stated. To provide increased confidence that a fielded block of the BMDS will perform as intended when placed in the hands of the warfighter and that further investments to improve the BMDS through block upgrades are warranted, we recommend that the Secretary of Defense take the following three actions: direct the Director, MDA, to prepare for independent, operationally realistic testing and evaluation for each BMDS block configuration being fielded and appoint an independent operational test agent to plan and conduct those tests; assign DOT&E responsibility for approving such test plans; and direct DOT&E to report its evaluation of the results of such tests to the Secretary and the congressional defense committees. To provide decision makers in DOD and Congress with a reliable and complete basis for carrying out oversight of the BMDS program, we recommend that the Secretary of Defense take the following two actions: direct the Director, MDA, to establish cost, schedule, and performance baselines (including full life-cycle costs) for each block configuration of the BMDS being fielded and direct the Director, MDA, to explain year-to-year variations from the baselines in the Selected Acquisition Report to Congress. DOD’s comments on our draft report are reprinted in appendix I. DOD did not concur with our three recommendations on operational testing and evaluation but concurred with our two recommendations regarding cost, schedule, and performance baselines.
In not concurring with our first recommendation, DOD stated that there is no statutory requirement for it to operationally test developmental items. That is, DOD is required only to operationally test a major defense acquisition program such as the ballistic missile defense system to assist in the decision as to whether to enter full-rate production. However, because of the capability-based structure under which MDA is operating, the decision to enter full-rate production will not be made in the foreseeable future and, in fact, may never occur. Given that significant resources have already been expended to procure inventory and field the system, and given that decision makers are continually being asked to invest further in the system, we believe DOD should provide evidence from independent, objective testing that the system will protect the United States as intended in an operationally representative environment. In not concurring with our first recommendation, DOD also stated that MDA is attempting to incorporate operational test objectives into developmental tests. For example, MDA conducted an Aegis BMD intercept test in December 2003 that included some conditions likely to be encountered during an armed conflict. However, as noted in our recent report on missile defense testing, MDA has not yet begun to incorporate operational realism on tests of the GMD element, which provides the bulk of the initial BMDS capability. GMD flight tests leading up to IDO are constrained by range limitations, are developmental in nature and, accordingly, are executed with engagement conditions that are repetitive and scripted. It is unlikely that MDA will be able to make developmental tests completely operationally realistic. Developmental tests are, by definition, conducted under controlled conditions so that the cause of design problems can be more easily identified and fixed and the achievement of technical performance specifications can be verified. 
Additionally, because operational test conditions are more stressing, operational testing provides an opportunity to identify problems or deficiencies that might not be revealed in developmental tests but need to be addressed in subsequent BMDS blocks. In not concurring with our second recommendation, DOD stated that DOT&E already has statutory responsibility for reviewing and approving operational test plans but is prohibited from approving plans for developmental testing. However, our recommendation is based on our view that the block configurations being fielded should be operationally tested. These tests would not be the developmental tests for which DOT&E is prohibited from approving. Because of its independence from the program, we believe DOT&E is in the best position to approve the plans for, and evaluate the results of, operational tests that are not required by statute: tests of block configurations being fielded that do not involve a full-rate production decision. DOD also did not concur with our third recommendation that DOT&E report the results of operational tests to the Secretary of Defense and to Congress. In responding to this recommendation, DOD cited the existing statutory reporting requirements for DOT&E, under which it has assessed the MDA test program. However, for the reasons cited above, we continue to believe that operational tests of the BMDS configurations being fielded are needed. The statutory requirement for operational testing and for DOT&E’s reporting responsibilities is not clearly triggered by the fielding of block configurations that do not involve a full-rate production decision. Also, although we recognize that DOT&E is providing an annual assessment of the BMDS to defense committees each year, we believe this assessment is limited. It is based on developmental tests that, because of their scripted nature, do not provide optimal conditions for assessing the system's readiness for operational use.
DOD also provided technical comments to this report, which we considered and implemented, as appropriate. We are sending copies of this report to the Secretary of Defense and to the Director, Missile Defense Agency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. The major contributors to this report are listed in appendix X. The Aegis BMD element generally completed work planned for fiscal year 2003 on schedule. However, the program faces risks that include the uncertainty of software performance for the initial surveillance and tracking capability, questions about the contractor’s readiness to produce interceptors, and concerns about the interceptor’s divert system. Schedule: In fiscal year 2003, the program office initiated software upgrades to enable Aegis ships to perform the ballistic missile defense mission, began a series of activities related to producing and delivering the Aegis BMD interceptor, and conducted ground and flight tests to verify Aegis BMD performance. Although the program suffered its first failed intercept attempt in June 2003, overall, four of five intercept attempts conducted by the Aegis BMD program have been successful. The Department of Defense (DOD) budgeted about $4.8 billion for Aegis BMD development and fielding during fiscal years 2004 through 2009. Earlier, DOD expended approximately $2.9 billion between fiscal years 1996 and 2003 for related developmental efforts. Performance: The Aegis BMD element demonstrated the capability to intercept a non-separating target, that is, a target whose warhead has not separated from the booster. However, we were unable to fully assess progress in achieving performance goals during fiscal year 2003, because the program office began reporting performance indicators in calendar year 2004. 
Cost: Our analysis of prime contractor cost performance reports shows that the interceptor contractor completed fiscal year 2003 work at slightly less cost than budgeted. However, we were unable to determine how work progressed on the interceptor’s high-risk divert system—the component causing the greatest performance risk to the program—because that work was not reported in cost performance reports. Additionally, we could not readily assess cost and schedule performance of other Aegis BMD components associated with missile defense, because cost performance reports were not in a form we could use for our analysis, and these efforts did not undergo an integrated baseline review. Risks: Program officials are working under a tight schedule to complete the development and testing of software intended to enhance surveillance and tracking functions. Officials said there is inadequate time to flight test these new functions before September 2004. Moreover, they share our assessment that the greatest performance risk to the Aegis BMD program pertains to development of the interceptor’s divert system that steers the interceptor into the target. During a flight test in June 2003, subassemblies of the divert system failed, and the target was not intercepted. Program officials do not expect to implement any design changes to the divert system for the first set of five missiles being procured. Even with a reduced divert capability, program officials affirm that the missile’s performance is adequate for Block 2004 threats. Finally, program officials share our concern that missile production and delivery is a program risk. The Aegis Ballistic Missile Defense (Aegis BMD) element is a sea-based missile defense system that builds on the existing capabilities of Aegis-equipped Navy cruisers and destroyers. Aegis BMD is being designed to protect deployed U.S. armed forces and critical assets from short- and medium-range ballistic missile attacks.
Key capabilities include the shipboard AN/SPY-1 radar, hit-to-kill interceptors, and command and control systems to detect, track, and destroy enemy warheads in the midcourse phase of flight. Aegis BMD is also expected to be used as a forward-deployed sensor that provides surveillance and early tracking of long-range ballistic missiles to support the Ground-based Midcourse Defense (GMD) mission. The program office is enhancing the existing Aegis Weapon System and Standard Missile (SM) currently installed on Navy cruisers and destroyers. The Aegis Weapon System was originally developed to protect U.S. Navy ships from air, surface, and subsurface threats. Planned hardware and software upgrades to the Aegis Weapon System will provide for enhanced tracking and target discrimination, which are functions needed to carry out the missile defense mission. The Aegis BMD interceptor, referred to as SM-3, is a solid propellant, four-stage, hit-to-kill missile designed to intercept ballistic missiles above the atmosphere. SM-3 makes use of the existing SM-2 propulsion stack (booster and dual thrust rocket motor) for the first and second stages. A third-stage rocket motor and a kinetic warhead (a hit-to-kill warhead known as the “kill vehicle”) complete SM-3. The first increment of the Aegis BMD element is expected to deliver an operational capability in the 2004-2005 time frame as an interoperable element of the Ballistic Missile Defense System (BMDS). Known as Block 2004, this increment will inaugurate Aegis BMD’s dual role for the missile defense mission. First, the element will be used as a forward-deployed sensor for the surveillance and tracking of long-range ballistic missiles, and second, it will be used to engage and intercept short- and medium-range ballistic missiles. According to program officials, Block 2004 is being rolled out in three phases: Initial fielding of the surveillance and tracking capability.
By September 2004, the Missile Defense Agency (MDA) aims to upgrade three destroyers to be capable of performing the surveillance and tracking function in support of the GMD mission. Initial fielding of an intercept capability. By April 2005, two upgraded cruisers with an inventory of five interceptors are expected to be available for engaging short- and medium-range ballistic missiles. Completion of Block 2004 upgrades of 13 Aegis-equipped ships. By the end of December 2005, MDA aims to have a total of 10 Aegis destroyers available for performing the long-range surveillance and tracking function. In addition, MDA is planning to place up to 10 interceptors on three upgraded cruisers for the engagement role. The Aegis BMD program evolved from efforts in the 1990s to demonstrate the feasibility of a missile defense capability from a ship-based platform. The first demonstration of that concept was the Navy’s Lightweight Exoatmospheric Projectile (LEAP) program, which consisted of four flight tests conducted from 1993 through 1995. The LEAP program successfully married a lightweight exoatmospheric projectile—the kill vehicle—to an existing surface-to-air missile to show that the resulting interceptor could be launched from a ship. Subsequent to this demonstration, in fiscal year 1996, the Navy and the Ballistic Missile Defense Organization initiated the Navy Theater Wide missile defense program, the predecessor to Aegis BMD. Plans called for deploying the first increment of the Navy Theater Wide program—essentially the current Aegis BMD program—in 2010 and a final increment with an upgraded missile at a later, undefined date. The Navy Theater Wide program included an associated effort, the Aegis LEAP Intercept (ALI) program, as a follow-on flight demonstration effort to the earlier LEAP project. The ALI program consisted of a series of flight tests that culminated in 2002 with two successful intercepts using an early version of the SM-3 missile.
The ALI program is the basis for the Aegis BMD Block 2004 program. Aegis BMD development and fielding is proceeding in a series of planned 2-year blocks known as Blocks 2004, 2006, and 2008. Furthermore, funding has been planned for Block 2010, but the configuration of this block has not been defined by MDA. Block 2004. Block 2004 is the first fielded increment to protect deployed U.S. forces and other assets from short- and medium-range ballistic missile attacks. Aegis BMD will also be used as a forward-deployed sensor to provide surveillance and early tracking of long-range ballistic missiles to support the GMD mission. Block 2006. The Aegis BMD Block 2006 configuration builds on the Block 2004 capability. MDA plans to add the capability to defeat long-range ballistic missiles with limited countermeasures, to increase Aegis BMD’s role as a remote sensor, and to assess emerging technologies for the element’s missile. Block 2008. The Aegis BMD Block 2008 configuration will incorporate enhancements to the AN/SPY-1 radar that are expected to improve the radar’s discrimination and command and control functionality so that the element can engage multiple threats simultaneously. The Aegis BMD element generally completed work planned for fiscal year 2003 on schedule. Achievements included initiating Aegis Weapon System upgrades on existing ships, beginning activities for the production and delivery of SM-3 missiles, and accomplishing test events. However, problems that arose with the divert system onboard the interceptor’s kill vehicle during flight-testing have affected future test events, causing delays and modifications to test plans. Aegis BMD program officials told us that they expect to eventually modify 18 Aegis ships with enhanced surveillance, tracking, and intercept functions to make them capable of performing the BMD mission.
These upgrades will improve the capability of the element’s AN/SPY-1 radar to identify the true target (discriminate), enable accurate tracking of long-range ballistic missiles in support of GMD operations, plan engagements, and launch an SM-3 missile to engage a ballistic missile threat. To achieve this enhanced functionality, the Aegis BMD program office is upgrading the Aegis Weapon System of designated ships through a series of software builds or computer programs referred to as CP3.0E, CP3.0, and CP3.1. Aegis BMD program officials stated that they originally planned two software builds—CP3.0 and CP3.1—as incremental increases to the Block 2004 capability through the end of 2005. The program expected that the CP3.1 software build, once developed and installed on Aegis ships, would enhance the existing combat system so that upgraded ships could perform the BMD mission. However, in response to the Presidential Directive to begin fielding a set of missile defensive capabilities in 2004, the Aegis BMD element began the development of an early, interim build referred to as “CP3.0E.” Several software development activities completed in fiscal year 2003 pertain to this build. CP3.0E is to be installed in one or more destroyers by September 2004, but it will enable these destroyers only to surveil and track enemy ballistic missiles. The ships will not be capable of launching interceptors to engage those missiles. According to program documentation, when CP3.0E is installed on ships at sea by September 2004, the program office will have achieved initial defensive operations for the Aegis BMD Block 2004 surveillance and tracking mission. MDA expects CP3.0, the next software build, to augment the surveillance and tracking capability of CP3.0E with an initial engagement capability for Aegis cruisers. The availability of CP3.0 on ships at sea by April 2005 enables initial defensive operations for the Aegis BMD Block 2004 engagement mission. 
Although CP3.0 allows ships to launch SM-3 missiles, this capability applies only to Aegis cruisers and not to Aegis destroyers. The capability to intercept short- or medium-range ballistic missiles is limited to the single cruiser that will be available in April 2005. The third version of the computer program—CP3.1—adds ship defense and planning support for cruisers. MDA intends for CP3.1 to be installed by December 2005, and it is the last software upgrade planned for the Block 2004 time frame. In fiscal year 2003, the program office conducted activities related to the development of the CP3.0E and CP3.0 software builds. All activities occurred within the expected schedule. The major event for CP3.0E was the July 2003 In Process Review. This review ensured that CP3.0E development and installation were on track to occur as planned. The CP3.0 System Design Disclosure, which occurred in March 2003, defined the design of CP3.0 and allowed the program office to proceed with the development of this software build. The program expects to continue developing CP3.0 and CP3.1 in fiscal year 2004 and to install CP3.0E on designated ships. As software builds are completed and installed, Navy cruisers and destroyers will become available to perform their expected missions. As indicated by program officials, table 9 summarizes the availability of Aegis ships for the BMD mission in the Block 2004 time frame. In fiscal year 2003, the Aegis BMD program office undertook a series of missile-related activities to begin procuring missiles for delivery in fiscal year 2004. The Aegis BMD element is developing evolving configurations of the SM-3 missile. The SM-3 “Block 0” configuration, which is used in Block 2004 flight-testing, is capable of intercepting simple non-separating targets. The “Block I” SM-3 configuration will be fielded as part of the BMDS Block 2004 defensive capability and provides a rudimentary target discrimination capability. 
Subsequent SM-3 configurations beyond Block I will not be available until calendar year 2006. Table 10 lists those activities and their respective completion dates. The missile-related activities shown in table 10 occurred as planned, with the exception of the missile nosecone critical design review. Program officials stated that a delay of less than 3 months occurred because the testing facility was not available as originally planned. Table 11 summarizes the delivery of SM-3 missiles in the Block 2004 time frame. The Aegis BMD program conducts both ground- and flight-testing to validate Aegis BMD’s performance. The program office expects flight-testing to progressively demonstrate the element’s capability to engage ballistic missile targets under increasingly complex conditions. Since 1999, the program conducted three flight tests (non-intercept attempts) to demonstrate basic missile functionality, such as booster performance and stage separation. During this same time frame, there have also been five intercept flight tests using the SM-3 missile. Of the five attempts, four were successful intercepts. Ground testing provides the opportunity to validate the flight-worthiness of Aegis BMD subcomponents on the ground before they are used in flight tests. In fiscal year 2003, ground-testing activities focused on the SM-3 missile and a redesigned subcomponent of the missile’s divert system—the Solid Divert and Attitude Control System (SDACS). This subcomponent is a collection of solid-fuel thrusters used to steer the kill vehicle into its designated target. Ground tests of the SDACS were conducted to verify its readiness for flight-testing. When the SDACS ground test program demonstrated good performance with the simpler, more producible SDACS design, the Aegis BMD program office gave approval for its use in flight mission 5 (FM-5). Despite successful ground testing, the SDACS subcomponent did not perform as desired in flight.
The program office is investigating the cause of the failure, but a resolution is not expected until sometime in early 2004. As indicated by program officials, table 12 shows key ground tests planned for fiscal year 2003. Program officials stated that the only ground test that was scheduled to occur in fiscal year 2003, but did not, was the qualification testing of the third-stage rocket motor. The officials told us that the test could not be performed as scheduled because the test facility was shut down for safety reasons after an explosion in another test area at that facility. They noted that modifications are being made to prevent similar incidents. Repairs are expected to continue well into the second quarter of fiscal year 2004, after which rocket motor testing can be resumed. The program office conducted three flight missions—FM-4, FM-5, and FM-6—in fiscal year and calendar year 2003. With the exception of FM-5, these tests proceeded as planned. FM-4, which occurred in November 2002, marked the start of the Aegis BMD Block 2004 flight test phase. FM-4’s primary test objective was to verify an ascent phase intercept against a non-separating ballistic missile target using the Block 0 SM-3 missile, and the objective was achieved. FM-5 had objectives similar to those of FM-4, viz., to intercept an ascending non-separating target. The test also was to demonstrate the operation of the redesigned SDACS in flight. In the end, FM-5 did not achieve an intercept because the SDACS did not perform as expected. FM-6, a third test with objectives similar to those of FM-5, occurred later in calendar year 2003. Because of technical issues that arose in FM-5, the program office delayed FM-6 from September 2003 to December 2003 and modified the test plan. In particular, the program omitted its plan to exercise the full functionality of the newly designed SDACS, which failed during FM-5. Table 13 provides a summary of the flight tests.
The Aegis BMD program has demonstrated the capability to intercept a non-separating target through its successes in FM-2, FM-3, FM-4, and FM-6. These successes are noteworthy, given the difficulty of “hit-to-kill” intercepts. DOT&E’s fiscal year 2002 Report to Congress noted the successes but pointed out that the flight tests were developmental in nature and neither operationally realistic nor intended to be so. Test scenarios and target “presentation” were simple compared with those expected to be encountered during an operational engagement. Furthermore, separating targets, which pose a particular challenge to the Aegis BMD element, will not be assessed until FM-8 is conducted in 2005. While MDA is increasing the operational realism of its developmental flight tests—e.g., the Aegis Ballistic Missile Defense program employed an operational crew in FM-6—tests completed to date are highly scripted. The Aegis BMD program developed a set of performance indicators that provides a top-level characterization of element effectiveness. We were unable to fully assess progress in achieving performance goals during fiscal year 2003, because the program office began reporting performance indicators in calendar year 2004. DOD expects to invest about $4.8 billion in Aegis BMD research and development from fiscal year 2004 through 2009. This is in addition to the $2.9 billion invested from fiscal year 1996 through 2003. The program uses most of the funds it receives to fund the element’s prime contract. In fiscal year 2003, the contractor completed all development work slightly under cost and ahead of schedule. However, because of early development problems with the SM-3 missile, the contractor incurred a cumulative cost overrun of about $39 million at the contract’s completion in August 2003. Aegis BMD costs for the next 6 fiscal years are expected to be around $4.8 billion. This includes funds for Blocks 2004, 2006, and 2008 as well as portions of Block 2010. 
Also included is cooperative work between the United States and Japan on SM-3 component development. Table 14 shows the expected costs of the program by fiscal year through 2009, the last year for which MDA published its funding plans. In fiscal years 2002 and 2003, MDA expended $446.5 million and $384.3 million, respectively, to develop the Aegis BMD element. Including these funds, the Navy and MDA have expended approximately $2.9 billion to develop a sea-based missile defense capability since the Navy Theater Wide program began. The prime contract consumes the bulk of the program’s budget: about 84 percent of the Block 2004 budget supports the prime contractor team and 16 percent supports government efforts. Up until 2003, seven separate contracts covered the development of element components—the Aegis Weapon System, the Vertical Launch System, and the SM-3 missile. Late in the fiscal year, MDA awarded new contracts and reduced the number of contracts to two, an Aegis Weapon System contract and an SM-3 contract. The Aegis Weapon System contract covers all Block 2004 activities. It also provides for initial future block definition activities for Blocks 2006, 2008, and 2010. The SM-3 contract is similarly structured. We used contractor Cost Performance Reports to evaluate the cost and schedule performance of the SM-3 contractor. The government routinely uses these reports to independently evaluate prime contractor performance relative to cost and schedule. Generally, the reports detail deviations in cost and schedule relative to expectations established under the contract. Contractors refer to deviations as “variances.” Positive variances—activities costing less or completed ahead of schedule—are generally considered as good news and negative variances—activities costing more or falling behind schedule—as bad news. 
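The variance convention described above follows standard earned-value practice, in which cost variance compares the budgeted value of the work actually performed against its actual cost, and schedule variance compares it against the work planned. The following is a minimal sketch of that arithmetic; the function name and the dollar figures are illustrative assumptions, not values drawn from the report or from the contractor's Cost Performance Reports.

```python
# Illustrative earned-value variance calculation. The formulas are the
# standard earned-value definitions; the inputs below are hypothetical.

def variances(bcws, bcwp, acwp):
    """Return (cost_variance, schedule_variance).

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed (actual cost)
    """
    cost_variance = bcwp - acwp      # positive: work cost less than budgeted
    schedule_variance = bcwp - bcws  # positive: work completed ahead of plan
    return cost_variance, schedule_variance

# A contractor that earned $95M of budgeted work while spending $102M
# against a $100M plan shows negative (unfavorable) variances:
print(variances(bcws=100.0, bcwp=95.0, acwp=102.0))  # -> (-7.0, -5.0)
```

Under this convention, the positive variances reported for the SM-3 contractor in fiscal year 2003 indicate work completed under budget and ahead of schedule, while the cumulative figures carried from prior years remain negative.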
According to the Aegis BMD program office, contractors produce Cost Performance Reports for the various components of the Aegis BMD element, such as the Aegis Weapon System and the SM-3 missile. However, we were able to assess cost and schedule performance only for the SM-3 missile. Cost Performance Reports associated with missile-defense activities for the other components were not in a form we could use for our analysis, and these efforts did not undergo an integrated baseline review. In the future, the new contracts will provide Cost Performance Reports for both the Aegis Weapon System and SM-3 missile. The SM-3 development contract accounts for approximately 50 percent of Aegis BMD Block 2004 development costs. Our analysis of SM-3 missile Cost Performance Reports shows that the contractor generally improved its cost and schedule performance throughout fiscal year 2003. During this time, the SM-3 missile contractor spent $7.4 million less than originally budgeted and completed planned work slightly ahead of schedule. In addition, in fiscal year 2003, work efforts on major components of the SM-3 were completed generally within their estimated budget and slightly ahead of schedule. The contractor's improved performance in fiscal year 2003 resulted, in part, from the program's removal, in March 2003, of the majority of the SDACS work from the SM-3 contract. As a result, the contractor was no longer required to incorporate SDACS activities, which had been the primary cause of prior cost and schedule growth, when providing Cost Performance Reports. Despite improved performance in fiscal year 2003, the contractor continued to carry negative cost and schedule variances from problems that occurred in prior years. As figures 2 and 3 illustrate, the SM-3 contractor entered fiscal year 2003 with cost overruns of approximately $46 million and with uncompleted work valued at $4.6 million.
By August 2003, however, the contractor reduced its cost overrun and improved its schedule performance. At its completion, the SM-3 contract exceeded its budget by $39 million. According to the contractor, technical problems with the development of the SDACS, kill vehicle, rocket motor, and guidance section, as well as failures during flight and ground tests, were responsible for the majority of the cost overrun on the SM-3 contract. Program officials told us that the majority of the technical problems associated with SM-3’s development, with the exception of the SDACS, have been resolved. The officials said that they do not expect these issues to cause negative variances on the new missile contract. However, technical problems associated with the SDACS could continue to affect cost and schedule performance on the new missile contract. Based on our assessment of fiscal year 2003 activities, we found that the Aegis BMD program faces key risks in fielding the planned initial capability by September 2004 and the Block 2004 defensive capability by December 2005. These risks include the uncertainty of CP3.0E software performance at the time of initial fielding, questions about the contractor’s readiness to produce interceptors, and concerns about SDACS development. Program officials are concerned with the inability to test the CP3.0E software in an operational environment (e.g., during a flight test) before September 30, 2004, when the element is fielded for its surveillance and tracking role. Officials told us that there is not adequate time to test the new surveillance and tracking functionality before initial defensive operations are declared, but risk reduction efforts (such as testing earlier builds of the software) are in place to minimize potential problems. 
Although the risk reduction efforts under way would not validate the full functionality of CP3.0E, the officials expect that these efforts will provide increased confidence that the CP3.0E software will perform as desired at the time of initial defensive operations. They noted that the need to deliver and install CP3.0E before September 30, 2004, was driving much of the schedule risk. Should the CP3.0E effort fall behind schedule, the program would need to compress its schedule to meet the deadline for initial defensive operations (IDO). Software estimation research has shown, however, that when development schedules are compressed, the quality of the resulting software can be compromised. We found that missile production and delivery is a key program risk; program officials concurred with our assessment. They indicated that current MDA plans call for the delivery of 11 to 14 SM-3 missiles by the end of 2005. Program officials also stated that the first five missiles are being produced at the contractor's research and development facility. Highly trained technical engineers, with manufacturing observers, are building these developmental missiles. Future production missiles will be built by manufacturing labor with engineering oversight as needed. A transition to this production approach is planned but will not occur until production begins on the next set of 12 missiles. We found that the greatest performance risk to the Aegis BMD program pertains to the development of the SDACS, the subsystem that generates divert pulses to control the orientation and heading of the interceptor's kill vehicle; program officials agreed with our assessment. Ground tests conducted in 2002 revealed problems with the initial SDACS design, specifically with the subassemblies supporting the operation of the divert pulses.
To find a solution to these problems, MDA in 2002 pursued multiple designs for the SDACS subassemblies of the kill vehicle, intending to use the most promising for the program. On the basis of ground test results, MDA selected a single-piece variation of the original design (referred to as the “Monolithic Design”). This design employs a multi-pulse concept whereby (1) a sustain-mode is used to provide low-energy divert and attitude control of the kill vehicle and (2) an energetic pulse-mode is available for maximum divert capability. When the Monolithic SDACS design with its sustain- and pulse-mode divert capability proved successful in ground testing, the program planned to flight-test it during FM-5. However, during FM-5, the subassemblies supporting the energetic pulse-mode failed, causing the kill vehicle to be less maneuverable. Program officials stated that they are investigating the failure and believe that the “diverter ball,” which acts as a valve to control the pulse, caused it. Incorporating the high-energy pulse into the SDACS increased internal operating pressures, and under the thermal stress, the protective coating of the diverter ball cracked, disabling normal SDACS operation. Aegis BMD program officials stated that they do not expect to implement any design changes related to pulse-mode divert capability in 2004. Nonetheless, MDA is moving ahead with the procurement of 5 of the 20 Block 2004 missiles utilizing the Monolithic SDACS with reduced divert capability. According to program officials, these less-capable missiles provide a credible defense against a large population of the threat and can be retrofitted to support pulse-mode operations upon the completion of design updates and testing. Without the energetic pulse-mode, performance against certain threats is limited, because the kill vehicle has less divert capability to compensate for initial targeting errors. 
This degradation is threat-dependent, that is, not significant for non-separating targets because the kill vehicle typically does not have to radically change course to engage a warhead attached to the booster tank. However, separating threats under specific scenarios may be problematic. The kill vehicle may need to expend additional energy to change course and engage a warhead that is physically separated from its booster tank. ABL activities in fiscal year 2003 progressed much more slowly and were more costly than anticipated. Nearly all hardware deliveries, integration activities, and test events slipped. The program's underestimation of the complexity of integrating ABL subcomponents into a working system, in particular, resulted in significant cost growth and delays during fiscal year 2003. Schedule: The ABL program continued with the development of the prototype aircraft, but as noted above, fiscal year 2003 activities progressed more slowly than anticipated. For example, four of six key test events were either deferred indefinitely or delayed over a year. Furthermore, quality issues and difficulty with integration activities resulted in the slip of a critical test milestone—the demonstration of individual laser modules linked together to form a single laser beam, known as “First Light.” At the end of fiscal year 2003, the expected date for this demonstration was March 2004, but the event continues to slip. As a consequence of the test delays, the lethal demonstration continues to be pushed back. The Department of Defense (DOD) budgeted about $3.1 billion for ABL development during fiscal years 2004 through 2009. Earlier, the Air Force invested approximately $1 billion from 1996 through 2001, and MDA expended about $1 billion in 2002 and 2003 for related developmental efforts.
Performance: At this stage of ABL development—before the laser has been operated at full power or critical technologies have been demonstrated in flight tests—any assessment of effectiveness is questionable. However, 9 of the 12 performance indicators used by the program office to monitor progress point to risk in achieving Block 2004 goals. Cost: Our analysis of prime contractor cost performance reports indicates that ABL cost performance deteriorated throughout fiscal year 2003. The contractor overran budgeted costs by $242 million and could not finish $28 million worth of work as planned. The underestimated complexity of integrating ABL subcomponents into a working system was the primary driver for the cost growth. Risks: Our analysis indicates that the complexity and magnitude of integration activities—delivering a working system for the lethal demonstration—have been substantially underestimated. Accordingly, the program continues to be at risk for cost growth and schedule slips. In addition, a major performance risk for ABL Block 2004 involves controlling and stabilizing the high-energy laser beam so that vibration does not degrade the beam’s aimpoint. Program officials stated that they are working to resolve this issue but cannot demonstrate final resolution before flight testing in 2005. The Airborne Laser (ABL) element is a missile defense system designed to shoot down enemy missiles during the boost phase of flight, the period after launch when the missile is powered by its boosters. As an element of the Missile Defense Agency’s (MDA’s) Boost Defense Segment, ABL is expected to engage enemy ballistic missiles early in their trajectory before warheads and countermeasures can be released. ABL plans to use a high-energy chemical laser to defeat enemy missiles by rupturing a missile’s motor casing, causing the missile to lose thrust or flight control.
ABL’s goal is to prevent the delivery of the missile’s warhead to its intended target. ABL was initially conceived as a theater system to defeat short- and medium-range ballistic missiles. However, its role has been expanded to include the full range of ballistic missile threats, including intercontinental ballistic missiles (ICBMs). In addition, ABL could be used as a forward-deployed sensor to provide accurate launch point, impact point, and trajectory data of enemy missiles to the overarching Ballistic Missile Defense System (BMDS) in support of engagements by other MDA elements. The ABL element consists of the following three major components integrated onboard a highly modified Boeing 747 aircraft. In addition, ground support infrastructure for chemical storage, mixing, and handling is a necessary component of the element. High-energy chemical oxygen-iodine laser (COIL). The laser, which generates energy through chemical reactions, consists of six laser modules linked together to produce megawatt levels of power. Because the laser engages targets at the speed of light, ABL can destroy missiles more quickly than conventional boost-phase interceptors, a significant advantage. Beam control/fire control (BC/FC). The BC/FC component’s primary mission is to maintain the beam’s quality as it travels through the aircraft and atmosphere. Through tracking and stabilization, the BC/FC ensures that the laser’s energy is focused on a targeted spot of the enemy missile. Battle management/command and control (BMC2). The BMC2 component is expected to plan and execute the element’s defensive engagements. It is being designed to work autonomously using its own sensors for launch detection, but it could also receive early warning data from other external sensors. ABL’s current development is based on more than 25 years of scientific research in the Departments of Defense and Energy.
The program evolved primarily from airborne laser laboratory research, which developed applications for high-energy lasers. The laboratory’s research culminated in a demonstration showing that a low-power, short-range laser was capable of destroying a short-range, air-to-air missile. In 1996, the Air Force initiated the Airborne Laser program to develop a defensive system that could destroy enemy missiles from a distance of several hundred kilometers. Developmental testing for the program was expected to conclude in 2003 with an attempt to shoot down a short-range ballistic missile target. However, in 2002, management authority and funding responsibility transferred from the Air Force to MDA. In accordance with MDA planning, the Airborne Laser program restructured its acquisition strategy to conform to a capability-based approach. ABL development is proceeding in a series of planned 2-year blocks. The near-term blocks are known as Blocks 2004, 2006, and 2008. Other blocks may follow, but on the basis of recent budget documentation, MDA has not yet defined their content. Block 2004. The overall Block 2004 goal is to demonstrate the feasibility of the prototype ABL aircraft to defeat—via directed laser energy—a short-range, threat-representative ballistic missile. This concluding test event generally is referred to as the lethal shoot-down demonstration. MDA has no plans to deliver an ABL contingency capability in the Block 2004 time frame. Block 2006. The Block 2006 ABL program makes use of the Block 2004 aircraft, but the block’s focus is on testing, interoperability with the BMDS, and increased supportability for an emergency operational capability. Block 2008. The program expects to procure a second, upgraded ABL aircraft in the Block 2008 time frame. It will incorporate upgrades for enhanced lethality and increased operational suitability. Block 2008 will also focus on making ABL more affordable. 
During fiscal year 2003, the ABL program planned to complete a series of activities in preparation for Block 2004. Although the program made some progress, planned activities progressed much more slowly than anticipated. These activities included the following: designing, fabricating, and delivering subcomponent hardware critical to the operation of the ABL element (hardware delivery); integrating and testing subcomponents as functioning components; and completing a test milestone referred to as “First Light,” the first demonstration—in a ground-test facility—of the integration of six individual laser modules to produce a single beam of laser energy. ABL contractors delivered critical ABL element hardware during fiscal year 2003, including subcomponents of the BC/FC component. However, in each case, hardware delivery was originally scheduled for the end of fiscal year 2002. (See table 15.) Because these hardware deliveries were delayed, the schedule for subsequent integration and demonstration activities was also affected. Table 16 summarizes the status of major Block 2004 ABL test events, scheduled sometime during fiscal year 2003. As illustrated, four of the six test events were either deferred or delayed over a year due to late hardware and software availability, subcomponent test failures, and numerous design flaws. Consequently, the lethal demonstration—the focus of Block 2004 development—has been delayed until February 2005 at the earliest. Other than the surveillance and tracking tests, which were conducted in flight and have been completed, all scheduled testing listed in table 16 will be performed in ground facilities, such as the System Integration Laboratory (SIL) at Edwards Air Force Base, California. The Director, MDA, has made the achievement of Block 2004’s “First Light”—to prove that individual laser modules can be successfully integrated and operated to generate a single laser beam—a decisive event for the ABL program. 
In April 2003 testimony before the Senate Appropriations Committee, Subcommittee on Defense, the Director stated that his confidence in meeting the schedule goal for the lethality demonstration would increase tremendously if “First Light” occurred in 2003. “First Light” did not occur in February 2003 as scheduled and slipped throughout the fiscal year. As of March 2004, the test event had not been rescheduled. Numerous and continuing issues have caused the event to slip, including supply, quality, and technical problems. For example, specialized valves have been recalled twice, laser fluid management software has been delayed due to inadequate definition of requirements, and improperly cleaned plumbing and material issues have required over 3,000 hours of unplanned work. In addition, delays in hardware delivery occurred in almost every month of fiscal year 2003. As a result of the slip in “First Light,” the program office did not exercise a contract option to acquire the Block 2008 aircraft. The office expected to exercise the option and make the first payment to the contractor, $30 million of the $170 million total, during the fourth quarter of fiscal year 2003. The remaining payments of $40 million and $100 million were scheduled for fiscal years 2004 and 2005, respectively. Because this test event continues to slip, program officials do not know when they will initiate the acquisition of the second aircraft. Quantitative assessments of ABL effectiveness for boost-phase defense are necessarily based on end-to-end simulations of ABL operation, because the element has yet to be demonstrated in flight. At this stage of development—before the laser has been operated at full power or flown to examine the jitter issue—any assessment of element effectiveness is necessarily questionable. Nonetheless, the program office monitors performance indicators to determine whether the element is on track in meeting operational performance goals. 
Based on data provided to us by MDA, 9 of 12 performance indicators point to some risk in achieving Block 2004 goals. One indicator in particular, pertaining to the technology of managing “jitter,” was identified as a risk item by the program office early on and continues to be monitored. This issue is discussed in more detail later in this appendix. The cost of the ABL program continues to grow. MDA expects to invest about $3.1 billion from fiscal year 2004 through 2009 in the element’s development. This is in addition to the approximately $2 billion invested from the program’s initiation in 1996 through fiscal year 2003. The program uses most of the funds it receives to fund the element’s prime contract. However, in fiscal year 2003, the contractor overran its budgeted costs by $242 million. ABL program costs for the next 6 fiscal years are expected to be around $3.1 billion. This covers research and development efforts for Blocks 2004, 2006, and 2008. Table 17 shows the expected costs of the program by fiscal year through 2009, the last year for which MDA published its funding plans. ABL costs from 1996 through fiscal year 2001 were Air Force costs that were not broken out by block but totaled a little over $1 billion. During that time, the greatest amount expended on the program in a given fiscal year was $311.4 million in fiscal year 2000. When the ABL program transitioned to MDA in fiscal year 2002, the conversion to a more robust development program increased projected costs. The planned budget increased to approximately $465 million and $585 million in fiscal years 2002 and 2003, respectively. Program officials stated that they have also expanded the development staff in response to numerous test failures, quality problems, and complex engineering issues, all of which caused annual costs to increase after ABL’s transition to MDA.
The government routinely uses contractor Cost Performance Reports to independently evaluate prime contractor performance relative to cost and schedule. Generally, the reports detail deviations in cost and schedule relative to expectations established under the contract. Contractors refer to deviations as “variances.” Positive variances—activities costing less or completed ahead of schedule—are generally considered as good news and negative variances—activities costing more or falling behind schedule—as bad news. Our analysis of contractor Cost Performance Reports indicates that ABL cost and schedule performance deteriorated throughout fiscal year 2003. In fiscal year 2003 alone, the ABL program incurred cost overruns of $242 million, which resulted primarily from integration and testing issues. Program officials indicated that it has taken longer to fabricate plumbing, install hardware, and conduct system checkouts. Furthermore, hardware that did not perform as expected and safety preparations tended to slow down the program. In short, the effort required for integration-related activities was substantially underestimated. Our analysis shows that these problems contributed to more than 80 percent of the overall cost overrun. The same analysis indicates that the contractor could not finish $28 million of work as planned during the same period of time. Finally, based on the contractor’s cost and schedule performance in fiscal year 2003, we estimate that the current ABL contract will overrun its budget by between $431 million and $942 million. Figures 4 and 5 show the contractor’s performance in fiscal year 2003. The negative variances indicate that the ABL program is exceeding its budgeted costs and is not completing scheduled work as planned. The element’s largest contract, known as the Block 2004 prime contract, was awarded with a period of performance extending from November 1996 until about 6 months after the lethal demonstration.
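Overrun ranges like the $431 million to $942 million estimate for the Block 2004 prime contract are commonly projected from earned-value indices. The report does not disclose the exact method or inputs, so the following Python sketch is a hedged illustration of the standard CPI-based and CPI-times-SPI-based estimates at completion, with all dollar inputs hypothetical.

```python
# Hedged sketch of a standard earned-value projection. All inputs ($ millions)
# are hypothetical; the report does not publish the underlying budget at
# completion (BAC), earned value (EV), or actual cost (AC) figures.
def eac_range(bac, ev, ac, spi):
    """Project a low/high estimate at completion (EAC).

    Low end: remaining work performed at the historical cost performance
    index (CPI = EV / AC).
    High end: remaining work performed at CPI * SPI, a common worst-case
    assumption that schedule pressure also drives up cost.
    """
    cpi = ev / ac
    low = ac + (bac - ev) / cpi
    high = ac + (bac - ev) / (cpi * spi)
    return low, high

low, high = eac_range(bac=2200.0, ev=1500.0, ac=1742.0, spi=0.95)
overrun_low, overrun_high = low - 2200.0, high - 2200.0
```

Because CPI and SPI below 1.0 both signal unfavorable performance, the projected completion cost exceeds the budget at completion, and the gap between the two assumptions yields a range rather than a point estimate.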
However, the program office recently announced that it will close out this contract, valued at approximately $2.2 billion, and award, in increments, follow-on contracts for the remaining Block 2004 work. The program manager told us that by awarding the remaining work in about one-year increments, the contractor should be able to establish more accurate cost and schedule estimates. In addition, the new contract structure is expected to encourage the contractor to gain knowledge from near-term tests, rather than concentrating on the longer-term goal of conducting the lethal demonstration. Based on our assessment of fiscal year 2003 activities, we found that the complexity and magnitude of integration activities—to deliver a working system for the lethal shoot-down demonstration—have been substantially underestimated. Accordingly, the program continues to be at risk for cost growth and schedule slips. We also found that the uncertainty regarding the element’s ability to control environmental vibration on the laser beam—jitter—is a serious performance risk for the Block 2004 program. Finally, we found that weight distribution across the airplane may be a key risk for future blocks. The major performance risk for Block 2004 involves controlling and stabilizing the high-energy laser beam so that vibration unique to the aircraft environment does not degrade beam aimpoint. Reducing this so-called jitter is crucial if the laser beam is to impart enough energy on a fixed spot of the target to rupture the missile’s motor casing. Currently, jitter control is developed and tested in a laboratory environment and is the least mature of ABL’s critical technologies. Program officials told us that they are improving jitter analysis tools and even considering potential hardware design changes to reduce the level of vibration. They also noted that final tuning and resolution of the jitter issue would not be demonstrated before flight testing is conducted in 2005.
If future blocks require additional laser modules to increase ABL’s military utility, weight distribution across the aircraft’s frame may become a key issue. The program office recognizes this problem and has initiated a weight-reduction and weight-redistribution effort that includes component redesign and composite materials. The program office is also studying a possible redesign of the aircraft frame that would allow laser modules to be moved forward to relieve stress on the airframe. The Command, Control, Battle Management, and Communications (C2BMC) element is the integrating and controlling element of the Ballistic Missile Defense System (BMDS). It is designed to link all system elements, manage real-time battle information for the warfighter, and coordinate element operation to counter ballistic missile attacks in all phases of flight. The C2BMC team executed the program within budget but slightly behind schedule in fiscal year 2003. Important activities, such as the completion of software testing and operator training, are continuing in fiscal year 2004 to ready the element for initial defensive operations (IDO) by September 2004. Schedule: The C2BMC program is on track to deliver the software needed for the September 2004 defensive capability. However, the program faces a tight schedule to complete software development and testing. Other activities, such as training, also are being completed to make the system operational. The program office indicated that all such activities are on track for timely completion. The C2BMC program is working toward the delivery of a limited capability by September 2004 followed by an upgrade in defensive capabilities by the end of 2005. Performance: The program office predicts that key indicators of C2BMC operational performance will meet established requirements when the element comes online in September 2004. Tests, which began in September 2003, will determine if C2BMC’s technical objectives are being achieved. 
Test results since the end of fiscal year 2003 have been positive thus far. The Department of Defense (DOD) budgeted about $1.3 billion for C2BMC development during fiscal years 2004 through 2009. Earlier, MDA expended $165 million in fiscal years 2002 and 2003 for element development. Cost: Our analysis of the prime contractor’s cost performance reports shows that the contractor completed planned work under budget but slightly behind schedule. Specifically, the contractor underran budgeted costs by $5.3 million in fiscal year 2003 because of a slower-than-anticipated increase in staffing needed for the new IDO requirements. Key risks: The C2BMC is tracking and mitigating key BMDS-specific risks pertaining to the fielding of the initial capability by September 2004 and the Block 2004 defensive capability by December 2005. Notably, development of the C2BMC element is proceeding concurrently with the development of other elements in the BMDS. Changes in one element’s design—especially in how that element interfaces with the C2BMC element—could delay C2BMC development and fielding. In addition, the BMDS concept of operations continues to evolve, leading to uncertainties about how the C2BMC element will be operated. Finally, the uncertainty regarding the reliability of communications links with the Aegis BMD element threatens to degrade overall system performance. In spite of these communications problems, the existing capability is sufficient to support IDO performance goals. The Command, Control, Battle Management, and Communications (C2BMC) element is being developed as the overall integrator of the Ballistic Missile Defense System (BMDS). Its objective is to tie together all system elements—such as GMD and Aegis BMD—so that system effectiveness is enhanced beyond that achieved by stand-alone systems. Unlike other system elements, C2BMC has neither a sensor nor a weapon. Rather, it is primarily a software system housed in command centers or suites.
The C2BMC program is working to deliver a limited operational capability in the 2004-2005 time frame. The principal function of the first increment, Block 2004, is to provide situational awareness to certain combatant commanders and others—through the dissemination of, for example, early warning data—enabling them to monitor a missile defense battle as it unfolds. It also will provide certain combatant commanders with the ability to perform missile defense planning. However, battle management functions like centralized weapons allocation—such as determining the number and timing of interceptor launches—will not be part of the Block 2004 capability but are expected to be part of future C2BMC blocks. Over time, the C2BMC element will be enhanced to provide overarching control and execution of missile defense engagements with the aim of implementing “layered defense” through the collective use of individual BMDS elements. As the name indicates, C2BMC comprises three major components: Command and control. The command and control component is designed to plan, control, and monitor missile defense activities. When fielded, the command and control component provides warfighting aids needed by the command structure to formulate and implement informed decisions. In particular, the component is meant to quickly replan and adapt the element to changing mission requirements. Battle management. The role of the battle management component is to formulate and coordinate the various missile defense functions—surveillance, detection, tracking, classification, engagement, and kill assessment—needed to execute the ballistic missile defense mission. The planned battle management component will direct the operation of various BMDS elements and components, consistent with pre-established rules of engagement, based upon data received from system sensors. Communications. Communication is a key enabler for the integration of the BMDS.
The objective of the communications component is to manage and disseminate the information necessary to accomplish the battle management and command and control objectives. The C2BMC program is following the MDA capability-based acquisition approach that emphasizes testing, spiral development, and evolutionary acquisition through the use of 2-year capability blocks. Within these blocks, MDA expects to evolve the C2BMC element through a series of software upgrades known as “spirals,” each of which increases the element’s capability to perform the ballistic missile defense mission. MDA initiated the C2BMC program in 2002 as a new element of the BMDS. Program officials indicated that Block 2004 C2BMC software is based on the Air Force’s Combatant Commander’s Integrated Command and Control System, the Air Force’s Joint Defensive Planner software, and GMD-developed fire control (battle management) software. C2BMC development efforts are aligned according to Block 2004, Block 2006, and beyond. Block 2004. The Block 2004 defensive capability is being rolled out in two phases: initial defensive operations (IDO) and the Block 2004 defensive capability. By September 2004 when IDO is available, C2BMC will provide situational awareness, planning capabilities, and the communications “backbone” that allows warfighters to monitor the ballistic missile defense battle. The software build associated with IDO’s defensive capability is referred to as “Spiral 4.3.” MDA is working with combatant commanders to define the capabilities of “Spiral 4.5”—the final version of the Block 2004 defensive capability that is expected to be fielded by December 2005—which will be an enhancement of the IDO C2BMC capability defined by Spiral 4.3. MDA is also activating C2BMC suites at U.S. Strategic Command (USSTRATCOM), U.S. Northern Command (USNORTHCOM), U.S. Pacific Command (USPACOM), and other locations including the National Capital Region. Block 2006.
The incorporation of battle management capabilities in the C2BMC element begins with Block 2006. The element will provide real-time battle management to fuse available sensor information, track the threat throughout its entire trajectory, and select the appropriate elements to engage the threat. For example, the C2BMC battle manager may use radars across multiple elements to generate a single track of the threat and direct GMD to launch interceptors. Additional C2BMC sites will be activated during this time frame. C2BMC’s long-term objective is to tie all BMDS elements and sensors into a distributed, worldwide, integrated, and layered missile defense system. The C2BMC program deputy director indicated that the program is on schedule to meet IDO and Block 2004 expectations, that is, to have the BMDS on alert by the end of September 2004 for IDO and upgraded by the end of December 2005. To achieve this goal, the C2BMC element is developing, testing, and verifying Block 2004 C2BMC software (software delivery); integrating the C2BMC element into the BMDS; and making the BMDS operational, including warfighter Concept of Operations (CONOPS) development, warfighter training, and activation of C2BMC sites. Table 18 summarizes the principal activities pertaining to the development and testing of the first three spirals of Block 2004 C2BMC element software. The development of Spiral 4.3 is nearly completed, and BMDS-level testing (Cycle-3 testing and Cycle-4 testing) of this spiral will be conducted to some extent before IDO, e.g., during GMD integrated flight tests and war games. The program’s Spiral Engineering Team has not fully defined the capabilities planned for Spirals 4.4 and 4.5, the software builds leading up to the Block 2004 defensive capability of December 2005. The team expects to complete the definitions of Spirals 4.4 and 4.5 in March 2004 and July 2004, respectively.
The C2BMC element is upgrading existing communications systems and developing capabilities to allow all BMDS components to exchange data, including command and control orders. Table 19 summarizes the principal activities completed in fiscal year 2003 pertaining to C2BMC’s role in system integration and communications. These activities were generally completed on time. A variety of activities needed for the C2BMC element to deliver an operational BMDS have been completed or are ongoing. These activities include site activation, which is required before the C2BMC suites are built; the warfighter’s development of a CONOPS; and training military operators to conduct ballistic missile defense missions.

Site activation. Full site surveys have been conducted, site installation plans have been signed, and equipment has been ordered for USSTRATCOM, USNORTHCOM, and USPACOM. This also has been done for one National Capital Region site. Equipment installation will begin at the end of March 2004 and continue throughout the summer.

CONOPS. A conference to write a CONOPS was held in November 2003.

Training. Full operator training is scheduled to begin at USNORTHCOM in June 2004, USSTRATCOM in June 2004, and USPACOM in July 2004. Training for the National Capital Region site is also expected to begin in July 2004. Part of the system-level training is participation in Integrated Missile Defense War Games.

Spiral tests for each software build will determine if C2BMC’s technical objectives are being achieved. These tests are expected to indicate if the program needs to make adjustments, such as adding personnel to work on identified problems. The program office predicts, and planned fiscal year 2004 testing is expected to verify, that all top-level C2BMC performance indicators will meet operational performance goals when the IDO capability comes online in September 2004.
MDA expects to invest about $1.3 billion from fiscal year 2004 through 2009 in the development and enhancement of the C2BMC element. This is in addition to the $165.4 million expended in fiscal years 2002 and 2003. The program uses most of the funds it receives to fund the element’s prime contract. During fiscal year 2003, the contractor completed planned work slightly behind schedule, but the work cost less than projected. The C2BMC program’s planned costs for the next 6 fiscal years are expected to be around $1.3 billion. This includes costs for Blocks 2004, 2006, and 2008. In addition, the program expended $68.0 million and $97.4 million in fiscal years 2002 and 2003, respectively. Table 20 shows expected C2BMC program costs by fiscal year through 2009, the last year for which MDA published its funding plans. The prime contract consumes the bulk of the program’s budget: about 97 percent of the Block 2004 budget supports the prime contractor team and 3 percent supports government efforts. The prime contract is an Other Transaction Agreement (OTA), which functions much like a prime contract. Through an OTA, the C2BMC element is able to take advantage of more collaborative relationships among industry, the government, Federally Funded Research and Development Centers, and University Affiliated Research Centers. The C2BMC Missile Defense National Team (MDNT), for which Lockheed Martin Mission Systems serves as the industry lead, is developing and fielding the C2BMC element of the BMDS. The government routinely uses contractor Cost Performance Reports to independently evaluate the prime contractor’s performance relative to cost and schedule. Generally, the reports detail deviations in cost and schedule relative to expectations established under the contract.
Contractors refer to deviations as “variances.” Positive variances—activities costing less or completed ahead of schedule—are generally considered as good news and negative variances—activities costing more or falling behind schedule—as bad news. In fiscal year 2003, the program expended $97.4 million for all efforts associated with the development of the C2BMC element. Our analysis of contractor Cost Performance Reports indicates that C2BMC’s efforts are being completed with “cost efficiency.” That is, C2BMC work is costing slightly less than estimated. Specifically, there was a $5.3 million cost under-run incurred during fiscal year 2003. (See figure 6.) During this time, the contract also had an average cumulative Cost Performance Index of 1.04, meaning that for every budgeted dollar spent to accomplish scheduled work, the contractor actually completed $1.04 worth of scheduled work. However, contractor Cost Performance Reports showed that work is slightly behind schedule. According to program officials, understaffing is the primary reason for any schedule delays. The combination of a government-directed hiring slowdown and the limited numbers of highly qualified personnel in the areas of command, control, battle management, and communications available to work on the program resulted in a slower than anticipated increase in staffing. To ensure that information reported in Cost Performance Reports can be relied upon, programs generally conduct Integrated Baseline Reviews of the prime contract. The review verifies that the contractor’s performance measurement baseline, against which the contractor measures its cost and schedule performance, includes the work directed by the contract. It also verifies that the budget and schedule attached to each work task are accurate, that contractor personnel understand the work task and have been adequately trained to make performance measurements, and it ensures that risks have been properly identified. 
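The earned value measures cited above follow standard formulas. A minimal sketch (in Python, using hypothetical figures rather than the actual C2BMC contract data) shows how the variance and index numbers are derived:

```python
# Minimal earned value management (EVM) sketch.
# BCWS = budgeted cost of work scheduled, BCWP = budgeted cost of work
# performed (earned value), ACWP = actual cost of work performed.
# The dollar figures below are hypothetical, in millions.

def evm_metrics(bcws, bcwp, acwp):
    cost_variance = bcwp - acwp        # positive = under-run (good news)
    schedule_variance = bcwp - bcws    # negative = behind schedule
    cpi = bcwp / acwp                  # Cost Performance Index
    spi = bcwp / bcws                  # Schedule Performance Index
    return cost_variance, schedule_variance, cpi, spi

# Example: $130.0M of work earned at an actual cost of $125.0M,
# against $132.0M of work scheduled -- under cost, slightly behind schedule.
cv, sv, cpi, spi = evm_metrics(bcws=132.0, bcwp=130.0, acwp=125.0)
print(f"CV={cv:+.1f}M  SV={sv:+.1f}M  CPI={cpi:.2f}  SPI={spi:.2f}")
```

A CPI of 1.04, as reported for the C2BMC contract, means each budgeted dollar spent earned $1.04 of scheduled work; an SPI below 1.0 is consistent with work running slightly behind schedule.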
According to DOD guidance, a review should be conducted within 6 months of the award of a new contract or major change to an existing contract. Although our analysis of C2BMC Cost Performance Reports has not shown any significant cost or schedule variances, an Integrated Baseline Review was not conducted for the Other Transaction Agreement on which we reported the contractor’s cost and schedule performance. According to C2BMC contract officials, the technical baseline was re-established, and budgets and schedules were realigned to reflect changes in mission priorities, namely, to have the element ready and available for IDO. Integrated Baseline Reviews are planned for the future. The C2BMC is tracking and mitigating key BMDS-specific risks pertaining to the fielding of the initial capability by September 2004 and the Block 2004 defensive capability by December 2005. These risks pertain to the integration of C2BMC with other system elements, the continuing evolution of the BMDS CONOPS, and the unreliability of a communications link for the Aegis BMD element. Development of the C2BMC element is proceeding concurrently with the development of other system elements, such as GMD and Aegis BMD. Changes in one element’s design, especially with how it interfaces with the C2BMC element, could result in temporary incompatibilities during Block 2004 integration. The potential consequences include delays in C2BMC development and fielding, increased costs, and reduced software quality. The program office is tracking this item as a key BMDS-level risk and devoting resources to prevent the realization of integration incompatibilities. Changes in the roles and responsibilities of combatant commanders for the missile defense mission are leading to uncertainties in the BMDS concept of operations. This affects how the warfighter prepares, through training and other procedures, to operate the C2BMC element once it becomes operational. 
The C2BMC program office acknowledges this risk and has efforts under way to address it. For example, the office is actively engaging military users in exercises and war games to provide the users with an opportunity to recognize their needs in an operational environment so that they may better define CONOPS requirements. Uncertainty regarding the reliability of communications links with the Aegis BMD element, a system-level risk tracked by the C2BMC program office, threatens to degrade overall system performance. Nonetheless, program officials told us that the existing capability is sufficient to support IDO performance goals and that MDA plans to enhance Block 2004’s performance by upgrading existing communication components. The GMD program completed many planned activities that are expected to lead to the September 2004 initial capability known as IDO. The delay in the development and delivery of GMD interceptors, however, has caused flight tests (intercept attempts) leading to IDO to slip 10 months. These problems also resulted in the growth of program costs.

Schedule: Site preparation, including construction of missile silos and facilities at Fort Greely, Alaska, and Vandenberg Air Force Base, California, is on schedule. Activities to upgrade existing radars are also on track. However, the program has been challenged by developmental and production issues with the interceptor—comprising a booster and kill vehicle—and will not meet MDA’s upper-end goal of delivering and fielding 10 interceptors by September 2004. The GMD program is expected to deliver an initial capability by the end of September 2004, which is known as Initial Defensive Operations (IDO). By the end of calendar year 2005, MDA plans to have augmented the IDO capability with additional interceptors and radars.

Performance: GMD has demonstrated the ability to destroy target warheads through “hit-to-kill” intercepts in past flight tests.
These flight tests, however, were developmental in nature—the element has yet to be tested under operationally realistic conditions. Moreover, as noted above, the flight test program leading up to IDO has been compressed. As a result, MDA has a limited opportunity to characterize GMD’s performance before initial fielding. Nonetheless, the program office contends that GMD is on track to meet operational performance goals. The Department of Defense (DOD) budgeted about $12.9 billion for GMD’s development and fielding during fiscal years 2004 through 2009. Earlier, DOD expended about $12.4 billion between fiscal years 1996 and 2003 for related research and development.

Cost: Our analysis of the prime contractor’s cost performance reports shows that the contractor overran its budgeted costs in fiscal year 2003 by $138 million and was unable to complete $51 million worth of scheduled work. Developmental issues with the interceptor’s booster and kill vehicle have been the leading cause of cost overruns and schedule slips; for example, the interceptor’s development cost $127 million more in fiscal year 2003 than the contractor budgeted.

Risks: GMD faces significant testing and performance risks, which are exacerbated by an optimistic schedule to meet the September 2004 deadline for fielding the initial capability. Specifically, delays in flight testing have left the program with only limited opportunities to demonstrate the performance of fielded components and to resolve any problems uncovered during flight testing prior to September 2004. Uncertainty with the readiness of interceptor production could prevent MDA from meeting its program goal of fielding 20 interceptors by the end of 2005. Finally, an unresolved technical issue with the kill vehicle adds uncertainty to the element’s performance.
The Ground-based Midcourse Defense (GMD) program expects to deliver an operational capability in the 2004-2005 time frame as an interoperable element of the Ballistic Missile Defense System (BMDS). The first increment of the GMD element, known as Block 2004, is being fielded in two major phases:

Initial Defensive Operations (IDO). GMD is expected to deliver an initial capability by the end of September 2004. The principal components include a maximum of 10 interceptors (6 at Fort Greely, Alaska, and 4 at Vandenberg Air Force Base, California); GMD fire control nodes for battle management and execution at Fort Greely and Schriever Air Force Base, Colorado; an upgraded Cobra Dane radar at Eareckson Air Station, Alaska; and an upgraded early-warning radar at Beale Air Force Base, California. With this initial capability, MDA expects to provide the United States with protection against a limited ballistic missile attack launched from Northeast Asia.

Block 2004 Defensive Capability. By the end of calendar year 2005, MDA plans to augment the IDO capability by installing additional interceptors at Fort Greely and Vandenberg Air Force Base (for a total of 20), deploying a sea-based X-band radar, and upgrading the early-warning radar at Fylingdales, England. These enhancements are expected to provide additional protection from intercontinental ballistic missiles (ICBMs) launched from the Middle East.

Figure 7 illustrates the Block 2004 GMD components, which are situated at several locations within and outside of the United States. The GMD element can be traced back to the mid-1980s, when the Department of Defense (DOD) conducted experiments designed to demonstrate the feasibility of employing hit-to-kill technology—the ability to destroy a missile through a direct collision—for missile defense. During the early 1990s, a technology readiness program continued the development of interceptor technology.
These efforts culminated in the establishment of the National Missile Defense (NMD) program in 1996 to develop and field a national missile defense system as a major defense acquisition program. The NMD program office’s mission was to develop a system that could protect the United States from ICBM attacks and to be in a position to deploy the system by 2005, if the threat warranted. The system was to consist of space- and ground-based sensors, early-warning radars, hit-to-kill interceptors, and battle management components. The current GMD program is based directly on research and development conducted by the NMD program. GMD is now one “element” of the overarching BMDS, which is funded and managed by the Missile Defense Agency (MDA). GMD’s development and fielding are proceeding in a series of planned 2-year blocks. The near-term blocks are known as Blocks 2004 and 2006. The developmental efforts of each block incrementally increase element capability by maturing the hardware’s design and upgrading software. Block 2004. During Block 2004, MDA expects to field a basic hit-to-kill capability that can be enhanced in later blocks. Originally, the program’s Block 2004 focus was on development and testing. However, the December 2002 directive by the President to begin fielding a missile defense system in 2004 affected the program’s Block 2004 direction. According to program office officials, this change resulted in GMD’s shifting to a more production-oriented program, accelerating activities to make the element operational. Block 2006. Block 2006 is focused on improving and enhancing the Block 2004 GMD capability. The program expects to improve existing capabilities, field additional interceptors, and conduct tests to demonstrate performance against more complex missile threats and environments. It also expects to upgrade the early-warning radar located at Thule Airbase, Greenland, for expanded sensor coverage. 
The GMD program completed many of the activities planned for fiscal year 2003. For example, the program accomplished non-technical activities such as site preparation and facility construction at many locations, especially at Fort Greely, on or ahead of schedule. Similarly, activities leading to the development and delivery of the element’s battle management component and of radars that the element depends upon to detect and track targets were generally completed on schedule. However, delays in the development and delivery of the GMD interceptor—particularly due to one of its two boosters—caused intercept attempts leading up to IDO to slip 10 months or more. Many of the GMD activities completed in fiscal year 2003 pertain to the construction of infrastructure—missile silos, buildings, and other facilities—at GMD’s various sites. The largest construction effort is at Fort Greely, where missile silos and supporting facilities are being built. Additional construction activities are occurring at Eareckson Air Station and at Vandenberg Air Force Base (AFB), where four missile silos are being modified. According to MDA, all construction activities are on or ahead of schedule. Table 21 summarizes the major construction activities undertaken in fiscal year 2003 and their estimated completion dates. In fiscal year 2003, the GMD program focused on the development of its Block 2004 components: (1) GMD fire control nodes and communications network, (2) upgraded early-warning radars, (3) Cobra Dane radar, (4) sea-based X-band radar, and (5) ground-based interceptors. Many of the activities planned for fiscal year 2003, such as hardware delivery, did not culminate in 2003. Rather, the completion dates are scheduled in fiscal years 2004 or 2005 to coincide with the start of defensive operations. The fire control component integrates and controls the other components of the GMD element.
With input from operators, the fire control software plans engagements and directs GMD components, such as its radars and interceptor, to carry out a mission to destroy enemy ballistic missiles. The in-flight interceptor communications system (IFICS), which is part of the fire control component, enables the fire control component to communicate with the kill vehicle while it is en route to engage a threat. According to contractor reports, the GMD fire control component effort is proceeding on schedule and is expected to be ready for IDO. For example, the installation of equipment for the communication networks and the fire control nodes is on schedule. Additionally, the program completed the installation of a fiber optic ring—the so-called CONUS Ring—that connects all the command, control, and communication networks of the GMD element. The early-warning radar is an upgraded version of existing UHF-band surveillance radars used by the Air Force for strategic warning and attack assessment. For Block 2004, the GMD program is upgrading two early-warning radars—one at Beale AFB and another at Fylingdales Airbase—to enable the radars to more accurately track enemy missiles. The upgrades include improvements to both the hardware and software. Fiscal year 2003 activities related to upgrading the early-warning radar at Beale AFB included developing and testing software; acquiring radar hardware and data processors; completing the design of and constructing the Beale facility; and supporting flight, ground, and radar certification tests. According to program office documentation, the completion of the Beale upgrade is on track for meeting the September 2004 IDO date, even though software development fell behind schedule in fiscal year 2003. Program officials stated that they have not yet begun upgrading the early-warning radar at Fylingdales, which they expect to complete by December 2005.
The Cobra Dane radar, located at Eareckson Air Station on Shemya Island, Alaska, is currently being used to collect data on ICBM test launches out of Russia. Cobra Dane’s surveillance mission does not require real-time communications and data-processing capabilities; therefore, it is being upgraded to be capable of performing the missile defense mission as part of the Block 2004 architecture. Once upgraded, Cobra Dane is expected to operate much like the upgraded early-warning radar at Beale AFB. Although its hardware needs only minor improvement, Cobra Dane’s mission software is being revised for its new application. The program plans to use existing software and develop new software to integrate Cobra Dane into the GMD architecture. It is also modifying the Cobra Dane facility to accommodate enhanced communication functions. In fiscal year 2003, the GMD program completed software development (testing is continuing) and finished the modification of the Cobra Dane facility. In general, the program made significant progress in upgrading the Cobra Dane radar during fiscal year 2003. According to program office documentation and our analysis of GMD’s master schedule, Cobra Dane is on track for meeting the September 2004 IDO date. The GMD program office is managing the development of a sea-based X-band radar (SBX) to be delivered and first tested by the end of Block 2004. SBX will consist of an X-band radar—much like the one located at Reagan Test Site that has been used in past flight tests—positioned on a sea-based platform, similar to those used for offshore oil drilling. The radar is designed to track enemy missiles with high accuracy; discriminate warheads from decoys and other objects; and, if the intercept occurs within SBX coverage, assess whether it was successful.
In fiscal year 2003, MDA initiated the acquisition of various SBX components, including the sea platform, operations and support equipment for the platform, the radar structure, and electronic components. In addition, design and development have continued on the X-band radar to be positioned on the platform. MDA program officials stated that the SBX will be fielded as a test asset by the end of Block 2004 (December 2005), and MDA budget documentation indicates that it will be placed on alert as an operational asset during Block 2006. Modification of the platform and production of the SBX antenna are on schedule, and electronics production is ahead of schedule. The ground-based interceptor—the weapon component of the GMD element—consists of a kill vehicle mounted atop a three-stage booster. The booster, which is essentially an ICBM-class missile, delivers and deploys the kill vehicle into a trajectory to engage the threat. Once deployed, the kill vehicle uses its onboard guidance, navigation, and control subsystem to detect, track, and steer itself into the enemy warhead, destroying it above the atmosphere through a hit-to-kill collision. In fiscal year 2003, the program focused on the development and testing of boosters that will be produced for flight tests, IDO, and the Block 2004 inventory. Booster development actually began in 1998, but because of difficulty encountered by the prime contractor, MDA adopted a dual-booster approach as part of a risk reduction strategy. The development of the booster was transferred to Lockheed Martin, which is developing a variant of the original booster.
The variant is referred to as “BV+.” MDA also authorized the GMD prime contractor to award Orbital Sciences Corporation (OSC) a contract to produce a second booster that is known as the “OSC booster.” On the basis of our review of fiscal year 2003 activities, booster development and production represent major challenges for the GMD program in meeting its Block 2004 goals, as shown below:

Technical. For the most part, the OSC booster has not experienced technical issues preventing it from being tested and produced. However, the BV+ booster has had problems with its first-stage attitude control system. In addition, GMD program officials stated that the BV+ booster is experiencing quality-related problems with its flight computers.

Testing. The OSC booster successfully demonstrated the performance needed for the GMD mission through a series of flight tests. Beginning with integrated flight test 14, which is scheduled for 4Q FY 2004, the OSC booster will be used in all intercept attempts for the remainder of Block 2004. The Lockheed BV+ booster, however, was flight tested in its new configuration in January 2004 after an 11-month slip. According to MDA officials, its use in flight testing and fielding has been deferred to the end of fiscal year 2005.

Production. Because delayed test events are often indicative of development problems, these delays increase the uncertainty of whether the contractors will be able to meet their production goals for IDO and Block 2004. Additionally, accidents at a subcontractor’s facility have jeopardized the delivery of Lockheed BV+ boosters for GMD’s initial deployment. The production facility responsible for propellant mixing for the BV+ upper-stage motors was temporarily shut down following two separate explosions. As a result, MDA is accelerating the production of OSC boosters to compensate for the undelivered Lockheed BV+ boosters.
It is unclear, however, whether OSC has the capacity to produce the additional boosters necessary for IDO. Kill vehicle development is proceeding in parallel with development of the boosters. In fiscal year 2003, the program focused on developing and producing kill vehicles for flight tests scheduled in fiscal year 2004. Similar production-representative articles will be deployed as part of the IDO and the Block 2004 defensive capability. Kill vehicle development and production, however, represent challenges for the GMD program in meeting its Block 2004 goals. For example, the contractor has yet to demonstrate that it can increase the production rate of kill vehicles by 50 percent. As a result of developmental and production issues with the kill vehicle and boosters, the GMD program likely will not be able to meet its goal of delivering 20 interceptors required for the Block 2004 inventory or its upper-end goal of delivering 10 interceptors for IDO. Program documentation indicates that 5, rather than 10, interceptors will be fielded when IDO is declared at the end of September 2004; MDA expects that it will not have 10 interceptors until February 2005. MDA officials did not provide us with a schedule of interceptor deliveries for the remaining 10 interceptors that are to be fielded by the end of Block 2004 (December 2005). The GMD program conducts a variety of tests, the most visible being flight test events. Flight tests may be conducted at the component level. For example, the program has planned and conducted booster validation (BV) flight tests to ensure proper operation of GMD’s two booster designs. However, integrated flight tests (IFTs) are most reflective of the environment in which the various components will be required to operate as an integrated element. During fiscal year 2003, the GMD program office conducted four flight test events: IFT-9, IFT-10, a demonstration flight of the OSC Taurus missile, and one of two booster validation tests (BV-6). 
A summary of information pertaining to these key flight test events is provided in table 22. Of the two intercept tests conducted (IFT-9 and IFT-10), IFT-9 succeeded in intercepting the target while IFT-10 did not. Additionally, both OSC booster tests (OSC demo and BV-6) achieved their booster-related objectives. The table, however, does not reflect the extent of delays on the entire GMD flight test program caused by fiscal year 2003 developmental and delivery issues of the interceptor. As shown in table 23 below, the Block 2004 flight test program leading up to IDO (September 2004)—consisting of booster validation tests and integrated flight tests—has slipped throughout fiscal years 2003 and 2004. As a result, the test schedule leading up to IDO has become compressed. Indeed, the last integrated flight test to be conducted before IDO is declared, IFT-14, is scheduled to occur 1-2 months before this date; originally, the program had scheduled IFT-14 to occur 12 months before IDO and IFT-15 to occur 10 months before IDO. As a result, MDA has limited its opportunity to validate models and simulations of the interceptor’s expected performance, which, in turn, reduces its ability to confidently characterize GMD’s performance prior to the initial fielding. The GMD program, which is the primary portion of the Block 2004 defensive capability, has demonstrated the capability to intercept target warheads in flight tests since 1999. In fact, the program has achieved five successful intercepts out of eight attempts. However, because of range limitations, these flight tests were developmental in nature, and engagement conditions were limited to those with low closing velocities and short interceptor fly-out ranges. As noted in our recent report, none of the GMD components included in the initial defensive capability have been flight tested in their fielded configuration (i.e., with production-representative software and hardware). 
For example, the GMD interceptor—booster and kill vehicle—will not be tested in its Block 2004 configuration until the next intercept attempt, IFT-14, which the GMD program office plans to conduct in 4Q FY 2004. IFT-14 will also test, for the first time, battle management software that will be part of the September 2004 defensive capability. Finally, MDA does not plan to demonstrate the operation of the critical GMD radar, called Cobra Dane, in flight tests before IDO. Therefore, as noted in the Director, Operational Test and Evaluation (DOT&E) Fiscal Year 2003 Annual Report to Congress, assessments of operational effectiveness will be based on theoretical performance characteristics. Nonetheless, the program office told us that performance indicators predict that GMD is on track to meet operational performance goals. DOD budgeted about $12.8 billion during fiscal years 2004 through 2009 for research, development, and fielding of the GMD element. This is in addition to the $12.4 billion already expended between fiscal years 1996 and 2003. Most of the program’s budget is allocated to fund the element’s prime contract. In fiscal year 2003, the contractor overran its budgeted costs by $138 million and was unable to complete $51 million worth of work. MDA estimates that the GMD program will need approximately $12.8 billion over 6 fiscal years to continue developmental and fielding activities associated with Blocks 2004, 2006, and 2008. Table 24 shows the planned costs of the program by fiscal year through 2009, the last year for which MDA published its funding plans. The budget given in table 24 does not capture the full cost of the Block 2004 GMD capability, which we estimate is approximately $18.49 billion. As shown in table 25, our estimate includes the following:

Developmental costs of approximately $12.37 billion, which cover funding from 1996 through 2003. Between 1996 and 2001, DOD expended $6.81 billion to develop the National Missile Defense program.
The knowledge, software, and hardware gained from this program directly contribute to the development of the Block 2004 GMD element. In addition, $5.56 billion was expended in fiscal years 2002 and 2003 for the Block 2004 development of the GMD element. Block 2004 activities, scheduled for fiscal years 2004 and 2005, which are budgeted at $2.20 billion. Block 2006 funds amounting to $3.92 billion that are supporting activities planned for fiscal years 2004 and 2005. When the GMD program allocated its expected budget to planned blocks, it allocated funds earmarked to support Block 2004 activities to the Block 2006 budget. For example, the cost of flight tests conducted during Block 2004 was accounted for in the Block 2006 budget. GMD’s prime contract consumes the bulk of the program’s budget. For example, about 80 percent of the fiscal year 2004-2009 budget is allocated to the prime contractor team and 20 percent to the government. The January 2001 GMD contract, which ends in fiscal year 2007, covers activities performed in Block 2004 and Block 2006. It was awarded prior to major changes in the missile defense program and, accordingly, the block approach and the procurement of interceptors for a defensive capability were not part of the original contract. We used Cost Performance Reports to assess the prime contractor’s cost and schedule performance during fiscal year 2003. The government routinely uses such reports to independently evaluate these aspects of the prime contractor’s performance. Generally, the reports detail deviations in cost and schedule relative to expectations established under contract. Contractors refer to deviations as “variances.” Positive variances— activities costing less or completed ahead of schedule—are generally considered as good news and negative variances—activities costing more or falling behind schedule—as bad news. 
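The $18.49 billion estimate described above is simply the sum of these funding components. The short check below uses the dollar figures from the report (in billions); it only verifies the arithmetic:

```python
# Components of GAO's estimated full cost of the Block 2004 GMD capability,
# in billions of dollars; all figures are taken directly from the report.
development_1996_2001 = 6.81   # National Missile Defense program development
development_2002_2003 = 5.56   # Block 2004 GMD development, FY 2002-2003
block_2004_activities = 2.20   # Block 2004 activities budgeted for FY 2004-2005
block_2006_supporting = 3.92   # Block 2006 funds supporting FY 2004-2005 work

development_total = development_1996_2001 + development_2002_2003
full_block_2004_cost = (development_total + block_2004_activities
                        + block_2006_supporting)

print(f"Developmental costs: ${development_total:.2f} billion")      # $12.37 billion
print(f"Full Block 2004 cost: ${full_block_2004_cost:.2f} billion")  # $18.49 billion
```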
According to our analysis, the contractor’s cost performance in fiscal year 2003 has steadily declined but schedule performance has been mixed. As shown below in figure 8, the GMD contractor exceeded its budgeted costs by approximately $138 million, which equates to 7.1 percent of the contract value over the fiscal year. The contractor also was unable to complete $51 million worth of scheduled work; most of the decline occurred during the second half of the fiscal year. Developmental issues with the interceptor have been the leading contributor to fiscal year 2003 cost overruns and schedule slips. Our analysis shows that the development of the GMD interceptor cost $127.2 million more in fiscal year 2003 than budgeted, and that the kill vehicle accounted for approximately 25 percent of this overrun. Moreover, booster development resulted in a $38 million cost overrun; the Lockheed BV+ booster was responsible for 52 percent of all of the interceptor’s unfinished work. Based on the contractor’s cost and schedule performance in fiscal year 2003, we estimate that the current GMD contract—which ends in September 2007—will overrun its budget by between $237 million and $467 million, with approximately 84 percent of the overrun arising from the interceptor component. The contractor, in contrast, estimates no cost overrun at completion of the GMD contract. The contractor bases this assumption on the planned availability of $63 million in management reserve funds to offset cumulative cost overruns of approximately $128 million. The intended purpose of management reserve funds, however, is not to offset cost overruns; rather, management reserves are a part of the total project budget that should be used to fund undefined, but anticipated, work. Although programs may use management reserves to offset cost variances, most programs wait until the work is almost completed before allocating these funds.
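The variance arithmetic behind this analysis follows standard earned value management conventions: cost variance is earned value minus actual cost, schedule variance is earned value minus planned value, and the cost performance index (CPI) is value earned per dollar spent. The sketch below is illustrative only; the work-performed and actual-cost inputs are hypothetical placeholders chosen so that the resulting variances match the $138 million overrun and $51 million of uncompleted work cited above, not the actual GMD contract figures.

```python
# Earned value management (EVM) metrics of the kind GAO derives from the
# contractor's Cost Performance Reports. Inputs are in millions of dollars
# and are hypothetical placeholders, not actual GMD contract data.

def cost_variance(bcwp, acwp):
    """Budgeted cost of work performed minus actual cost; negative = overrun."""
    return bcwp - acwp

def schedule_variance(bcwp, bcws):
    """Earned value minus planned value; negative = behind schedule."""
    return bcwp - bcws

def cpi(bcwp, acwp):
    """Cost performance index: value of work earned per dollar actually spent."""
    return bcwp / acwp

def eac(bac, cpi_value):
    """Simple estimate at completion, assuming current efficiency persists."""
    return bac / cpi_value

# Placeholder figures chosen so the variances match the report's FY 2003 totals.
bcwp, acwp, bcws = 2622.0, 2760.0, 2673.0
print(cost_variance(bcwp, acwp))      # -138.0 (cost overrun)
print(schedule_variance(bcwp, bcws))  # -51.0 (uncompleted scheduled work)
print(round(cpi(bcwp, acwp), 2))      # 0.95 (95 cents earned per dollar spent)
```

A projection such as GAO’s $237 million to $467 million overrun range can be produced by applying an estimate-at-completion formula like `eac()` across a range of assumed performance indices.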
The GMD contractor, in contrast, has completed only about 50 percent of the work directed by the program office. Program officials stated that the contractor is investigating sources of potential savings to offset interceptor cost overruns. In addition, as of September 2003, the contractor’s cumulative schedule variance showed it to be $128 million behind schedule. Therefore, to finish within budget and on schedule, the contractor will have to improve its efficiency. According to our analysis, the GMD contractor has, effectively, been delivering $0.95 worth of scheduled work for every budgeted dollar that was spent to accomplish that scheduled work during fiscal year 2003. In order to complete all scheduled work at the budgeted cost, the GMD contractor will have to complete $1.01 worth of scheduled work for every dollar that will be spent to accomplish that scheduled work. On the basis of our assessment of fiscal year 2003 activities, we found that the GMD program faces key risks in fielding the planned initial capability by September 2004 and the Block 2004 defensive capability by December 2005. These risks include readiness of interceptor production for the September 2004 IDO, limited testing before the system becomes operational, and a technical risk associated with the kill vehicle. The principal components of the GMD interceptor—the booster and kill vehicle—are at risk of falling short of production goals. The GMD program office had intended to field both BV+ and OSC boosters as part of the September 2004 IDO. However, developmental setbacks and supplier issues associated with the Lockheed BV+ booster have forced MDA to rely solely on the OSC booster for IDO. OSC’s readiness to produce the additional boosters in the time remaining for IDO has not been established. Kill vehicle production is uncertain, as well. The contractor has yet to demonstrate that it can increase the production rate of kill vehicles by 50 percent—from 8 to 12 kill vehicles per year.
GMD program officials characterized the schedule to meet the September 2004 deadline for initial operations as extremely aggressive, with no margin for delay. Should interceptor production fall behind, the program will either have to field fewer interceptors than planned or delay planned fielding dates.

Limited Testing Before IDO

The GMD test program has been in a state of flux. The test program under the National Missile Defense program scheduled 16 integrated flight tests (intercept attempts) to be carried out between fiscal years 1999 and 2004. The current GMD test program, however, consists of 10 intercept attempts over the same time period. The change stems from the cancellation of IFT-11, IFT-12, and IFT-16; the conversion of IFT-13 to boost validation tests (IFT-13A and IFT-13B); and the delay of IFT-17 and IFT-18 into fiscal year 2005. MDA had scheduled two flight tests—IFT-14 and IFT-15—to be conducted before September 2004, but only IFT-14 is now planned before then. IFT-14 is particularly relevant because it is planned to utilize production-representative hardware and operational software for the first time in an intercept attempt. The following firsts are expected to occur in IFT-14, which is scheduled for 4Q FY 2004: The new OSC booster will be used—all previous tests employed surrogate boosters. A production-representative kill vehicle, which incorporates new hardware and discrimination software, will be tested. A new, operational build of the fire control (battle management) software will be used to control the GMD engagement. While MDA will gain some confidence from the successful execution of IFT-14, this test provides only a single opportunity to demonstrate the components to be fielded as part of IDO and to resolve any problems uncovered during flight testing.
The previous test program for the NMD system, the predecessor to GMD, also called for operational testing by the military services, a statutory requirement to characterize operational effectiveness and suitability of a deployed system for use by the warfighter. MDA does not plan to operationally test the GMD element before it is available for IDO or Block 2004. The fielding is not connected with a full-rate production decision that would clearly trigger statutory operational testing requirements. The Combined Test Force, a group of users and developers, plans tests to incorporate both developmental and operational test requirements in the test program. In addition, MDA is introducing some elements of operational testing into developmental tests, such as soldier participation in some of those tests. However, GMD’s current test program does not include flight tests conducted under the unrehearsed and unscripted conditions characteristic of operational testing. A technical problem in the kill vehicle observed in earlier flight tests could affect the operational effectiveness of the GMD element. Although the program office indicated that the issue has been resolved, theories of and solutions for the anomaly have not been verified in flight. The next attempt for verification will occur in integrated flight test 13C (IFT-13C), which is scheduled for 3Q FY 2004. KEI program activities in fiscal year 2003 primarily revolved around the selection of a prime contractor for KEI’s development and testing. The program also continued with experimental work geared toward collecting data on boosting missiles. Schedule: In December 2003, MDA awarded Northrop Grumman a $4.6 billion prime contract to develop and test the KEI element over the next 8 years. The award follows an 8-month concept design effort between competing contractor teams, each of which was awarded a $10 million contract to design concepts for KEI.
In addition to contractual and source-selection activities completed in 2003, the KEI program office continued with activities designed to reduce technical risks in developing the KEI interceptor. In particular, the program office continued with technical work pertaining to an experiment for collecting data on boosting missiles, known as the Near Field Infrared Experiment. This work is expected to culminate with a satellite launch during the fall of 2005. Performance: Because this element is still in its infancy, data are not yet available to make a performance assessment. The Department of Defense (DOD) budgeted about $7.9 billion for KEI development during fiscal years 2004 through 2009. About $91.5 million was invested in KEI’s immediate predecessor program in fiscal year 2003. Cost: According to the KEI program manager, the prime contract incorporates various innovative acquisition initiatives, which are expected to encourage the contractor to develop a quality product on time and within the initially proposed price. Because the prime contract was awarded in December 2003 (fiscal year 2004), no fiscal year 2003 data existed for an assessment of the contractor’s cost and schedule performance. Key risks: The program office acknowledges that it faces general challenges in developing the first capability that uses a missile to destroy another missile in the boost phase of flight. From discussions with program officials, we also found that KEI software costs could be underestimated, putting the program at risk for cost growth and schedule delays. The Kinetic Energy Interceptors (KEI) element is a missile defense system designed to destroy ballistic missiles during the boost phase of flight, the period after launch when a missile’s rocket motors are thrusting. KEI also provides the opportunity to engage enemy missiles in the early-ascent phase, the period after booster burnout before the missile can release warheads and countermeasures. 
Initially, the program is focused on developing a mobile, land-based system—to be available in the Block 2010 time frame—that counters long-range ballistic missile threats. Subsequent efforts will include sea- and space-based efforts that provide protection against all classes of ballistic missile threats. The land-based system will be a deployable unit consisting of a command and control/battle management unit, mobile launchers, and interceptors. Program officials noted that because the KEI element has no sensor component such as radars, it would rely on Ballistic Missile Defense System (BMDS) sensors (space-based infrared sensors and forward- deployed radars) for detection and tracking functions. Like other existing hit-to-kill interceptors, the KEI interceptor is comprised of a booster and kill vehicle. The kill vehicle is expected to employ an infrared seeker derived from the Aegis BMD program and divert thrusters, which provide terminal guidance and control, derived from the Ground-based Midcourse Defense (GMD) program. In the summer of 2002, the Defense Science Board recommended that the Missile Defense Agency (MDA) initiate a program to develop a boost/ascent-phase interceptor capable of countering intermediate- and long-range ballistic missile threats. Work in this area was initiated in fiscal year 2003 under the Kinetic Energy Boost program as part of MDA’s Boost Defense Segment. Beginning with fiscal year 2004, this program has been budgeted under a new MDA area known as BMDS Interceptors, which includes the KEI element. KEI’s development is proceeding in a series of planned two-year blocks known as Blocks 2010, 2012, and 2014. Concurrently, the KEI program is conducting risk mitigation projects to determine whether a space-based platform, from which interceptors could be launched, is feasible and affordable. Other blocks may follow, but on the basis of recent budget documentation, MDA has not yet defined their content. 
Block 2010: The KEI program entered the Development and Test Phase in December 2003, after MDA selected Northrop Grumman as the prime contractor. The contractor has begun development activities leading to a Block 2010 capability, the first increment of land-based interceptors capable of destroying ballistic missiles during the boost or early-ascent phases of flight. MDA envisions that these first-generation interceptors will be built and launched from trucks that can be driven up close to the border of the threatening nation. Block 2012: This block increment expands KEI’s Block 2010 capabilities to include the capability to launch interceptors from sea-based platforms such as Aegis cruisers or submarines. A study is under way to select the platforms. The Block 2012 sea-based capability will use the interceptor developed for Block 2010. Block 2014: During this block, the interceptor is expected to evolve into a new, multiuse interceptor capable of performing boost, early-ascent, and midcourse-phase intercepts from platforms on land or sea. The KEI program office’s activities in fiscal year 2003 primarily revolved around the selection of a prime contractor for KEI development and testing. Activities involving the Near Field Infrared Experiment (NFIRE), which focus on reducing technical risk through experiments that collect data on the plume of boosting missiles, were also carried out in fiscal year 2003. In March 2003, two KEI concept design contracts worth $10 million each were awarded to competing teams headed by Northrop Grumman and Lockheed Martin. These contracts preceded MDA’s selection of Northrop Grumman in December 2003 as the element’s prime contractor. The Northrop Grumman $4.6 billion cost plus award fee contract employs a unique acquisition strategy that places mission assurance—the successful operation of the element to perform its mission—as a program priority. 
To implement this strategy, MDA based its source selection decision on the extent to which the contractor’s past performance produced successful results on programs of similar complexity, as well as on the performance of the proposed design. MDA also built incentives into the contract that require the prime contractor to achieve mission assurance through a disciplined execution of quality processes. For example, the contractor earns an award fee only if flight tests are successful, and the percentage of the award fee earned is determined by whether the tests are conducted on schedule. NFIRE, scheduled for a fall 2005 launch, is being funded under the KEI program as a risk-reduction activity to collect phenomenology data on boosting missiles. The experiment consists of launching an experimental satellite that is designed to record infrared imagery of a ballistic missile’s plume and the body of the missile itself. Data from NFIRE will help MDA develop algorithms and assess its kill vehicle design for boost-phase missile defenses. In addition to NFIRE, the KEI program is working on a variety of risk reduction activities. For example, work is being done in support of space-based KEI development, including miniaturization, weight reduction, and producibility of satellite and interceptor subcomponents. At this early stage of element development, data are not available to make a performance assessment. MDA expects to invest about $7.9 billion from fiscal year 2004 through 2009 to develop the KEI element. This is in addition to the approximately $91.5 million invested in the program’s immediate predecessor, the Kinetic Energy Boost program. According to the KEI Program Manager, the program is incorporating various innovative acquisition initiatives into the KEI development and testing contract. He told us that these initiatives are expected to encourage the contractor to develop a quality product on time and within the initially proposed price.
Because the prime contract was awarded in December 2003 (fiscal year 2004), no fiscal year 2003 data existed for an assessment of the contractor’s cost and schedule performance. The KEI program’s planned costs for the next 6 fiscal years are expected to be around $7.9 billion. This covers land- and sea-based KEI development, ground-based risk mitigation projects to determine the feasibility of a space-based platform, and international cooperation projects. Of the $7.9 billion, approximately $4.8 billion is allocated to the land-based capability. Table 26 shows the expected costs of the program by fiscal year through 2009, the last year for which MDA published its funding plans. The immediate predecessor of the KEI element, Kinetic Energy Boost, was funded in fiscal year 2003 under the Boost Defense Segment and had a budget of $91.5 million. The prime contract awarded in December 2003 was based on a number of innovative acquisition strategies. First, the program gave competing contractors flexibility to design a system that met only one broad requirement—that the KEI element be capable of reliably intercepting missiles in their boost/ascent phase. MDA did not set cost or schedule requirements or specify how the contractors should design the system. Second, upon award of the development contract, the program locked the winning contractor into firm, fixed-price commitments for the production of a limited number of interceptor, launcher, and battle-management components. Third, the program office included an option in the contract for a commercial-type “bumper-to-bumper” warranty. Finally, the contract stipulates that the contractor earns an award fee only if flight tests are successful. Additionally, the fee is reduced if the tests are not conducted on schedule. The Program Manager told us that the program’s goal was to provide the contractor with incentives to develop a quality product on schedule and at the originally proposed price.
Additionally, consistent with the MDA acquisition approach, the KEI program plans to conduct annual continuation reviews to determine if the KEI program and its prime contract should continue. These reviews focus on contractor performance and external conditions, such as potential threats or MDA’s funding priorities. One initiative of the program’s acquisition strategy is the inclusion in Northrop Grumman’s development contract of a firm, fixed unit production price for all of the element’s components—launcher, interceptor, and battle management. This initiative is unique because the production price was agreed upon before the contractor developed the component’s design and because the price was a factor in MDA’s choice of Northrop Grumman as the KEI prime contractor. Program officials believe that the government benefited from this strategy, because competition encouraged Northrop Grumman and Lockheed Martin, which were competing for the contract, to offer MDA their best production price. According to program officials, Northrop Grumman could ask for a price increase, should it find, when production begins, that it cannot produce the components at the agreed-upon price. However, the price increase would come with a cost to the contractor. Northrop Grumman would have to provide data to support the new price, which would be time-consuming, and therefore, costly. Although this initiative appears to be beneficial to MDA, the agency could find when it reaches the production phase that it has not budgeted sufficient funds to support the production program. According to a study conducted by the Institute for Defense Analyses, requiring a binding price commitment during the development phase of an acquisition program provides the contractor with a significant incentive to underestimate production costs. 
The study goes on to explain that because of a similar initiative in the 1960s, a statistically significant number of contractors experienced production costs much greater than the firm fixed price agreed upon. Furthermore, the former head of the Defense Department’s independent cost estimating office stated that the only time it makes sense to request a fixed production unit price at this point in a weapon system’s development is when the weapon is a low-technology project whose requirements and funding are stable. These criteria do not apply to KEI. Rather, the KEI contractor is being asked to develop a technologically advanced system associated with the challenging mission of boost phase intercepts. The program office acknowledges that it faces challenges in developing the first operational boost phase intercept capability that employs hit-to-kill concepts. In addition, from discussions with program officials, we found that KEI’s software costs could be underestimated, putting the program at risk for cost growth and schedule delays. The scientific and missile defense communities recognize that the boost phase intercept mission is technically and operationally challenging, particularly because of the short timeline involved with engaging a boosting missile. For example, in its July 2003 report on boost phase intercept systems, the American Physical Society concluded that boost-phase defense of the entire United States against solid-propellant ICBMs is unlikely to be practical when all factors are considered, no matter where or how interceptors are based. According to the report, even with optimistic assumptions, a terrestrial-based system would require very large interceptors with extremely high speeds and accelerations to defeat a solid-propellant ICBM launched from even a small country such as North Korea. 
Furthermore, a scientific study on boost-phase defense commissioned by MDA focused on selected issues of high risk, including methods for early detection of missile launches, interceptor divert requirements, and discrimination of the missile’s body from its luminous exhaust plume. The study concluded that there are no fundamental reasons why an interceptor cannot hit a boosting target with sufficient accuracy to kill the warhead. However, the study identified several challenges, including understanding the plume phenomenology well enough to have confidence in the appropriate sensor combination chosen for the interceptor. Both studies highlighted, as a key area of concern, the short timeline within which a boost-phase system must detect and hit an enemy missile. The KEI Program Office is uncertain whether the negotiated cost of the prime contract includes sufficient funds to complete software development for the various KEI components, including the battle-management, interceptor, and launcher components. Northrop Grumman based its estimates of software development on comparisons with similar systems—such as GMD and Aegis BMD—and on a projection that existing software could be reused. MDA officials from the program office told us that they were somewhat concerned that Northrop Grumman underestimated the amount of software it could reuse from the GMD program for the KEI program. Software growth in weapon systems programs has traditionally been problematic. Historically, contractors have had to develop twice as many lines of software code for a weapon system as they initially estimated. This growth has occurred when contractors underestimate the effort, make invalid assumptions regarding the extent to which existing software code can be reused, and make unrealistic assumptions about how quickly software can be produced.
If software growth in the KEI program increases at the historical rate, the amount of software needed by the element will likely exceed the contractor’s initial estimate of 1 million lines of code, causing cost increases and schedule delays. According to program officials, MDA discussions with Northrop Grumman resulted in a reduction of its estimate of the amount of existing software code that could be reused in the KEI element. However, the officials told us that the program is still concerned that the contractor’s estimate is optimistic. Software estimates typically include an analysis of uncertainty, which indicates the reliability of the contractor’s estimates for the software development effort. KEI program officials noted that the contractor performed an uncertainty analysis for the interceptor component but not for the battle management component, which includes the bulk of the KEI software code. If the KEI contractor cannot develop the software within the negotiated cost of the KEI contract, MDA could find itself in the position of having to locate funds to cover cost overruns. MDA would benefit from quickly recognizing this funding shortfall because, with time, it might be able to locate funding without causing significant perturbations in the KEI or other elements’ programs. Also, if additional funding were needed, making the funds available to the contractor early in the development effort would allow the contractor to increase personnel so that the effort would not fall behind schedule. Completing uncertainty analyses for all components of the KEI element is the best means of determining if such a funding shortfall is likely. We recommend that MDA analyze the degree of risk associated with the KEI software components by performing an uncertainty analysis that quantifies the reliability of the proposed estimates.
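One common way to perform the kind of uncertainty analysis recommended above is a simple Monte Carlo simulation over a distribution of possible code-growth factors. The sketch below is purely illustrative: the 1-million-SLOC baseline comes from the contractor estimate cited above, but the triangular growth-factor distribution (no growth at best, tripling at worst, with the historical doubling as the most likely case) is a hypothetical assumption chosen for demonstration, not MDA or contractor data.

```python
# A minimal Monte Carlo sketch of an uncertainty analysis for a software
# size estimate. Distribution parameters are hypothetical assumptions.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

BASELINE_SLOC = 1_000_000  # contractor's initial estimate

def simulate_sloc(trials=10_000):
    """Draw code-growth factors and return the sorted simulated SLOC counts."""
    # Triangular distribution: best case no growth (1.0x), worst case 3.0x,
    # mode at the historical doubling (2.0x). Purely illustrative.
    return sorted(BASELINE_SLOC * random.triangular(1.0, 3.0, 2.0)
                  for _ in range(trials))

draws = simulate_sloc()
median = draws[len(draws) // 2]
p80 = draws[int(len(draws) * 0.8)]  # 80th-percentile outcome
print(f"median: {median / 1e6:.2f}M SLOC, 80th percentile: {p80 / 1e6:.2f}M SLOC")
```

The spread between the median and the upper-percentile outcomes is what an uncertainty analysis quantifies; a wide spread signals that the contractor’s point estimate carries substantial cost and schedule risk.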
The Space Tracking and Surveillance System (STSS) will eventually comprise a constellation of low-orbiting satellites used to detect and track enemy missiles throughout all phases of flight. The Missile Defense Agency (MDA) manages STSS, which replaces the Air Force’s Space-Based Infrared System-Low (SBIRS-Low) program. The STSS program office is preparing to launch in 2007 two demonstration satellites that were built under the SBIRS-Low program. After launch, MDA plans to assess how well these demonstration satellites perform missile defense surveillance functions. On the basis of this assessment, the agency will determine capabilities and goals for next-generation STSS satellites. The STSS program office completed most activities on time and slightly over budget during fiscal year 2003. However, cost and schedule performance could slip because of unforeseen problems arising during the process of preparing the satellites for launch. Schedule: Program activities completed in fiscal year 2003 were focused on the ground testing of existing hardware rather than on the design and development of future STSS satellites. Equipment built for the SBIRS-Low program was retrieved from storage and tested to determine whether individual components were still in good working order. Testing of the first demonstration satellite’s hardware—the spacecraft itself and infrared sensors—was completed on time, and testing of the second satellite is to be completed by August 2004, slightly behind schedule. Software development activities also continued. However, STSS program officials are closely monitoring the development of software for the satellites’ sensors because software requirements have not been finalized. Performance: STSS’s indicators show that the program is on track for meeting performance requirements. The Department of Defense (DOD) budgeted about $4.15 billion for STSS’s development during fiscal years 2004 through 2009.
Earlier, MDA expended about $540 million in fiscal years 2002 and 2003. In addition, from program initiation through 1999, the SBIRS-Low program invested $686 million to develop the demonstration satellites that are now part of the STSS program. Cost: Our analysis of prime contractor cost performance reports shows that the contractor completed work in fiscal year 2003 at slightly more cost than budgeted. Specifically, the contractor overran budgeted costs by less than $1 million and could not complete about $6.4 million worth of work. Because of changes made to the contract during this time, more data are needed to determine whether the entire contract will exceed its projected cost and schedule. The contractor reported that sensor-related issues are among the problems that contributed to the cost overrun and schedule delays. These problems, the contractor said, could jeopardize the overall delivery of the satellites. Risks: On the basis of our assessment of fiscal year 2003 activities, we did not identify any evidence that the STSS program will be unable to launch the two demonstration satellites in 2007. However, MDA identified a number of risk areas that have the potential to increase the program’s cost and delay the 2007 launch of these satellites. Unforeseen problems could arise during the testing, assembling, and integration of hardware components of the satellites, which had been in storage for 4 years. Officials cannot predict which components will be found in nonworking order or the costs associated with fixing them. Also, software development and software and hardware integration are areas that historically have been responsible for affecting a program’s schedule. The Space Tracking and Surveillance System (STSS) is being developed as an integrated element of the Ballistic Missile Defense System (BMDS). 
The Missile Defense Agency (MDA) envisions that the STSS element will comprise a constellation of low-orbiting satellites to detect and track enemy missiles throughout all phases of flight—from launch through midcourse and into reentry. Any real operational capability, however, would not be realized until the next decade. The STSS program is currently working on the first increment of the STSS element, known as Block 2006. Schedule and technical performance objectives for the Block 2006 element are detailed in the MDA Director’s Guidance, which directs the STSS program office to prepare and launch two demonstration satellites that were partially built under the Air Force’s Space-Based Infrared System-Low (SBIRS-Low) program. The two satellites each contain two infrared sensors, one that would acquire targets by watching for bright missile plumes during the boost phase (an acquisition sensor), and one that would track the missile through midcourse and reentry (a tracking sensor). MDA plans to launch these satellites in 2007, in tandem, in an effort to assess how well they perform the missile defense surveillance and detection functions. Using data collected by the satellites, MDA will determine what capabilities are needed, and what goals should be set, for the next generation of STSS satellites. Over the past two decades, the Department of Defense (DOD) initiated a number of programs and spent several billion dollars trying to develop a system for tracking missiles from space. Owing partially to the technical challenges associated with building such a system, DOD did not successfully launch any satellites or demonstrate any space-based midcourse tracking capabilities. Program managers did not fully understand the challenges in developing these systems and, accordingly, schedules were overly optimistic and program funding was set too low.
For example, sensors aboard the satellites must be able to track deployed warheads in the midcourse phase of flight in contrast to the bright plume of boosting missiles. To perform this mission, onboard sensors must be cooled to low temperatures for long periods of time and be able to withstand the harsh environmental conditions of space. The last program under development for detecting and tracking missiles from low-earth orbits in space was SBIRS-Low, which DOD established in 1996 to support national and theater missile defense. Its mission was to track missile complexes over their entire flights and to discriminate warheads from decoys. The SBIRS-Low program experienced cost, schedule, and performance shortfalls. As a result, DOD cancelled the accompanying technology program in 1999—the two-satellite Flight Demonstration System—and put the partially constructed equipment into storage. In October 2000, Congress directed DOD to transfer the SBIRS-Low program to the Ballistic Missile Defense Organization (now MDA). When MDA inherited SBIRS-Low, the agency decided to make use of the equipment that was partially built under the SBIRS-Low technology program by completing the assembly of the equipment and launching the two satellites in 2007 to coincide with broader missile defense tests. At the end of 2002, the SBIRS-Low program became STSS. STSS’s development is proceeding in a series of planned 2-year blocks. Near-term blocks are known as Blocks 2006, 2008, and 2010. Other blocks may follow, but on the basis of recent budget documentation, MDA has not yet defined their content. Block 2006. Block 2006 involves the assembly, integration, testing, and launch of two demonstration satellites in 2007, as described above. Block 2008. Block 2008 is primarily an upgrade of Block 2006 ground stations, which are used to collect and analyze data from Block 2006 satellites. The software upgrades will benefit both the demonstration satellites as well as future satellites. 
Block 2010. The Block 2010 program is essentially a new phase of STSS development. Building upon lessons learned from the previous development efforts and blocks, Block 2010 involves the design and development of new-generation satellites, which are expected to include more robust technologies. MDA plans to launch the first of these in 2011. The STSS program office has completed most activities planned for fiscal year 2003. According to the program office, the contractor has been performing to an accelerated delivery schedule, and activities associated with testing and completing the two satellites have proceeded with fewer problems than anticipated. About 30 percent of Block 2006 activities have been completed, but the fiscal year 2003 activities were generally simple. For example, they involved taking the equipment out of storage and performing individual component testing to determine whether any degradation in the equipment had occurred over time. The program still has many more tasks to complete before the satellites will be ready for launch, such as completing software development and integration activities. Block 2006 activities achieved during fiscal year 2003 can be divided into three categories. Specifically, the STSS program office worked to test hardware components of existing satellites; develop satellite software, as needed, that was not developed under the previous program; and prepare for a design review to be held in early fiscal year 2004 to ensure the design’s adequacy to support its BMDS mission. At the beginning of the STSS program in 2002, MDA retrieved from storage the satellite components that were partially constructed under the SBIRS-Low program. STSS contractors retrieved these legacy components and are in the process of testing the satellite spacecraft (the space platform) and its payload (infrared sensors and supporting subsystems) to ensure that this hardware is still in working order.
Testing of the first satellite’s components is complete: sensor hardware testing began in November 2002 and was completed in October 2003; the spacecraft’s hardware testing began in May 2003 and was completed in September 2003. Part of the testing of the component hardware of the second satellite is proceeding as planned. Although there was a delay in the start of the spacecraft testing, the second satellite’s component testing remained on schedule. For example, STSS contractors have visually inspected the satellite’s spacecraft hardware. Spacecraft hardware testing was originally scheduled to begin in September 2003 and be completed in November 2003. However, it did not begin until November 2003 and is now scheduled to be completed in May 2004. Payload hardware testing began in December 2003 but will not be finished until August 2004. Table 27 summarizes the activities and completion dates associated with hardware testing. Table 28 summarizes the principal software development activities completed in fiscal year 2003 pertaining to software development for the spacecraft and for the ground segments. Most activities completed to date have finished at or slightly behind schedule. However, the STSS program office is closely tracking the development of payload software, because there is significant cost, schedule, and performance risk associated with the effort. In particular, the program office has not fully established software requirements. Studies have shown that when operational needs are not well defined, the associated software effort tends to grow, resulting in large cost overruns, schedule slips, and reduced functionality. These risks are compounded by the fact that software from the SBIRS-Low program was not completed or sufficiently documented. STSS program officials are concerned that the extent of software reuse might have been optimistic and, consequently, software development costs could be more than double the originally proposed cost. 
The STSS program office conducted a single design review in fiscal year 2003—the System Preliminary Design Review. According to the program office, although it was delayed by 1 month, the outcome was successful. During the latter part of fiscal year 2003, the program office began preparing for the System Critical Design Review, which was successfully completed early in fiscal year 2004. The Block 2006 STSS satellites are built from legacy hardware and will be used as technology demonstrators (rather than for operational missions). The program considers the demonstration of STSS functionality to be more critical than the demonstration of STSS effectiveness in performing those functions. The rationale is to keep costs within budget, especially for satellites that have an in-orbit life of 18 to 24 months. Nonetheless, data provided to us by MDA indicate that all STSS performance indicators, with the exception of the one pertaining to the visible sensor, are on track for meeting their respective requirements. MDA expects to invest about $4.15 billion from fiscal year 2004 through 2009 in the element’s development. This is in addition to the approximately $1.2 billion invested in the SBIRS-Low program from the program’s initiation in 1996 through fiscal year 1999 and in the STSS element from 2002 through 2003. In fiscal year 2003, the contractor reported that its work cost slightly more than budgeted and that it was somewhat behind schedule. We were unable to make an independent assessment of the contractor’s cost and schedule performance because of contract changes. The contractor was working toward a single-launch (tandem launch) strategy while measuring performance against a two-launch strategy. Also, the contractor was reporting against an accelerated schedule that was not required by the contract. STSS’s costs for the next 6 fiscal years are expected to be approximately $4.15 billion.
These funds will finance activities for Block 2006, Block 2008, and the development of new-generation satellites planned for Block 2010. Table 29 shows the expected costs of the program by fiscal year through 2009, the most recent year for which MDA published its funding plans. Prior to fiscal year 2004, MDA spent approximately $250 million and $294 million in fiscal years 2002 and 2003, respectively, for this program. Furthermore, the SBIRS-Low program invested $686 million to develop the demonstration satellites that are now part of the STSS program. In fiscal year 2003, the contractor reported that its work cost slightly more than budgeted and that it was somewhat behind schedule. Although the contractor’s cost performance was positive through the first half of fiscal year 2003, it began to decline in March 2003 and continues to do so. Schedule performance began to decline in December 2002 and continued throughout fiscal year 2003. The government routinely uses contractor Cost Performance Reports to independently evaluate prime contractor performance relative to cost and schedule. Generally, the reports detail deviations in cost and schedule relative to expectations established under the contract. Contractors refer to deviations as “variances.” Positive variances—activities costing less or completed ahead of schedule—are generally considered as good news and negative variances—activities costing more or falling behind schedule—as bad news. Figures 9 and 10 show the STSS contractor’s cost and schedule performance during fiscal year 2003. According to Cost Performance Reports, work completed during fiscal year 2003 cost about $1 million more than estimated—as indicated by the September 2003 data point—and the contractor could not complete about $6.1 million worth of the work scheduled for the same time period. 
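The variance arithmetic behind these Cost Performance Reports follows standard earned value management conventions. As a minimal sketch: the variable names (BCWS, BCWP, ACWP) are common earned value terms rather than MDA's own, and the baseline figure below is hypothetical; only the two deltas come from the fiscal year 2003 STSS figures quoted above.

```python
# Standard earned value management (EVM) variance calculations. BCWS is the
# budgeted cost of work scheduled, BCWP the budgeted cost of work performed
# (earned value), and ACWP the actual cost of work performed. The BCWS
# baseline below is hypothetical; the two deltas mirror the fiscal year 2003
# STSS figures in the text (about a $1 million cost overrun and about
# $6.1 million of scheduled work not completed).

def cost_variance(bcwp: float, acwp: float) -> float:
    """Positive = work cost less than budgeted (good news); negative = overrun."""
    return bcwp - acwp

def schedule_variance(bcwp: float, bcws: float) -> float:
    """Positive = ahead of schedule; negative = behind schedule."""
    return bcwp - bcws

bcws = 100.0          # hypothetical scheduled work, $ millions
bcwp = bcws - 6.1     # earned value: $6.1M of scheduled work not completed
acwp = bcwp + 1.0     # actual cost: work performed cost $1M more than budgeted

print(round(cost_variance(bcwp, acwp), 1))      # -1.0 (cost overrun)
print(round(schedule_variance(bcwp, bcws), 1))  # -6.1 (behind schedule)
```

Figures 9 and 10, referenced below, plot these cumulative variances month by month.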
Because of contract changes, we could not fully rely upon the data reported in the contractor’s Cost Performance Reports to make our own analysis of the STSS contractor’s cost and schedule performance. In April 2003, the STSS program office altered its launch strategy in response to funding cuts. Rather than carrying out two separate launches, the program decided to launch the two satellites in tandem, which means one launch vehicle will place both satellites into orbit. The STSS program office notified the contractor in April 2003 of the change, but the contractor did not formally adjust its performance measurement baseline to reflect the tandem launch until September 2003. According to the program office, the tandem launch resulted in minimal changes to the contract’s overall cost and schedule. However, officials told us that it did result in changes in the content, budget, and schedule of individual work tasks. Therefore, throughout most of fiscal year 2003, the contractor was completing work tasks for the tandem launch. However, the contractor’s cost and schedule performance was being measured against work tasks reflected in the two-launch strategy. Because the baseline that the contractor used to measure its performance during most of fiscal year 2003 did not always reflect the actual work being done, Cost Performance Reports for April through September may not provide a clear picture of the contractor’s cost and schedule performance. In September 2003, the contractor adjusted the contract’s work tasks, along with their budgets and schedules, to reflect the change to a tandem launch. Another factor complicating our analysis is that the contractor established a performance measurement baseline on the basis of an accelerated schedule for completing the work. The contractor did this in response to a unique cost-control incentive in the STSS Award Fee plan.
The plan allows the contractor to earn up to 50 percent of a potential cost under-run if it can deliver the two satellites (1) up to 6 months early, (2) for less than the negotiated cost, and (3) meeting all orbit performance requirements. As a direct result of this incentive, the contractor elected to implement a performance measurement baseline that reflected a 6-month accelerated schedule. This means that the contractor might be performing work on a schedule that would allow it to complete all work by the end of the contract, but Cost Performance Reports could show that work was falling behind schedule. All cost and schedule performance data for fiscal year 2003, as reported by the contractor, are illustrated in figures 9 and 10. We adjusted schedule data to reflect the accelerated schedule, but we could not adjust cost or schedule data to account for the change to a tandem launch. Because we could not make these adjustments, we also included Cost Performance Report data for October 2003 in the figures. The October report is the first report the contractor issued after adjusting its performance measurement baseline to account for the tandem launch. In our opinion, the October report is a better indicator of the contractor’s performance. However, we note that further data are needed before an estimate can be made of whether the cost and schedule of the entire contract are likely to be more than projected. In October 2003, the STSS contractor reported a cumulative cost overrun of approximately $3 million. It attributed this overrun to sensor issues, sensor payload test plan inefficiencies, more costly custom interface assembly, and tasks being more complex than planned. Also in October, the contractor reported it was approximately $11 million behind schedule. In our opinion, this might have an unfavorable impact on the program, because additional funding may be needed to make up the lost schedule. 
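The Award Fee incentive described above amounts to simple conditional arithmetic. The sketch below is an illustration only: the dollar amounts, the function name, and the exact eligibility test (in particular, treating "up to 6 months early" as requiring delivery between 1 and 6 months early) are assumptions, since the plan's detailed terms are not reproduced in this report.

```python
# Hedged sketch of the STSS Award Fee cost-control incentive described in
# the text: the contractor earns up to 50 percent of a cost underrun only if
# it delivers early (modeled here as 1 to 6 months early), below the
# negotiated cost, and meeting all orbit performance requirements. The
# eligibility test and all dollar figures are illustrative assumptions.

def incentive_share(negotiated_cost: float, actual_cost: float,
                    months_early: int, meets_requirements: bool,
                    share: float = 0.50) -> float:
    """Contractor's share of the underrun, or 0.0 if any condition fails."""
    underrun = negotiated_cost - actual_cost
    if underrun > 0 and 0 < months_early <= 6 and meets_requirements:
        return share * underrun
    return 0.0

# Hypothetical numbers ($ millions): a $20 million underrun, delivered
# 4 months early with all requirements met, versus a requirements failure.
print(incentive_share(500.0, 480.0, 4, True))   # 10.0
print(incentive_share(500.0, 480.0, 4, False))  # 0.0
```

This structure explains the contractor's behavior noted above: only an accelerated schedule keeps the early-delivery condition, and hence the fee, within reach.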
The contractor reported that schedule delays were attributed to sensor-testing problems with flight hardware, payload test procedures taking longer than expected, rigorous Failure Review Board reviews, and problems with the sensor- and payload-tracking algorithms. The contractor reported that these problems could jeopardize the overall delivery of the satellites. On the basis of our assessment of fiscal year 2003 activities, we did not identify any evidence that the STSS program would be unable to launch the two demonstration satellites in 2007. However, MDA identified a number of areas that have the potential to increase the program’s cost and delay the 2007 launch of these satellites. We recognize that unforeseen problems could be discovered through testing, assembling, and integrating the hardware and software components of the satellites. MDA cannot predict which components will be found in nonworking order or the costs associated with fixing them. A related issue is the availability of original suppliers. Because the equipment was in storage for several years, the original equipment manufacturers may not offer maintenance for some of the parts considered obsolete. If replacement parts are needed as a result of failures or redesigns, this could create schedule delays for the program. Finally, the STSS program has also identified a number of activities that have the potential to affect the program’s schedule, including completing software development and related integration activities. THAAD’s prime contractor performed less efficiently in fiscal year 2003 than in previous years. However, the contractor is, overall, under budget and ahead of schedule. Our analysis indicates that missile development was the principal cause of the decline in the contractor’s cost and schedule performance during fiscal year 2003.
Schedule: Because THAAD previously was under Army management, the current program office re-planned THAAD’s primary research and development contract to accommodate the Missile Defense Agency’s (MDA’s) acquisition approach. The office also completed Block 2004 design reviews largely on schedule. In addition, the program conducted ground tests in preparation for initial flight testing, which is scheduled to begin at the end of 2004. However, explosions that occurred in 2003 at a propellant mixing facility could jeopardize deliveries of THAAD boosters and already have delayed the first flight test—a non-intercept test scheduled for the first quarter of fiscal year 2005—by up to 3 months. Nevertheless, the program office expects to maintain the schedule for the first intercept attempt, currently scheduled for the fourth quarter of fiscal year 2005. The Department of Defense (DOD) budgeted about $4.3 billion for THAAD’s development during fiscal years 2004 through 2009. Earlier, DOD expended about $6.5 billion between the program’s inception in 1992 and 2003 for related developmental efforts. Performance: The program office told us that key indicators show that THAAD is on track to meet operational performance goals. However, data from flight testing are necessary to anchor end-to-end simulations of THAAD operations to confidently predict the element’s effectiveness. Cost: Our analysis of prime contractor cost performance reports shows that the contractor’s positive cost and schedule variances eroded somewhat during fiscal year 2003, driven by the missile component but partially offset by other THAAD components. With 49 percent of the THAAD contract completed, the prime contractor is, overall, under budget and ahead of schedule.
Risks: On the basis of our assessment of fiscal year 2003 activities, we did not find evidence of key risks that could affect MDA’s ability to develop, demonstrate, and field the THAAD element during the 2008-2009 time frame within schedule and cost estimates. However, it is too early to state with confidence whether the element will or will not be ready for integration into the Ballistic Missile Defense System during this time. The Theater High Altitude Area Defense (THAAD) element is a ground-based missile defense system being developed to protect forward-deployed military forces, population centers, and civilian assets from short- and medium-range ballistic missile attacks. As an element of the Missile Defense Agency’s (MDA’s) Terminal Defense Segment, THAAD would provide the opportunity to engage ballistic missiles—outside or inside the earth’s atmosphere—that were not destroyed earlier in the boost or midcourse phases of flight by other planned Ballistic Missile Defense System (BMDS) elements, such as Aegis BMD. A THAAD unit consists of a command and control/battle management component for controlling and executing a mission, truck-mounted launchers, interceptors, an X-band radar, and ground support equipment. The THAAD interceptor is comprised of a single-stage booster and kill vehicle, which destroys enemy warheads through hit-to-kill collisions. The THAAD radar is a solid-state, phased-array, X-band radar that performs search, track, discrimination, and other fire-control functions. The THAAD radar also sends updated target information to the kill vehicle while in flight. The THAAD demonstration program began in 1992 but was plagued by a string of flight-test failures from 1995 to 1999. As noted in an earlier report, THAAD’s early failures were caused by a combination of a compressed test schedule and quality control problems.
Also, as reported in the Director, Operational Test and Evaluation (DOT&E) Fiscal Year 1999 Annual Report to Congress, the sense of urgency to deploy a prototype system resulted in an overly optimistic development schedule. Rather than being event driven—proceeding with development only after technical milestones were met—the program tried to keep pace with the planned schedule. Schedule pressures and budget cuts contributed to deficient manufacturing processes, quality control, product assurance, and ground-testing procedures, which, in turn, resulted in poor design, lack of quality, and failed flight tests. The ultimate result was a schedule slip of 6 years for the deployment of the objective THAAD system. After devoting substantial time to pretest activities, the THAAD program conducted two successful flight tests in 1999. The program then transitioned to the product development phase of acquisition, in which developmental activities shifted from technology development and demonstration to missile redesign and engineering. The Department of Defense (DOD) transferred the THAAD program from the Army to the Ballistic Missile Defense Organization (now MDA) on October 1, 2001. The overarching goal of the THAAD program is to field an operational capability consisting of tens of missiles during the Block 2008 time frame. Although THAAD’s development is broken out by block—2004, 2006, and 2008—each is a stepping-stone leading to the Block 2008 capability. The development efforts of each block incrementally increase element capability by maturing the hardware’s design and upgrading software. Block 2004. Block 2004 activities are expected to focus on developing and ground testing THAAD components. These tests lead to the demonstration of a rudimentary capability—an intercept capability against a short-range, threat-representative target (Flight Test 5)—at the end of Block 2004. At the end of the block, the THAAD “missile inventory” will consist of one spare missile.
Block 2006. By the end of Block 2006, the THAAD program will have conducted six more flight tests, five of which are intercept attempts. The flight-test scenarios are expected to include intercepts inside and outside the Earth’s atmosphere. One of the five intercept attempts will be conducted employing a salvo-firing doctrine, that is, two THAAD interceptors will be launched against a single target. Block 2008. By the end of Block 2008, the THAAD program plans to demonstrate that the THAAD element is ready for fielding with tactical missiles, demonstrate that the element can intercept threat-representative targets (short-range and medium-range ballistic missiles), and show that THAAD can interoperate with other elements as part of the BMDS. The THAAD program completed most activities planned for fiscal year 2003, which were focused on contractual activities, design reviews, and subcomponent-level development and testing, leading up to flight testing beginning in fiscal year 2005. During 2003, the THAAD Project Office aligned its primary research and development contract with MDA’s block acquisition approach. For example, officials re-planned the contract to accommodate MDA’s block strategy for developing missile defense capabilities. Because of changes in the fiscal year 2003 budget, including a funding cut of $117 million, THAAD completed its contract alignment activities slightly behind schedule. However, these activities were completed by the first quarter of fiscal year 2004. Table 30 summarizes the principal contractual activities planned for fiscal year 2003 and their actual completion date. Since 1999, the program has conducted a number of reviews to evaluate the designs of THAAD’s various components and of the element as a whole.
Early reviews, known as Preliminary Design Reviews (PDRs), were conducted to evaluate the progress, technical adequacy, and risk resolution of the selected design approach; to determine its compatibility with the performance and engineering requirements of the development specification; and to establish the existence and compatibility of interfaces among other items of equipment, facilities, computer programs, and personnel. Later reviews—Critical Design Reviews (CDRs)—determined that the designs satisfied the performance and engineering requirements of the development specification; established the design compatibility between the component and other items of equipment, facilities, computer programs, and personnel; assessed the component’s producibility and areas of risk; and reviewed preliminary product specifications. The program successfully completed two design reviews scheduled in fiscal year 2003; the THAAD missile was the subject of both of these reviews. Tables 31 and 32 summarize all principal activities related to the verification of THAAD’s Block 2004 design. The THAAD program completed a number of ground tests in the fiscal year 2003 time frame. These events are listed in table 33. The program office characterized these tests as key events in preparation for Block 2004 flight testing. Planned flight tests are divided among Blocks 2004, 2006, and 2008. The first two of the five planned Block 2004 flight tests are referred to as control test flights (CTF)—non-intercept tests that focus on how the missile operates under stressful environmental conditions. The third flight test is a seeker characterization flight (SCF), which ensures proper functioning of the seeker in a live intercept environment. This is a non-intercept test as well, but targets are involved.
The fourth test, flight test 4 (FT-4), is the first intercept attempt at White Sands Missile Range (WSMR) with a configuration—target and engagement geometry—comparable to the flight tests during the Program Definition and Risk Reduction phase of development. Block 2004 flight test activities end with a second intercept attempt (FT-5), conducted at Pacific Missile Range Facility (PMRF), against a threat-representative target. The program office plans to consume all procured missiles in flight tests. However, because there will be five flight tests in Block 2004 and THAAD has plans to procure six test missiles, one missile will be available as a spare. THAAD program office officials also noted that test missiles could be used for emergency operational use, rather than as test assets, if needed. Table 34 summarizes Block 2004 flight test events, including dates and objectives. Flight-test conditions are grouped by block. For example, Block 2004 tests focus on engagements outside the atmosphere (exoatmospheric), whereas the first intercept attempt inside the atmosphere (endoatmospheric) occurs in Block 2006. The level and sophistication of testing achieved to a given point define the capability of the THAAD element at that time. Finally, deliveries of THAAD boosters could be jeopardized by explosions at Pratt & Whitney’s propellant mixing facility that occurred during the summer of 2003. According to updated test schedules, these incidents have already delayed the first non-intercept flight test, Control Test Flight 1, by 3 months. However, the program office expects to maintain the schedule for the first intercept attempt, FT-4, currently scheduled for the fourth quarter of fiscal year 2005. To mitigate schedule risk, the program office enlisted Aerojet as the replacement vendor for Pratt & Whitney’s propellant mix and cast operations. We note that this Pratt & Whitney facility also provides rocket motors for the Aegis BMD and GMD programs.
Data collected during element-level flight testing will be used to “anchor” end-to-end simulations of THAAD operation. Until these simulations are properly validated and verified, one cannot be confident of any quantitative assessment of the element’s effectiveness for terminal defense. Nonetheless, the program office told us that all performance indicators predict that THAAD is on track to meet operational performance goals. MDA expects to invest about $4.3 billion from fiscal year 2004 through 2009 in the development and enhancement of the THAAD element. This is in addition to the $1.47 billion expended in fiscal years 2002 and 2003. Most of the THAAD budget goes to fund the element’s prime contract. The contractor reported that its fiscal year 2003 work cost slightly more than budgeted and that it was somewhat behind schedule. Specifically, the work cost about $12 million more than expected, and the contractor could not complete approximately $12.2 million of the work scheduled for the fiscal year. The program estimates that it will need about $4.3 billion over the next 6 years to continue THAAD’s development. This includes funds for Blocks 2004, 2006, and Block 2008. Program costs prior to THAAD’s transfer to MDA at the beginning of fiscal year 2002 amounted to approximately $4.9 billion. In fiscal years 2002 and 2003, the program expended an additional $1.6 billion, bringing the total investment in THAAD between the program’s inception and 2003 to about $6.5 billion. Table 35 shows the expected THAAD program costs by fiscal year from 2004 through 2009, the last year for which MDA published its funding plans. The THAAD prime contract consumes the bulk of the program budget: an average of 70 percent is allocated to the prime contractor team and 30 percent is allocated to the government for Block 2004 efforts. The contract has undergone re-planning to re-phase the work according to blocks. 
As indicated in table 30, the re-planning was completed in November 2003, and contract negotiations were finalized in December 2003. THAAD’s prime contract is held by Lockheed Martin Space Systems in Sunnyvale, California; Lockheed also manages the missile’s development. The THAAD prime contract continued to carry a positive cost and schedule variance during fiscal year 2003. However, as figure 11 shows, the contractor’s positive cost and schedule variance eroded somewhat during fiscal year 2003: the contractor’s work cost about $12.0 million more than budgeted, and the contractor could not complete approximately $12.2 million worth of work scheduled during this time. The unfavorable cost variance was driven by the missile component but partially offset by other components. However, with 49 percent of the THAAD contract completed, the prime contractor is, overall, under budget and ahead of schedule. The contractor experienced difficulties with missile development, which accounts for 35 percent of the contract’s total cost. In fiscal year 2003, work on missile development cost approximately $11 million more than budgeted. According to MDA’s analysis, propulsion subsystem development, particularly problems with the development of the Divert and Attitude Control System, has been the driver for missile development cost overruns. The prime contractor estimates that the entire contract will be completed slightly under budget.
However, in order to finish the work effort within budget, the contractor needs to work as efficiently as it did in the previous fiscal years. In our opinion, the contractor’s estimate is somewhat optimistic, considering the contractor’s trend of declining performance and because approximately 5 years of work remain on this contract. According to our analysis of the contractor’s data, the contractor has been completing, on average, $0.97 worth of scheduled work for every dollar actually spent during fiscal year 2003. On the basis of this efficiency rate, we estimate that the contract will overrun its budget at completion by between $23 million and $65 million. On the basis of our assessment of fiscal year 2003 activities, we did not find any evidence of key risks that could affect MDA’s ability to develop, demonstrate, and field the THAAD element within schedule estimates. However, it is too early to state with confidence whether the element will or will not be ready for integration into the BMDS during the Block 2008 time frame, especially since flight testing has not yet begun. Unsuccessful intercept attempts could delay the program and increase its cost, as was the case during THAAD’s Program Definition and Risk Reduction phase of the 1990s. The National Defense Authorization Act for Fiscal Year 2003 directed the Department of Defense (DOD) to establish cost, schedule, testing, and performance goals for its ballistic missile defense programs for the years covered by the Future Years Defense Plan. In the act, Congress also directed us to assess the extent to which the Missile Defense Agency (MDA) achieved these goals in each of fiscal years 2002 and 2003. We were unable to fulfill this mandate in fiscal year 2002 because MDA had not established such goals. As an alternative, we began to assess the tools that MDA uses as part of the agency’s management process to monitor cost, schedule, and performance progress.
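The overrun projection for the THAAD contract follows the standard earned value practice of extrapolating with the cumulative cost performance index (CPI). The sketch below illustrates the method only: the budget and to-date figures are hypothetical (GAO's $23 million to $65 million range depends on contract data not reproduced here), and only the 0.97 efficiency rate comes from the analysis above.

```python
# Estimate at completion (EAC) using the cumulative cost performance index
# (CPI), the standard EVM extrapolation behind an overrun projection like
# the one in the text. CPI is the budgeted value of work performed per
# actual dollar spent; 0.97 is the rate quoted in the text. All other
# numbers are hypothetical.

def estimate_at_completion(bac: float, bcwp: float, acwp: float) -> float:
    """Actual cost to date plus remaining budgeted work scaled by CPI."""
    cpi = bcwp / acwp
    remaining_work = bac - bcwp
    return acwp + remaining_work / cpi

bac = 1000.0        # hypothetical budget at completion, $ millions
bcwp = 400.0        # hypothetical earned value to date
acwp = bcwp / 0.97  # actual cost consistent with a 0.97 CPI

projected = estimate_at_completion(bac, bcwp, acwp)
print(round(projected - bac, 1))  # projected overrun, $ millions
```

A range such as the $23 million to $65 million cited above typically reflects applying several different efficiency assumptions to the work remaining, rather than the single-CPI case shown here.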
In February 2003, we briefed the staff of the congressional addressees of this report on our initial findings. However, we were unable to complete this assessment because some of the tools were evolving and others had been only partially implemented. MDA identified four tools it uses to monitor progress: the Integrated Master Plan (IMP), the Integrated Master Schedule (IMS), the Earned Value Management System (EVMS), and Technical Performance Measures (TPM). The IMP identifies essential actions that must be completed to successfully deliver a block of BMDS capability. Between our review in September 2002 and June 2003, the document remained in draft form and evolved from a generic checklist of activities into a template focused on the specific activities needed to deliver a particular block. In June 2003, MDA amended the draft BMDS IMP to reflect the President’s direction of December 2002 to begin fielding the Block 2004 system. Similarly, the IMS was evolving. The purpose of the IMS is to plot the expected date of activities that must be completed to achieve a block of capability. MDA altered the IMS because the capability being developed in Block 2004 changed from the delivery of a test bed to the delivery of a fielded capability. The EVMS, which tracks whether the contractor is performing work within budgeted cost and schedule, was only partially implemented at the time of our fiscal year 2002 review. Many of the element prime contracts were being modified to reflect MDA’s new block strategy, and the contractors could not report progress toward Block 2004 until program performance baselines were developed against which cost and schedule performance could be measured. Finally, MDA had only partially implemented the tracking of TPMs—parameters of system, element, and component effectiveness—as part of its program management process. 
Specific elements such as GMD had tracked TPMs, but as noted by program officials, MDA had just begun to develop system-level TPMs. In addition to the individual named above, Lily Chin, Tana Davis, Diana Dinkelacker, David Hand, David Hubbell, Sigrid McGinty, Madhav Panwar, Karen Richey, Adam Vodraska, Carrie Wilson, and Randy Zounes (Analyst-in-Charge) made key contributions to this report.
The Department of Defense (DOD) has treated ballistic missile defense as a priority since the mid-1980s and has invested tens of billions of dollars to research and develop such capabilities. In 2002, two key events transformed DOD's approach in this area: (1) the Secretary of Defense consolidated existing missile defense elements into a single acquisition program and placed them under the management of the Missile Defense Agency (MDA) and (2) the President directed MDA to begin fielding an initial configuration, or block, of missile defense capabilities in 2004. MDA estimates it will need $53 billion between fiscal years 2004 and 2009 to continue the development, fielding, and evolution of ballistic missile defenses. To fulfill a congressional mandate, GAO assessed the extent to which MDA achieved program goals in fiscal year 2003. While conducting this review, GAO also observed shortcomings in how MDA defines its goals. MDA accomplished many activities in fiscal year 2003--such as software development, ground and flight testing, and the construction of facilities at Fort Greely, Alaska--leading up to the fielding of the initial block of the Ballistic Missile Defense System. During this time, however, MDA experienced schedule delays and testing setbacks, resulting in the fielding of fewer components than planned in the 2004-2005 time frame. For example, delays in interceptor development and delivery have caused flight tests (intercept attempts) of the Ground-based Midcourse Defense (GMD) element to slip by more than 10 months. In flight tests conducted during fiscal year 2003, MDA achieved a 50 percent success rate in intercepting target missiles. While MDA is increasing the operational realism of its developmental flight tests--e.g., employing an operational crew during its late 2003 ship-based intercept attempt--the GMD element has not been tested under unscripted, operationally realistic conditions. 
Therefore, MDA faces the challenge of demonstrating whether the capabilities being fielded, consisting primarily of the GMD element, will perform as intended when the system becomes operational in 2004. Finally, MDA's cost performance during fiscal year 2003 was mixed. The prime contractors of four system elements completed work at or near budgeted costs during this time, but prime contractors for two system elements overran budgeted costs by a total of about $380 million. GAO found that program goals do not serve as a reliable and complete baseline for accountability purposes and investment decision making because they can vary year to year, do not include all costs, and are based on assumptions about performance that are not explicitly stated. For example, between its budget requests for fiscal years 2004 and 2005, MDA revised its estimated cost for the first fielded block of missile defense capability. This first block is costing $1.12 billion more and consists of fewer fielded components than planned a year earlier. In addition, MDA's acquisition reports for Congress do not include life-cycle costs, which normally provide explicit estimates for inventory procurement, military construction, operations, and maintenance. Finally, MDA does not explain some critical assumptions--such as the type and number of decoys an enemy might use--underlying its performance goals. As a result, decision makers in DOD and Congress do not have a full understanding of the overall cost of developing and fielding the Ballistic Missile Defense System and what the system's true capabilities will be.
The U.S. government’s economic assistance in Egypt focuses primarily on partnering with the Egyptian government to promote economic growth and development. This support has three core components: Traditional project assistance, managed by USAID, focuses on, among other things, private sector development, health and education, and the environment. The Development Support Program, or “cash transfer program,” provides assistance funding conditioned on the Egyptian government’s achievement of specific reform goals. The CIP supplies financing to Egyptian private sector importers of U.S. goods and funding to the Egyptian government that is not specifically conditioned on any reforms. Between 1975 and 1986, the CIP funded only public sector imports. In 1986, USAID established a private sector CIP, providing foreign exchange to finance imports of capital and noncapital goods from the United States. Since 1986, the CIP has facilitated more than $3.1 billion in loans to the private sector for the purchase of U.S. exports. In 1991, USAID ended the public sector CIP. In 1998, the U.S. and Egyptian governments agreed to reduce U.S. economic support from $815 million to $407 million per year in fiscal year 2009. Annual CIP appropriations are projected to remain constant until fiscal year 2007 and decline to $150 million by fiscal year 2009 (see fig. 1). CIP transactions have two main components (see fig. 2 for a depiction of the CIP transaction flow). First, USAID issues letters of commitment to participating U.S. banks (nine as of 2004). These letters authorize the banks to pay U.S. exporters that sell goods through the CIP. After the goods are shipped and the exporter provides the required documentation, the U.S. bank pays the exporter and requests reimbursement from USAID. Second, the Egyptian importer seeks a loan, denominated in Egyptian pounds, from 1 of 31 participating local banks (27 private and 4 public), which assumes the credit risk for the loan amount. 
The importer must document a reasonable number of bids and certify that the goods are new and unused; made in, and shipped from, the United States; and consistent with the U.S. government’s list of eligible commodities. Before the Egyptian bank issues a letter of credit authorizing the transaction, USAID again reviews the application. Regardless of whether the importer repays the loan, the local bank is required to send the net proceeds in Egyptian pounds to a special account at the Central Bank of Egypt. The CIP provides favorable financing to importers of U.S. goods and, through the loan repayments, supplies funds to the Egyptian government. During fiscal years 1999-2003, about 650 Egyptian firms used the CIP to import just over $1 billion in U.S. products from approximately 670 U.S. exporters. The program gives Egyptian importers access to foreign currency at fixed exchange rates and offers varying interest-free grace periods and repayment periods, as well as incentive programs that extend the grace periods. To ensure that all transactions comply with CIP rules and regulations, USAID has established several management controls. USAID and the Egyptian government mutually determine the uses of the local currency from CIP loan repayments, which are held in a special account at Egypt’s Central Bank. In fiscal years 1999-2003, approximately 650 Egyptian firms used the CIP to import $1.1 billion worth of U.S. products. Midsized to large firms accounted for 75 percent, or about $850 million, of CIP transactions. During this period, an average of 90 new Egyptian importers used the CIP each year; the average and median loan values were $300,000 and $153,000, respectively (CIP loans can range from $10,000 to $8 million). Egypt’s industrial sector accounted for about two-thirds of CIP loans, with most of the remaining loans used for agriculture, construction, and health care equipment imports. 
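The gap between the average ($300,000) and median ($153,000) loan values noted above is characteristic of a right-skewed distribution. As a brief sketch with invented loan amounts (not actual CIP data) within the program's $10,000 to $8 million range, a single large loan is enough to pull the mean well above the median:

```python
# Hypothetical loan amounts (not actual CIP data) within the program's
# $10,000-$8 million range; the one large loan skews the mean upward.
import statistics

loans = [10_000, 50_000, 120_000, 153_000, 200_000, 350_000, 8_000_000]

print(f"mean:   ${statistics.mean(loans):,.0f}")    # pulled up by the outlier
print(f"median: ${statistics.median(loans):,.0f}")  # unaffected by the outlier
```

Here the mean ($1,269,000) is more than eight times the median ($153,000), which is why GAO reports both figures rather than the average alone.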
During fiscal years 1999-2003, commodities imported by Egyptian businesses included items such as computer systems, diesel engines, hydraulic pumps, irrigation equipment, and chick incubation systems. In addition, according to USAID, approximately 670 U.S. exporters from 43 states, plus the District of Columbia and Puerto Rico, used the CIP to export to Egypt in fiscal years 1999-2003. In a 2003 USAID-sponsored survey, 66 percent of Egyptian importers surveyed said that they would have imported U.S. goods without the CIP. However, 49 percent of survey respondents said that the CIP helped increase their firm’s production capacity and 32 percent said that the program helped increase their firm’s employment levels. The importers surveyed reported that they used the CIP chiefly because of three program features—the fixed exchange rate, interest-free grace periods, and the ability to repay loans in Egyptian pounds. Although three-quarters of the U.S. exporters surveyed indicated that they would have exported goods to Egypt without the CIP, almost half said that the CIP helped their firm increase its exports to Egypt. CIP financing helps Egyptian firms obtain from Egyptian banks the foreign currency loans needed to import goods. Representatives of several Egyptian firms told us that the CIP had helped them procure part or, in some cases, all of the foreign currency they needed for U.S. imports. Foreign currency can be difficult to obtain because, according to bank officials we interviewed, Egyptian banks often receive more requests for foreign currency loans than they can accommodate. In addition, Egypt’s Central Bank instructed banks in 2003 not to make foreign currency loans unless their clients are able to repay the loans in foreign currency. The financing terms that the CIP offers Egyptian importers depend on the type of commodity and how and where it will be used. 
Under the program’s standard terms, USAID allows participating Egyptian banks to extend the interest-free grace period to traders and end-users for noncapital goods for up to 2 and 4 months, respectively; for capital goods, the grace period may be extended for 9 and 18 months, respectively. Egyptian importers can take 6 months to 8 years to repay their loans after the grace period ends. The terms of CIP loans have been adjusted in response to changes in demand for the CIP. For example, when demand for the program has been high, USAID shortened the duration of the interest-free grace period to reduce distortions of the commercial trade finance market. USAID also offers three incentive programs extending the interest-free grace period to Egyptian firms that (1) are increasing their exports, (2) invest in Upper Egypt, or (3) invest in environmentally friendly equipment. According to USAID, during calendar years 1999-2003, about 12 percent of CIP’s resources ($133 million) supported imports by firms that qualified for these programs. Over the last 5 years, nearly half of these incentive-related funds, or $60 million, went to importers who increased their exports; $45 million went to Upper Egyptian importers; and $28 million went to importers of environmentally friendly equipment. Officials from USAID’s Office of the Inspector General told us that the percentage of fraud in the CIP is relatively low given the high volume of transactions in the program. To ensure that the CIP complies with the agency’s rules and regulations, USAID uses a series of management controls. These include site visits and physical checks to ensure that goods are used for their intended purpose, as well as posttransaction reviews to detect overpayment for imported goods and noncompliance with program requirements. 
USAID conducts 25 end-use checks in Egypt annually to ensure that commodities purchased through the program meet these requirements—for example, that goods are used promptly for their intended purpose. Importers who have not complied with CIP requirements have been debarred from the program for 3 months to 3 years. According to USAID officials, seven importers have been debarred from the CIP since 1999. In addition, USAID requires that U.S. suppliers refund overcharges for transactions in which goods were not made in and shipped from the United States. From 1999 to 2003, USAID obtained 120 refunds totaling about $4.7 million. In an annual memorandum of understanding, USAID and Egypt’s Ministry of Foreign Affairs jointly determine how much of the local currency from the repayment of loans in the special account will support Egypt’s general and sector budgets and USAID’s activities. (See fig. 3 for a depiction of the account’s funding flow). The special account comprises multiple discrete accounts for the CIP as well as for the cash transfer program. For planning purposes, these are considered one large account, but USAID and the Egyptian Foreign Affairs Ministry can track the funding to a CIP or cash transfer deposit from a prior year. Although the Foreign Assistance Act and the annual memorandum give USAID a role in determining the uses of the funds in the account, the local currency belongs to the Egyptian government. For fiscal years 1999-2003, about three-quarters of the CIP-generated funds from the special account were used for general and sector budget support to help reduce Egypt’s budget deficit. In addition, USAID used about 6 percent of CIP-generated funds in the special account for some of its operating expenses. 
USAID also used about 9 percent of this local currency to finance various projects, technical and feasibility studies, evaluations, and assessments, among other things; the remaining 8 percent covered other disbursements such as refunds for cancelled transactions. Over the years, congressional committee reports have encouraged USAID to use funds from the account to support specific projects, such as the construction of a new campus for the American University in Cairo. Table 1 lists examples of activities funded with CIP-generated funds from the special account during fiscal years 1999-2003. Various factors have limited the CIP’s ability to foster a competitive private sector in Egypt. First, the CIP has been operating in a policy and economic climate not conducive to business activity. Although the government of Egypt took steps, beginning in 1991, to shift from a centrally planned economy to one more hospitable to private enterprise, the pace of reforms slowed in the late 1990s. For example: Subsidies and government spending. The budget deficit as a percentage of gross domestic product declined from more than 17 percent in the early 1990s to 3 percent at the end of the decade. However, the deficit subsequently increased steadily, reaching 6.3 percent in 2002-2003. The Economist Intelligence Unit forecasts that Egypt’s budget deficit will widen to about 7 percent in fiscal years 2004 and 2005, mainly because of subsidies to protect citizens from price increases and slow private sector economic activity. According to the State Department, Egypt’s real gross domestic product growth slowed from nearly 6 percent in fiscal year 1999 to roughly 3 percent in fiscal year 2003, and the private sector’s share of this growth fell. Tariffs and customs duties. In the early 1990s, Egypt agreed with the World Trade Organization (WTO) that it would abide by multilateral trade rules and liberalize its trade policies. 
Accordingly, by the end of the 1990s, Egypt reduced the maximum tariffs for most imports from 50 percent to 40 percent and lifted a ban on fabric imports, among other actions. However, many high tariffs persist—for example, on products related to the automobile and poultry industries and on some textiles. The full implementation of the Egyptian government’s WTO commitments is expected to take several more years. State-owned enterprises. The Egyptian government’s pace in privatizing government-owned enterprises also slowed. According to Egypt’s Ministry of Public Enterprise, 191 of more than 300 state-owned enterprises were privatized between 1993 and 2002. Although the number of entities privatized each year increased from 6 in 1993 to a high of 32 in 1998, it steadily declined to 6 in 2002. According to a September 2003 U.S. Embassy report, two privatization transactions took place in the first quarter of 2003. Further, according to a senior USAID official, there are concerns that the CIP may have eased pressure on the Egyptian government to speed the pace of economic reforms. Although the $200 million that the CIP brings into the country is relatively small—roughly 0.3 percent of the gross domestic product—the funds generated by the program represent, on average, 4.2 percent of the government’s budget deficit in the last 5 years. Because CIP funding is not tied to specific conditions, the funding may ease the government’s resource constraints without requiring it to reform. A second factor affecting the CIP’s ability to strengthen the private sector has been the perceived inconsistency in the government’s foreign exchange policy, according to several U.S. government studies and a senior Egyptian economist. For example, between 2000 and 2003, the government devalued the Egyptian pound several times; in 2003, it announced that it was adopting a free market exchange rate but subsequently continued to try to support the value of the pound. 
These actions have undermined the confidence of foreign and domestic investors and contributed to the persistence of a parallel “black” market for foreign currency and to foreign currency shortages, hampering firms’ ability to do business in Egypt. In this context, the CIP can provide only limited relief to the country’s foreign currency needs. A representative from the Egyptian Chamber of Commerce stated that the private sector requires about $15 billion in foreign exchange annually, but the CIP supplies less than 2 percent of this amount. A third factor limiting the CIP’s effect on the private sector has been Egyptian banks’ hesitancy to provide financing. Because of experience with bad loans, the recent economic slowdown, and the resulting increased risk of nonrepayment, Egyptian banks are reluctant to finance entrepreneurial activity, according to the Economist Intelligence Unit. Egyptian bank officials told us that they generally provide CIP funds to firms they deem creditworthy, usually well-established customers with proven credit records. Further, officials at one bank indicated that the bank is moving away from corporate lending in general, including use of the CIP, to concentrate on “less risky” activities such as consumer lending. Finally, the CIP’s impact on the private sector has been constrained by Egypt’s large number of informal businesses, which the program is not designed to reach. These businesses, which make up more than 80 percent of the country’s 1.4 million firms, generally have no access to formal sources of credit such as the CIP, because they are unable to use their assets as collateral for loans. Until broader reforms bring the informal sector into the legal and economic mainstream, the CIP’s ability to foster a competitive private sector in Egypt will likely remain limited. In conclusion, Mr. 
Chairman, while the CIP provides benefits to program participants and supports the Egyptian government’s budget, several factors have affected its ability to foster a competitive private sector in Egypt. In this context, it is important that policymakers continue to evaluate whether this program offers the most effective means to achieve U.S. policy goals in Egypt. This completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For questions regarding this testimony, please contact David Gootnick at (202) 512-3149 or Phillip Herr at (202) 512-8509. Other key contributors to this statement were Martin De Alteriis, Kathryn Hartsburg, Julie Hirshen, Simin Ho, Reid Lowe, Seyda Wentworth, and Monica Wolford. At the request of the Chairman of the House International Relations Committee, we examined the Commodity Import Program (CIP) in Egypt. For fiscal years 1999-2003, we analyzed (1) program participants’ use of the CIP and the Egyptian government’s and USAID’s use of program funds and (2) factors that have affected the CIP’s ability to foster a competitive private sector in Egypt. To determine the CIP’s goals, we examined the U.S. Agency for International Development’s (USAID) Congressional Budget Justifications for this time frame. We reviewed various laws and congressional reports that mentioned the CIP as part of the overall mandate for economic support funds to Egypt, and we also reviewed applicable international agreements. We spoke with representatives from the Department of State, the Department of Agriculture’s Foreign Agricultural Service, and the Department of Commerce’s Foreign Commercial Service. We also reviewed and analyzed applicable USAID regulations, program documentation and descriptions, as well as USAID-sponsored reports and analyses. 
In addition, we interviewed USAID officials in Washington, D.C., and Cairo and Alexandria, Egypt, and officials of the Egyptian ministries of Foreign Affairs and Finance. We obtained from the Egyptian Ministry of Foreign Affairs data on Egyptian government projects and activities supported by CIP-generated local currency. To determine the reliability of the data provided by the Ministry of Foreign Affairs, we questioned officials at USAID in Egypt, who informed us that they had seen bank statements confirming deposits and releases of funds and that they had a sufficient level of confidence in the data. We determined that the data were sufficiently reliable to indicate the general purposes for which special account funds were used and to provide illustrations of the sums allotted to particular types of projects. We also interviewed eight Egyptian companies from various sectors (e.g., industry and agriculture) and 6 of the 31 participating Egyptian banks that used the CIP during fiscal years 1999-2003. Finally, we spoke with industry and bank representatives from the Egyptian Chamber of Commerce in Cairo who are familiar with the program. Specifically, to determine trends of the program’s users and uses, we analyzed USAID data on CIP transactions during these 5 fiscal years. In addition, to obtain information about participants’ experiences with, and opinions of, the CIP, we analyzed data from surveys, conducted by a USAID contractor, of (1) firms that export to Egypt from the United States and (2) Egyptian firms that import from the United States under the CIP. To calculate the number of firms that used the CIP in fiscal years 1999-2003, the average and median value of the transactions, and the annual number of first-time CIP users, we analyzed USAID data on individual export and import transactions. To examine the internal controls that USAID uses to manage the CIP in Egypt, we reviewed reports of USAID’s Office of the Inspector General from 1999 through 2003. 
We also interviewed officials from the Inspector General’s office in Washington, D.C., and the Regional Inspector General’s office in Cairo. In addition, we spoke with officials from USAID’s Office of Management Planning and Innovation in Washington, D.C., regarding the actions that USAID had taken to address recommendations from the Inspector General’s office during this time frame. To assess the reliability of the survey data, we reviewed the contractor’s description of the methodology, queried the contractor and USAID officials in Egypt, and examined the data electronically. We determined that most of the survey responses were sufficiently reliable to report on respondents’ opinions and experiences; however, we noted that we could not generalize from the survey respondents to all CIP participants. Furthermore, because the survey was designed to collect the opinions of firms that participated in fiscal years 1994-2002, we could not focus our analysis exclusively on 1999-2003. To assess the reliability of the transactions data, we performed basic reasonableness tests and queried USAID officials in Egypt. In the course of our assessment, we found a relatively small number of data entry errors. We were able to correct these errors in the importers’ transaction data, and we were also able to combine data for firms that were clearly linked, such as firms with a parent-subsidiary relationship. However, we were not able to make these corrections for the exporters’ database and, as a result, the figure reported likely includes a small number of duplicate firms. Nevertheless, we determined that the importers’ and exporters’ transactions data were sufficiently reliable for the purposes of this report. 
To gain a better understanding of Egypt’s macroeconomic environment during fiscal years 1991-2003, we conducted a literature review and interviewed researchers in Egypt, Egyptian government officials from the Ministry of Finance, and officials from Egypt’s private and public banks. For the statistical analysis, we used data from Egypt’s Central Bank and other official sources, as well as country reports provided by the U.S. Embassy in Cairo and independent economic forecasting agencies.
The Commodity Import Program (CIP), managed by the U.S. Agency for International Development (USAID), is intended to foster a competitive private sector in Egypt, in addition to assisting U.S. exporters. The program also supports the government of Egypt and USAID activities and expenses in Egypt. Since 1992, Congress has appropriated at least $200 million per year for the CIP. In 1998, the United States negotiated a reduction in its economic assistance to Egypt, including the CIP, through fiscal year 2009. In this context, GAO was asked to discuss its ongoing analysis of (1) program participants' use of the CIP and the Egyptian government's and USAID's use of program funds and (2) factors that have affected the CIP's ability to foster a competitive private sector in Egypt. We received comments on a draft of this statement from USAID, which we incorporated where appropriate. In general, USAID agreed with our observations. The CIP provides loans to Egyptian importers of U.S. goods and, through loan repayments, supplies funds to the government of Egypt. During fiscal years 1999-2003, about 650 Egyptian firms used the CIP to import $1.1 billion in U.S. products from approximately 670 U.S. exporters. In a 2003 USAID survey, about two-thirds of CIP importers said that they would have imported U.S. goods without the program, but half said that it helped increase their firm's production capacity and one-third said that it helped increase their firm's employment levels. The Egyptian government and USAID jointly determine the uses of the funds from loan repayments. In fiscal years 1999-2003, about three-quarters of these funds supported Egypt's general and sector budgets and about 15 percent supported USAID-administered activities and operating expenses in Egypt. Despite the positive results reported by some CIP users, various factors have limited the program's ability to foster a competitive private sector in Egypt. 
According to the State Department, the slow pace of Egypt's economic reforms has created a climate not conducive to private enterprise. Further, according to several U.S. government studies, the Egyptian government's inconsistent foreign exchange policies have hampered firms' ability to do business in Egypt, limiting the extent to which the CIP can relieve the country's foreign currency needs. In addition, because of experience with bad loans, the recent economic slowdown, and the resulting increased risk of nonrepayment, bank officials told us that they are generally reluctant to provide loans to entrepreneurs. Finally, because the CIP is not designed to reach firms in Egypt's large informal economy, the program's ability to foster a competitive private sector is necessarily limited.
As computer technology has advanced, both government and private entities have become increasingly dependent on computerized information systems to carry out operations and to process, maintain, and report essential information. Public and private organizations rely on computer systems to transmit proprietary and other sensitive information, develop and maintain intellectual capital, conduct operations, process business transactions, transfer funds, and deliver services. In addition, the Internet has grown increasingly important to American businesses and consumers, serving as a medium for hundreds of billions of dollars of commerce each year, and has developed into an extended information and communications infrastructure that supports vital services such as power distribution, health care, law enforcement, and national defense. Ineffective protection of these information systems and networks can result in a failure to deliver these vital services; loss or theft of computer resources, assets, and funds; inappropriate access to and disclosure, modification, or destruction of sensitive information, such as national security information, PII, and proprietary business information; disruption of essential operations supporting critical infrastructure, national defense, or emergency services; undermining of agency missions due to embarrassing incidents that erode the public’s confidence in government; use of computer resources for unauthorized purposes or to launch attacks on other systems; damage to networks and equipment; and high costs for remediation. Recognizing the importance of these issues, Congress enacted laws intended to improve the protection of federal information and systems. 
These laws include the Federal Information Security Modernization Act of 2014 (FISMA), which, among other things, authorizes DHS to (1) assist the Office of Management and Budget (OMB) with overseeing and monitoring agencies’ implementation of security requirements; (2) operate the federal information security incident center; and (3) provide agencies with operational and technical assistance, such as that for continuously diagnosing and mitigating cyber threats and vulnerabilities. The act also reiterated the 2002 FISMA requirement for the head of each agency to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information or information systems. In addition, the act continues the requirement for federal agencies to develop, document, and implement an agency-wide information security program. The program is to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by, among other things, natural disasters, defective computer or network equipment, and careless or poorly trained employees. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives, which can include seeking monetary gain or a political, economic, or military advantage. 
For example, adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives—sometimes referred to as “advanced persistent threats”— pose increasing risks. Table 1 describes common cyber adversaries. These adversaries make use of various techniques— or exploits—that may adversely affect federal information, computers, software, networks, and operations. Table 2 describes common types of cyber exploits. An adversary may employ multiple tactics, techniques, and exploits to conduct a cyber attack. The National Institute of Standards and Technology (NIST) has identified several representative events that may constitute a cyber attack: Perform reconnaissance and gather information: An adversary may gather information on a target by, for example, scanning its network perimeters or using publicly available information. Craft or create attack tools: An adversary prepares its means of attack by, for example, crafting a phishing attack or creating a counterfeit (“spoof”) website. Deliver, insert, or install malicious capabilities: An adversary can use common delivery mechanisms, such as e-mail or downloadable software, to insert or install malware into its target’s systems. Exploit and compromise: An adversary may exploit poorly configured, unauthorized, or otherwise vulnerable information systems to gain access. Conduct an attack: Attacks can include efforts to intercept information or disrupt operations (e.g., denial of service or physical attacks). Achieve results: Desired results include obtaining sensitive information via network “sniffing” or exfiltration, causing degradation or destruction of the target’s capabilities; damaging the integrity of information through creating, deleting, or modifying data; or causing unauthorized disclosure of sensitive information. 
Maintain a presence or set of capabilities: An adversary may try to maintain an undetected presence on its target’s systems by inhibiting the effectiveness of intrusion-detection capabilities or adapting behavior in response to the organization’s surveillance and security measures. More generally, the nature of cyber-based attacks can vastly enhance their reach and impact. For example, cyber attacks do not require physical proximity to their victims, can be carried out at high speeds and directed at multiple victims simultaneously, and can more easily allow attackers to remain anonymous. These inherent advantages, combined with the increasing sophistication of cyber tools and techniques, allow threat actors to target government agencies and their contractors, potentially resulting in the disclosure, alteration, or loss of sensitive information, including PII; theft of intellectual property; destruction or disruption of critical systems; and damage to economic and national security. Since fiscal year 2006, the number of information security incidents affecting systems supporting the federal government has steadily increased each year: rising from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an increase of 1,121 percent. (See fig. 1.) Furthermore, the number of reported security incidents involving PII at federal agencies has more than doubled in recent years—from 10,481 incidents in fiscal year 2009 to 27,624 incidents in fiscal year 2014. Figure 2 shows the different types of incidents reported in fiscal year 2014. These incidents and others like them can adversely affect national security; damage public health and safety; and lead to inappropriate access to and disclosure, modification, or destruction of sensitive information. Recent examples highlight the impact of such incidents: In June 2015, OPM reported that an intrusion into its systems affected personnel records of about 4 million current and former federal employees. 
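The representative attack events above can be read as an ordered lifecycle, which suggests a simple triage aid: given a set of observed indicators, identify the earliest stage an adversary appears to have reached. The indicator names and the mapping in this sketch are illustrative assumptions, not drawn from NIST guidance.

```python
# Illustrative sketch: mapping observed indicators to the attack lifecycle
# stages described above. Stage names follow the text; the indicator
# keywords and this mapping are hypothetical examples.
ATTACK_STAGES = [
    "reconnaissance",
    "craft attack tools",
    "deliver malicious capabilities",
    "exploit and compromise",
    "conduct attack",
    "achieve results",
    "maintain presence",
]

# Hypothetical mapping from observed indicators to lifecycle stages.
INDICATOR_TO_STAGE = {
    "port scan": "reconnaissance",
    "spoofed website registered": "craft attack tools",
    "phishing e-mail received": "deliver malicious capabilities",
    "malware installed": "deliver malicious capabilities",
    "privilege escalation": "exploit and compromise",
    "denial of service": "conduct attack",
    "data exfiltration": "achieve results",
    "disabled intrusion detection": "maintain presence",
}

def earliest_stage(indicators):
    """Return the earliest lifecycle stage suggested by a set of indicators."""
    stages = [INDICATOR_TO_STAGE[i] for i in indicators if i in INDICATOR_TO_STAGE]
    if not stages:
        return None
    return min(stages, key=ATTACK_STAGES.index)

print(earliest_stage(["data exfiltration", "port scan"]))  # reconnaissance
```

A responder seeing both exfiltration and scanning would conclude the adversary has been present since at least the reconnaissance stage, which is the point of ordering the stages.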
The Director of OPM also stated that a separate incident may have compromised OPM systems related to background investigations, but its scope and impact have not yet been determined. In June 2015, the Commissioner of the Internal Revenue Service (IRS) testified that unauthorized third parties had gained access to taxpayer information from its “Get Transcript” application. According to IRS, criminals used taxpayer-specific data acquired from non-IRS sources to gain unauthorized access to information on approximately 100,000 tax accounts. These data included Social Security information, dates of birth, and street addresses. In April 2015, the Department of Veterans Affairs (VA) Office of Inspector General reported that two VA contractors had improperly accessed the VA network from foreign countries using personally owned equipment. In February 2015, the Director of National Intelligence stated that unauthorized computer intrusions were detected in 2014 on OPM’s networks and those of two of its contractors. The two contractors were involved in processing sensitive PII related to national security clearances for federal employees. In September 2014, a cyber-intrusion into the United States Postal Service’s information systems may have compromised PII for more than 800,000 of its employees. Given the risks posed by cyber threats and the increasing number of incidents, it is crucial that federal agencies take appropriate steps to secure their systems and information. We and agency inspectors general have identified challenges in protecting federal information and systems, including those in the following key areas: Designing and implementing risk-based cybersecurity programs at federal agencies. Agencies continue to have shortcomings in assessing risks, developing and implementing security controls, and monitoring results. 
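As a quick arithmetic check, the incident growth figures cited above are internally consistent:

```python
# Reported federal information security incidents, from the figures above.
fy2006_incidents, fy2014_incidents = 5_503, 67_168
pct_increase = (fy2014_incidents - fy2006_incidents) / fy2006_incidents * 100
print(round(pct_increase))  # 1121, matching the cited 1,121 percent

# PII-related incidents "more than doubled" between fiscal years 2009 and 2014.
pii_fy2009, pii_fy2014 = 10_481, 27_624
print(pii_fy2014 / pii_fy2009)  # roughly 2.6 times the fiscal year 2009 count
```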
Specifically, for fiscal year 2014, 19 of the 24 federal agencies covered by the Chief Financial Officers (CFO) Act reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over their financial reporting. Moreover, inspectors general at 23 of the 24 agencies cited information security as a major management challenge for their agency. As we testified in April 2015, for fiscal year 2014, most of the agencies had weaknesses in the five key security control categories. These control categories are (1) limiting, preventing, and detecting inappropriate access to computer resources; (2) managing the configuration of software and hardware; (3) segregating duties to ensure that a single individual does not have control over all key aspects of a computer-related operation; (4) planning for continuity of operations in the event of a disaster or disruption; and (5) implementing agency-wide security management programs that are critical to identifying control deficiencies, resolving problems, and managing risks on an ongoing basis. (See fig. 3.) Examples of these weaknesses include: (1) granting users access permissions that exceed the level required to perform their legitimate job-related functions; (2) not ensuring that only authorized users can access an agency’s systems; (3) not using encryption to protect sensitive data from being intercepted and compromised; (4) not updating software with the current versions and latest security patches to protect against known vulnerabilities; and (5) not ensuring employees were trained commensurate with their responsibilities. We and agency inspectors general have made hundreds of recommendations to agencies aimed at improving their implementation of these information security controls. Enhancing oversight of contractors providing IT services.
In August 2014, we reported that five of six agencies we reviewed were inconsistent in overseeing assessments of contractors’ implementation of security controls. This was partly because agencies had not documented IT security procedures for effectively overseeing contractor performance. In addition, according to OMB, 16 of 24 agency inspectors general determined that their agency’s program for managing contractor systems lacked at least one required element. We recommended that the reviewed agencies establish and implement IT security oversight procedures for such systems. The agencies generally concurred with our recommendations. We also made one recommendation to OPM and the agency concurred, but has not yet implemented this recommendation. Improving security incident response activities. In April 2014, we reported that the 24 agencies did not consistently demonstrate that they had effectively responded to cyber incidents. Specifically, we estimated that agencies had not completely documented actions taken in response to detected incidents reported in fiscal year 2012 in about 65 percent of cases. In addition, the 6 agencies we reviewed had not fully developed comprehensive policies, plans, and procedures to guide their incident response activities. We recommended that OMB address agency incident response practices government-wide and that the 6 agencies improve the effectiveness of their cyber incident response programs. The agencies generally agreed with these recommendations. Responding to breaches of PII. In December 2013, we reported that eight federal agencies had inconsistently implemented policies and procedures for responding to data breaches involving PII. In addition, OMB requirements for reporting PII-related data breaches were not always feasible or necessary. 
Thus, we concluded that agencies may not be consistently taking actions to limit the risk to individuals from PII-related data breaches and may be expending resources to meet OMB reporting requirements that provide little value. We recommended that OMB revise its guidance to agencies on responding to a PII-related data breach and that the reviewed agencies take specific actions to improve their response to PII-related data breaches. OMB neither agreed nor disagreed with our recommendation; four of the reviewed agencies agreed, two partially agreed, and two neither agreed nor disagreed. Implementing security programs at small agencies. In June 2014, we reported that six small agencies (i.e., agencies with 6,000 or fewer employees) had not implemented or not fully implemented their information security programs. For example, key elements of their plans, policies, and procedures were outdated, incomplete, or did not exist, and two of the agencies had not developed an information security program with the required elements. We recommended that OMB include a list of agencies that did not report on the implementation of their information security programs in its annual report to Congress on compliance with the requirements of FISMA, and include information on small agencies’ programs. OMB generally concurred with our recommendations. We also recommended that DHS develop guidance and services targeted at small agencies. DHS agreed and has implemented this recommendation. Until federal agencies take actions to address these challenges— including implementing the hundreds of recommendations we and inspectors general have made—federal systems and information will be at an increased risk of compromise from cyber-based attacks and other threats. In addition to the efforts of individual agencies, DHS and OMB have several initiatives under way to enhance cybersecurity across the federal government. While these initiatives all have potential benefits, they also have limitations. 
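Before turning to those initiatives, one recurring weakness noted earlier, granting users access permissions that exceed their legitimate job-related functions, can be made concrete with a small audit sketch. The role names and permissions below are hypothetical.

```python
# Minimal sketch of an audit for over-privileged accounts: flag permissions
# granted to a user beyond what the user's role requires. Role and
# permission names are hypothetical illustrations.
ROLE_REQUIRED = {
    "payroll clerk": {"read_payroll", "update_payroll"},
    "help desk": {"reset_password"},
}

def excess_permissions(role, granted):
    """Return permissions granted beyond what the role requires."""
    return set(granted) - ROLE_REQUIRED.get(role, set())

# A payroll clerk holding database administrator rights is over-privileged.
print(excess_permissions("payroll clerk",
                         {"read_payroll", "update_payroll", "drop_tables"}))
# {'drop_tables'}
```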
Personal Identity Verification: In August 2004, Homeland Security Presidential Directive 12 ordered the establishment of a mandatory, government-wide standard for secure and reliable forms of identification for federal government employees and contractor personnel who access government-controlled facilities and information systems. Subsequently, NIST defined requirements for such personal identity verification (PIV) credentials based on “smart cards”—plastic cards with integrated circuit chips to store and process data—and OMB directed federal agencies to issue and use PIV credentials to control access to federal facilities and systems. In September 2011, we reported that OMB and the eight agencies in our review had made mixed progress in using PIV credentials to control access to federal facilities and information systems. We attributed this mixed progress to a number of obstacles, including logistical problems in issuing PIV credentials to all agency personnel and agencies not making this effort a priority. We made several recommendations to the eight agencies and to OMB to more fully implement PIV card capabilities. Although two agencies did not comment, seven agencies agreed with our recommendations or discussed actions they were taking to address them. For example, we made four recommendations to DHS. The department concurred and has taken action to implement them. In February 2015, OMB reported that, as of the end of fiscal year 2014, only 41 percent of agency user accounts at the 23 civilian CFO Act agencies required PIV cards for accessing agency systems; at some agencies, only 1 percent of user accounts required PIV cards for such access. Continuous Diagnostics and Mitigation (CDM): According to DHS, this program is intended to provide federal departments and agencies with capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritize these risks based on potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first.
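The risk-prioritization idea described above can be sketched as follows. The finding fields, scores, and weighting are hypothetical illustrations, not the actual CDM dashboard's data model.

```python
# Minimal sketch of risk-based prioritization as described for CDM above:
# findings are ranked so the most significant problems are mitigated first.
# The fields and scoring approach here are hypothetical.
findings = [
    {"host": "web01", "vuln": "unpatched OpenSSL", "severity": 9.8, "asset_criticality": 3},
    {"host": "dev07", "vuln": "weak password policy", "severity": 5.0, "asset_criticality": 1},
    {"host": "db02", "vuln": "default admin account", "severity": 7.5, "asset_criticality": 3},
]

def risk_score(finding):
    """Combine vulnerability severity with how critical the affected asset is."""
    return finding["severity"] * finding["asset_criticality"]

# Highest-risk findings first, as a CDM-style dashboard might order alerts.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f["host"], f["vuln"])
```

Under this toy weighting, the severe flaw on the critical web server outranks the moderate flaw on a low-value development box, which is the resource-allocation behavior the program description calls for.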
These tools include sensors that perform automated searches for known cyber vulnerabilities, the results of which feed into a dashboard that alerts network managers. These alerts can be prioritized, enabling agencies to allocate resources based on risk. DHS, in partnership with the General Services Administration, has established a government-wide contract that is intended to allow federal agencies (as well as state, local, and tribal governmental agencies) to acquire CDM tools at discounted rates. In earlier work, we reported that the Department of State’s continuous monitoring application, known as iPost, provided enhanced visibility over information security at the department and helped IT administrators identify, monitor, and mitigate information security weaknesses. However, we also noted limitations and challenges with State’s approach, including ensuring that its risk-scoring program identified relevant risks and that iPost data were timely, complete, and accurate. We made several recommendations to improve the implementation of the iPost program, and State partially agreed. National Cybersecurity Protection System (NCPS): The National Cybersecurity Protection System, operationally known as “EINSTEIN,” is a suite of capabilities intended to detect and prevent malicious network traffic from entering and exiting federal civilian government networks. The EINSTEIN capabilities of NCPS are described in table 3. In March 2010, we reported that while agencies that participated in EINSTEIN 1 improved their identification of incidents and mitigation of attacks, DHS lacked performance measures to understand whether the initiative was meeting its objectives. We made four recommendations regarding the management of the EINSTEIN program, and DHS has since taken action to address them. Currently, we are reviewing NCPS in response to provisions of the Senate and House reports accompanying the Consolidated Appropriations Act, 2014.
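As background for the observations that follow, NCPS's intrusion detection works by matching network traffic against signatures of known threats. The basic mechanism can be sketched with toy patterns; these are illustrative substrings, not actual EINSTEIN or NCPS signatures.

```python
# Toy sketch of signature-based intrusion detection: scan a network payload
# for byte patterns associated with known attacks. The 'signatures' below
# are illustrative examples, not real detection rules.
SIGNATURES = {
    "suspicious-useragent": b"sqlmap",
    "path-traversal": b"../../",
}

def match_signatures(payload: bytes):
    """Return the names of known signatures found in a network payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"GET /../../etc/passwd HTTP/1.1"))  # ['path-traversal']
print(match_signatures(b"GET /index.html HTTP/1.1"))        # []
```

The limitation is visible in the second call: traffic matching no known signature passes silently, which is why relying on signature-based detection alone, without anomaly-based or stateful protocol analysis, leaves novel attacks undetected.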
The objectives of our review are to determine the extent to which (1) NCPS meets stated objectives, (2) DHS has designed requirements for future stages of the system, and (3) federal agencies have adopted the system. Our final report is expected to be released later this year, and our preliminary observations include the following: DHS appears to have developed and deployed aspects of the intrusion detection and intrusion prevention capabilities, but potential weaknesses may limit their ability to detect and prevent computer intrusions. For example, of the three intrusion detection methodologies identified by NIST (signature-based, anomaly-based, and stateful protocol analysis), NCPS uses only one: signature-based detection. Further, the system has the ability to prevent intrusions, but is currently only able to proactively mitigate threats across a limited subset of network traffic (i.e., Domain Name System traffic and e-mail). DHS has identified a set of NCPS capabilities that are planned to be implemented in fiscal year 2016, but it does not appear to have developed formalized requirements for capabilities planned through fiscal year 2018. The NCPS intrusion detection capability appears to have been implemented at 23 CFO Act agencies; however, the intrusion prevention capability appears to have limited deployment, covering portions of only 5 of these agencies. Deployment may have been hampered by various implementation and policy challenges. In conclusion, the danger posed by the wide array of cyber threats facing the nation is heightened by weaknesses in the federal government’s approach to protecting its systems and information. While recent government-wide initiatives hold promise for bolstering the federal cybersecurity posture, it is important to note that no single technology or set of practices is sufficient to protect against all these threats.
A “defense in depth” strategy is required that includes well-trained personnel, effective and consistently applied processes, and appropriately implemented technologies. While agencies have elements of such a strategy in place, more needs to be done to fully implement it and to address existing weaknesses. In particular, implementing GAO and inspector general recommendations will strengthen agencies’ ability to protect their systems and information, reducing the risk of a potentially devastating cyber attack. Chairwoman Comstock, Chairman Loudermilk, Ranking Members Lipinski and Beyer, and Members of the Subcommittees, this concludes my statement. I would be happy to answer your questions. If you have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Other staff members who contributed to this statement include Larry Crosland and Michael Gilmore (assistant directors), Bradley Becker, Christopher Businsky, Nancy Glover, Rosanna Guerrero, Kush Malhotra, and Lee McCracken. Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies. GAO-15-725T. June 24, 2015. Cybersecurity: Actions Needed to Address Challenges Facing Federal Systems. GAO-15-573T. April 22, 2015. Information Security: IRS Needs to Continue Improving Controls over Financial and Taxpayer Data. GAO-15-337. March 19, 2015. Information Security: FAA Needs to Address Weaknesses in Air Traffic Control Systems. GAO-15-221. January 29, 2015. Information Security: Additional Actions Needed to Address Vulnerabilities That Put VA Data at Risk. GAO-15-220T. November 18, 2014. Information Security: VA Needs to Address Identified Vulnerabilities. GAO-15-117. November 13, 2014. Federal Facility Cybersecurity: DHS and GSA Should Address Cyber Risk to Building and Access Control Systems. GAO-15-6. December 12, 2014.
Consumer Financial Protection Bureau: Some Privacy and Security Procedures for Data Collections Should Continue Being Enhanced. GAO-14-758. September 22, 2014. Healthcare.Gov: Information Security and Privacy Controls Should Be Enhanced to Address Weaknesses. GAO-14-871T. September 18, 2014. Healthcare.Gov: Actions Needed to Address Weaknesses in Information Security and Privacy Controls. GAO-14-730. September 16, 2014. Information Security: Agencies Need to Improve Oversight of Contractor Controls. GAO-14-612. August 8, 2014. Information Security: FDIC Made Progress in Securing Key Financial Systems, but Weaknesses Remain. GAO-14-674. July 17, 2014. Information Security: Additional Oversight Needed to Improve Programs at Small Agencies. GAO-14-344. June 25, 2014. Maritime Critical Infrastructure Protection: DHS Needs to Better Address Port Cybersecurity. GAO-14-459. June 5, 2014. Information Security: Agencies Need to Improve Cyber Incident Response Practices. GAO-14-354. April 30, 2014. Information Security: SEC Needs to Improve Controls over Financial Systems and Data. GAO-14-419. April 17, 2014. Information Security: IRS Needs to Address Control Weaknesses That Place Financial and Taxpayer Data at Risk. GAO-14-405. April 8, 2014. Information Security: Federal Agencies Need to Enhance Responses to Data Breaches. GAO-14-487T. April 2, 2014. Critical Infrastructure Protection: Observations on Key Factors in DHS’s Implementation of Its Partnership Model. GAO-14-464T. March 26, 2014. Information Security: VA Needs to Address Long-Standing Challenges. GAO-14-469T. March 25, 2014. Critical Infrastructure Protection: More Comprehensive Planning Would Enhance the Cybersecurity of Public Safety Entities’ Emerging Technology. GAO-14-125. January 28, 2014. Computer Matching Act: OMB and Selected Agencies Need to Ensure Consistent Implementation. GAO-14-44. January 13, 2014. 
Information Security: Agency Responses to Breaches of Personally Identifiable Information Need to Be More Consistent. GAO-14-34. December 9, 2013. Federal Information Security: Mixed Progress in Implementing Program Components; Improved Metrics Needed to Measure Effectiveness. GAO-13-776. September 26, 2013. Communications Networks: Outcome-Based Measures Would Assist DHS in Assessing Effectiveness of Cybersecurity Efforts. GAO-13-275. April 10, 2013. Information Security: IRS Has Improved Controls but Needs to Resolve Weaknesses. GAO-13-350. March 15, 2013. Cybersecurity: A Better Defined and Implemented National Strategy is Needed to Address Persistent Challenges. GAO-13-462T. March 7, 2013. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. February 14, 2013. Information Security: Federal Communications Commission Needs to Strengthen Controls over Enhanced Secured Network Project. GAO-13-155. January 25, 2013. Information Security: Actions Needed by Census Bureau to Address Weaknesses. GAO-13-63. January 22, 2013. Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. September 18, 2012. Mobile Device Location Data: Additional Federal Actions Could Help Protect Consumer Privacy. GAO-12-903. September 11, 2012. Medical Devices: FDA Should Expand Its Consideration of Information Security for Certain Types of Devices. GAO-12-816. August 31, 2012. Privacy: Federal Law Should Be Updated to Address Changing Technology Landscape. GAO-12-961T. July 31, 2012. Information Security: Environmental Protection Agency Needs to Resolve Weaknesses. GAO-12-696. July 19, 2012. Cybersecurity: Challenges in Securing the Electricity Grid. GAO-12-926T. July 17, 2012. Electronic Warfare: DOD Actions Needed to Strengthen Management and Oversight. GAO-12-479. July 9, 2012. 
Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. June 28, 2012. Prescription Drug Data: HHS Has Issued Health Privacy and Security Regulations but Needs to Improve Guidance and Oversight. GAO-12-605. June 22, 2012. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. April 24, 2012. Management Report: Improvements Needed in SEC’s Internal Control and Accounting Procedure. GAO-12-424R. April 13, 2012. IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. March 23, 2012. Information Security: IRS Needs to Further Enhance Internal Control over Financial Reporting and Taxpayer Data. GAO-12-393. March 16, 2012. Cybersecurity: Challenges in Securing the Modernized Electricity Grid. GAO-12-507T. February 28, 2012. Critical Infrastructure Protection: Cybersecurity Guidance is Available, but More Can Be Done to Promote Its Use. GAO-12-92. December 9, 2011. Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. November 29, 2011. Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. October 6, 2011. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Effective cybersecurity for federal information systems is essential to preventing the loss of resources, the compromise of sensitive information, and the disruption of government operations. Since 1997, GAO has designated federal information security as a government-wide high-risk area, and in 2003 expanded this area to include computerized systems supporting the nation's critical infrastructure. Earlier this year, in GAO's high-risk update, the area was further expanded to include protecting the privacy of personal information that is collected, maintained, and shared by both federal and nonfederal entities. This statement summarizes (1) cyber threats to federal systems, (2) challenges facing federal agencies in securing their systems and information, and (3) government-wide initiatives aimed at improving cybersecurity. In preparing this statement, GAO relied on its previously published and ongoing work in this area. Federal systems face an evolving array of cyber-based threats. These threats can be unintentional—for example, from equipment failure or careless or poorly trained employees; or intentional—targeted or untargeted attacks from criminals, hackers, adversarial nations, or terrorists, among others. Threat actors use a variety of attack techniques that can adversely affect federal information, computers, software, networks, or operations, potentially resulting in the disclosure, alteration, or loss of sensitive information; destruction or disruption of critical systems; or damage to economic and national security. These concerns are further highlighted by recent incidents involving breaches of sensitive data and the sharp increase in information security incidents reported by federal agencies over the last several years, which have risen from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014. GAO has identified a number of challenges federal agencies face in addressing threats to their cybersecurity. 
For example, agencies have been challenged with designing and implementing risk-based cybersecurity programs, as illustrated by 19 of 24 major agencies declaring cybersecurity as a significant deficiency or material weakness for financial reporting purposes. Other challenges include: enhancing oversight of contractors providing IT services, improving security incident response activities, responding to breaches of personal information, and implementing cybersecurity programs at small agencies. Until federal agencies take actions to address these challenges—including implementing the hundreds of recommendations GAO and agency inspectors general have made—federal systems and information will be at an increased risk of compromise from cyber-based attacks and other threats. Several government-wide initiatives are under way to bolster cybersecurity. Personal Identity Verification: The President and the Office of Management and Budget (OMB) directed agencies to issue credentials with enhanced security features to control access to federal facilities and systems. OMB recently reported that only 41 percent of user accounts at 23 civilian agencies had required these credentials to access agency systems. Continuous Diagnostics and Mitigation: This program is to provide agencies with tools for continuously monitoring cybersecurity risks. The Department of State adopted a continuous monitoring program, and GAO reported on the benefits and challenges in implementing the program. National Cybersecurity Protection System: This system is to provide capabilities for monitoring network traffic and detecting and preventing intrusions. GAO has ongoing work reviewing the system's implementation. Preliminary observations indicate that implementation of the intrusion detection and prevention capabilities may be limited and requirements for future capabilities appear to have not been fully defined. 
While these initiatives are intended to improve security, no single technology or tool is sufficient to protect against all cyber threats. Rather, agencies need to employ a multi-layered approach to security that includes well-trained personnel, effective and consistently applied processes, and appropriate technologies. In previous work, GAO and agency inspectors general have made hundreds of recommendations to assist agencies in addressing cybersecurity challenges. GAO has also made recommendations to improve government-wide initiatives.
Generally, DOD’s science and technology (S&T) community (which includes DOD laboratories and testing facilities as well as contractors and academic institutions that support these facilities) conducts research and develops technologies to support military applications, such as satellites or weapon systems. Like the acquisition community in DOD, the S&T community uses research, development, test and evaluation (RDT&E) funds, but the S&T community’s work precedes the acquisition cycle. Weapon system program managers, who receive most of DOD’s RDT&E budget, apply generic technologies to specific systems. Figure 1 highlights activities the S&T community is involved in along with the RDT&E budget categories, or “activities,” which are used to fund these efforts. More details on both are provided in appendixes I and II. The S&T community carries out its work within the first three categories of research and development listed above. DOD has specified that the work within the fourth category—testing and evaluation of prototypes of systems or subsystems in a high fidelity or realistic environment—involves efforts before an acquisition program starts product development. However, according to DOD officials, it is assumed that either the S&T community or an acquisition program may carry out this work, and traditionally, weapon system acquisition programs have taken on technology development within this stage. After this point, any additional development is to be completed as part of a formal acquisition or product development phase under the authority of the weapon system manager and apart from the S&T community. The DOD Director of Defense Research and Engineering (DDR&E) is responsible for the overall direction, quality, and content of the agency’s S&T efforts. Each of the military departments—Army, Air Force, and Navy—has its own S&T programs, as do DOD organizations such as the Defense Advanced Research Projects Agency (DARPA), the Defense Threat Reduction Agency, the Missile Defense Agency (MDA), and the National Reconnaissance Office (NRO).
The DOD Executive Agent for Space—a position held by the Under Secretary of the Air Force, who also serves as the Director of the NRO and as the space milestone decision authority for all space major defense acquisition programs—also influences S&T efforts for space, since the Executive Agent decides whether significant investments in space systems are to move forward in the development process. There are mechanisms within the space community and DOD designed to ensure that S&T efforts are coordinated and focused on achieving broader goals and that redundancy is minimized. Within the space community, a forum called the Space Technology Alliance was established in 1997 to coordinate the development of space technologies with an eye toward achieving the greatest return on investment. Its membership includes the Air Force, the Army, the Navy, MDA, DARPA, and NRO. At the DOD-wide level, there is a Defense Science and Technology Strategy, which lays out goals for DOD-wide S&T efforts based on goals set by higher-level documents, such as the Quadrennial Defense Review. This strategy is used, in turn, to develop a DOD-wide basic research plan, which reflects DOD’s objectives and planned investments for basic research conducted by universities, industry, and laboratories, and a DOD-wide technology area plan, which does the same for applied research and advanced technology development. There is also a Joint Warfighting S&T Plan, which ties S&T projects to priority future joint warfighting capabilities identified by higher-level documents. These overall plans, in turn, are used by DOD laboratories to direct investments in S&T. They are also used by the Office of the Secretary of Defense to provide guidance to the military departments and the defense agencies as they develop and vet their proposed budgets. In addition, DOD puts together teams of outside experts in 12 technology areas to assess whether particular investments across DOD’s S&T community are redundant or unnecessary.
These are known as Technology Area Reviews and Assessments. The teams make recommendations to a board composed of senior DOD S&T officials and chaired by the DDR&E for action to terminate, adjust, or enhance investments so that the S&T program better aligns with planning document guidance. The DDR&E, who reports to the Under Secretary of Defense (Acquisition, Technology and Logistics), has oversight of the RDT&E budget activities used to research and develop new technologies, specifically, RDT&E budget activities 1 (basic research), 2 (applied research), and 3 (advanced technology development). Recently, the DDR&E was given oversight of RDT&E budget activity 4 (advanced component development and prototypes) in an effort to ensure this development had sufficient oversight from the S&T community. The act required DOD to develop a strategy for its space S&T efforts that identified short- and long-term space S&T goals; a process for achieving the goals, including an implementation plan; and a process for assessing progress made toward achieving the goals. The act also required DOD to coordinate its strategy development efforts. The strategy, which had not yet been delivered to the Congress at the time of our review, met four of the nine requirements, and plans are in place to meet the remaining five. We found that the strategy provides a foundation for enhancing coordination among space S&T efforts, since it specifies overall goals and establishes several mechanisms to help senior leaders gauge whether investments are focused on those goals. However, since the strategy has only recently been issued, it is too early to assess whether the direction and processes outlined in the strategy will be effective in supporting and guiding future space S&T efforts.
The strategy identified goals for space S&T along six main areas—assured access to space, responsive space capability, assured space operations, spacecraft technology, information superiority, and S&T workforce. Except for the goal of enhancing the workforce, the strategy laid out short-term goals (within 5 years) and long-term goals (in the year 2020 or beyond). Under spacecraft technology, for example, the strategy identified a short-term goal of on-orbit assessment of satellite servicing and repair and long-term goals of on-orbit assembly, deployment, repair, and upgrades. Under assured space operations, the strategy identified a short-term goal of detecting, identifying, and characterizing natural and man-made objects in space and a long-term goal of complete space situational awareness. According to S&T community officials we spoke with, the mere identification of goals should be useful in helping DOD laboratories and other S&T facilities direct their investments, as this type of guidance had not previously been provided for space. The strategy also establishes several mechanisms for implementation. Primarily, it calls for semiannual space S&T summit meetings to coordinate user expectations, highlight technologies, provide guidance, and establish priorities. DDR&E officials, agency S&T executives, Service Program Executive Officers for Space (who will ultimately transition new capabilities), and major command leadership will attend these meetings. The strategy also establishes an Industry Independent Research and Development coordination conference, where industry and government officials can come together to collaborate in their S&T planning activities. Details on both of these mechanisms are still being worked out, according to the developers of the strategy. The strategy also identifies some tools and measures that will be used to track progress in meeting goals.
These tools and measures include “technology roadmaps,” which identify timelines, milestones, and transition dates for specific projects as well as interdependencies with other projects, and “technology readiness levels” (TRL), an analytical tool for assessing the maturity of a technology. Our prior work has found TRLs to be a valuable decision-making tool since they can presage the likely consequences of incorporating a technology at a given level of maturity into a product development. Appendix III details the criteria for each TRL. In addition, DOD has plans in place to ensure that the strategy is reviewed and revised annually, as necessary, and made publicly available for review by congressional defense committees. Other DOD S&T entities will be provided the strategy to support the planning, programming, and budgeting processes. DOD also plans to include the strategy as an annex to the National Security Space Plan, even though that plan is considered a lower-level tactical document rather than a strategic one. The developers of the strategy worked with a wide range of organizations in establishing goals, measures, and implementation plans. These include military department laboratories, DARPA, intelligence agencies, MDA, the Air Force Space Command, NASA, the Space and Missile Systems Center, the U.S. Strategic Command, the National Security Space Office, and others. Officials within the space community we spoke with commented that it has historically been difficult to gain agreement from these organizations. Even though they all have ties to space, these organizations have different views as to what overall goals the space community should strive for and how they should be achieved.
According to officials within the space community we spoke with, just getting these organizations to work together and to reach agreement was a significant benefit to the community at large, since it helped foster more collaborative working relationships and greater knowledge sharing. In addition to the requirements specified by the act, we found that optimizing space S&T efforts also depends on whether (1) the strategy is clearly linked to other strategies and plans; (2) all DOD space S&T efforts are covered by the strategy; and (3) the strategy identifies metrics beyond TRLs that focus on success. Linkage to other strategies and plans is important for providing clear guidance to S&T laboratories and other organizations making investments, since there are a number of DOD-wide “strategies” for S&T as well as a number of space-related higher-level strategic plans and tactical plans relating to S&T. Coverage of all space S&T efforts is important since S&T is carried out not only by DOD laboratories but also by large acquisition programs and other agencies that have a large stake or investment in space S&T. For example, NRO develops new satellites for the intelligence community and could potentially leverage its S&T efforts with DOD’s. Lastly, having additional measures beyond TRLs is important for gauging the success of the strategy’s implementation as well as the relevance and feasibility of specific progress toward achieving DOD’s overall goals for space. We found that the strategy clearly identified linkages to some, but not all, key plans and strategies, and it did not provide coverage of all S&T efforts or establish additional measures. The space S&T strategy identifies links to higher-level documents, such as the National Security Space Strategy, which sets overall strategic goals for DOD space and identifies capabilities to be pursued, and the Defense S&T Strategy, which provides overall goals for DOD S&T based on higher-level strategic documents.
The strategy also references lower-level plans, including the National Security Space Plan discussed earlier and DOD-wide S&T plans, such as the Basic Research Plan, the Defense Technology Area Plan, and the Joint Warfighting S&T Plan. However, the strategy did not provide links to other documents and assessments that affect the space S&T community. For example, it is unclear how the document will link to DOD’s Space Technology Guide, which describes the current state of space and space-related technology activities underway, including key enabling technologies, that is, those that “must be done right” since they play a pivotal role in making revolutionary advancements in space applications. The guide is being revised and could serve as a useful implementation tool for the new space S&T strategy. It is also unclear how the strategy links to architectures being developed by the National Security Space Office in areas such as responsive space operations, protection for space mission assurance, and integrated intelligence, surveillance, and reconnaissance. These architectures are to define the future desired state for DOD’s space assets. It is important that DOD reflect these other documents in the new space S&T strategy so that the space community clearly understands where the strategy fits in relation to other plans and guides and can ensure that decision making is consistent. Moreover, by establishing closer links with the Space Technology Guide and the architectures under development, DOD may have more avenues to implement its short- and long-term goals. In addition, the Joint Chiefs of Staff did not participate in the development of the strategy, including the offices responsible for DOD’s new Joint Capabilities Integration and Development System (JCIDS).
JCIDS is replacing DOD’s requirements generation process for major acquisitions in an effort to shift the focus to a more capabilities-based approach for determining joint warfighting needs rather than a threat-based approach focused on individual systems and platforms. Under JCIDS, boards composed of high-level DOD civilians and military officials are to identify future capabilities needed in key functional concepts and areas, such as command and control, force application, and battlespace awareness, and to make trade-offs among air, space, land, and sea platforms in doing so. Although the JCIDS officials were not required to participate in developing the strategy, it is important that they do so in the future since their work could have a significant impact on the direction of investments for space S&T projects. The space S&T strategy does not explicitly address technology development efforts within DOD acquisition programs. According to DOD officials, space acquisition programs typically use RDT&E funds from budget activity 4 to mature technology and build the first two satellites. Our analysis showed that space acquisition programs plan to spend as much as $16 billion from fiscal years 2004 through 2009 on budget activity 4. Our annual assessments of space systems have shown that the portion of the $16 billion that is to be spent on maturing technology (which we could not readily separate from the portion spent building the first two satellites) is often being used to carry out activities that should be carried out in an S&T environment. For example, the Transformational Satellite program, which is focused on building advanced communication satellites, entered system development in early 2004 with only one of seven critical technologies matured to the point of being tested in a relevant environment.
Most of the technologies were at TRL 3, meaning analytical studies and some laboratory tests had been conducted, but components had not yet been demonstrated to work together. If DOD does not explicitly include acquisition programs in the space S&T strategy, it will not be able to ensure that the S&T community has oversight of a considerable amount of ongoing technology development. We were not provided access to NRO to discuss how it collaborated with the DDR&E and the Executive Agent for Space in developing the space S&T strategy and how it intended to work with them in implementing it. However, DOD officials stated that NRO had participated in the development of the strategy and would participate in all S&T coordination activities identified by the space S&T strategy. Moreover, according to DOD officials, NRO and other intelligence agencies already participate in some DOD space S&T coordination and review efforts, such as the Space Technology Alliance. In addition, the DDR&E and the DOD Executive Agent for Space are continuing to work on increasing coordination between DOD and the intelligence community. DOD officials also noted that the current Executive Agent for Space also serves as the Director of NRO, which has helped to increase coordination between the intelligence community and DOD. While these efforts may be helping to increase coordination between the DOD and intelligence S&T communities, it is still important to specifically include the DOD intelligence agencies in the strategy itself and to identify protocols that can help foster greater knowledge sharing between the two communities. While the strategy identifies TRLs as a measure for tracking progress, it does not prescribe metrics that focus on the value of S&T projects relative to specific goals or the knowledge being gained from projects. Such metrics would help provide a foundation for assessing progress in achieving strategic goals.
Strategy developers stated that technology development organizations are better suited to develop and use their own specific metrics to measure success because different technologies may require different types of metrics. The developers stated that, by design, the strategy sets the direction but leaves it up to the laboratories and other S&T entities to establish their own metrics. However, they acknowledged that some of the organizations they worked with did not have adequate metrics. It is important that DOD attempt to identify and use metrics that help assess progress, since these will enable DOD to evaluate investments against its short- and long-term goals and make informed investment decisions. Though the new space S&T strategy takes important first steps toward optimizing investments, significant barriers will make it difficult to improve the way S&T efforts are planned, managed, and transitioned into acquisition programs. Some barriers relate specifically to the space community—principally, incomplete RDT&E funding visibility, inadequate testing resources, and workforce deficiencies. These can potentially be addressed through further study, resource shifts, increased management attention, and/or changes to how funding is captured. Other barriers are more systemic and require more difficult management and cultural changes to be made throughout DOD. Nevertheless, until these barriers are largely removed, the impact of a new strategy for space S&T may be limited. The developers of the strategy agreed that the barriers we identified were important and needed to be addressed through efforts beyond the development of the strategy. The current budget process does not readily capture all RDT&E funding for space S&T efforts. In 2001, DOD established a “virtual” Major Force Program for space to increase the visibility of resources allocated for space activities.
This is a programming mechanism that aggregates most space-unique funding by military department and function. However, the mechanism does not align funding with RDT&E budget activities, making it more difficult for DOD to assess the balance of funding among basic research, applied research, and advanced technology development. In working with DOD officials to categorize the virtual Major Force Program by RDT&E budget activity, we identified about $3.8 billion from fiscal years 2004 through 2009 for budget activities 2 (applied research) and 3 (advanced technology development). However, funding for budget activity 1 (basic research) cannot be specifically associated with either space or terrestrial platforms and therefore does not appear in the virtual Major Force Program, which is focused on space-unique funding. Funding in RDT&E budget activities 2 and 3 that is not space unique is also not captured. In addition, some DOD agencies develop space assets but have primary missions that are not associated with space and are therefore not included in the virtual Major Force Program. For example, MDA’s space efforts are not included in the virtual Major Force Program for space even though MDA is developing a new generation of missile tracking satellite systems using advanced infrared sensors. MDA plans to spend about $4.12 billion on this system from fiscal years 2004 through 2009, and a considerable portion of this funding is expected to be used to mature technologies for future satellites. Moreover, DARPA reports its space funding by project, so space S&T efforts cannot be readily identified without additional knowledge of whether these projects are space related. Currently, DARPA funds about $200 million annually in projects that are space unique and considerably more in projects that have both space and terrestrial applications.
Until the virtual Major Force Program or some other tool can capture and categorize, at a minimum, the total amount of RDT&E dollars supporting space-unique S&T projects, DOD will be limited in guiding and directing all space investments. Testing resources for space technologies are on the decline. In particular, funding for testing has decreased, costs to launch experiments have increased, and opportunities have been reduced with the loss of the space shuttle, which had been partially used for DOD-related technology experiments. DOD’s Space Test Program, which is designed to help the S&T community find opportunities to test in space relatively cost-effectively, was funded at $62.3 million in fiscal year 1990 but only $38.6 million in fiscal year 2004 (see fig. 2). And because the cost to launch experiments has increased, the program has been able to launch an average of only seven experiments annually in the past 4 years (see fig. 3). According to Space Test Program officials, demand for testing has not diminished. S&T officials cited dwindling testing resources as a barrier to their efforts. While the strategy states that appropriate resources need to be allocated for on-orbit testing, it does not address how this can or will be done. The workforce needed to carry out S&T for space is facing shortages. DOD officials cited shortages of staff with science and engineering backgrounds and expressed greater concern about the future, since their workforces were reaching retirement age. These concerns were echoed by DOD and industry studies. A 2002 study on the space research and development industrial base conducted by Booz Allen Hamilton, for example, found that over half of the current space R&D workforce is over 45 years old and that the departure of key talent could be especially worrisome in 10 years, as scientists and engineers now in the 45- to 49-year-old group begin to retire from the workforce and are replaced by a smaller pool of less experienced personnel.
In its report, the Space Commission noted that both industry and the U.S. government face substantial shortages of scientists and engineers and that recruitment of new personnel is difficult since the space industry is one of many sectors competing for the limited number of trained scientists and engineers. Booz Allen noted that areas in which either recruitment efforts are difficult or a critical mass is lacking include systems engineering and software engineering. The 2004 National Defense Authorization Act directed the Secretary of Defense to promote the development of space personnel career fields within each of the military departments. However, we recently reported that the military services vary in the extent to which they have identified and implemented initiatives to develop and manage their space cadres. Moreover, the space S&T strategy itself merely lays out goals for the workforce without identifying the actions or resources needed to achieve those goals. In recognizing that more needs to be done to develop, attract, and retain staff with critical skills, the Defense Authorization Act for Fiscal Year 2005 Conference Report directed DOD to develop detailed implementation plans for enhancing the space cadre; to study the ability of academia, industry, and government to educate and train a community of space professionals; and to address the definition and development of key competencies and skill levels in the areas of systems engineering, program management, financial management, operations, and tactics. We believe that S&T skill areas should also be included in the study, given the importance of advancing space technologies and potential future workforce shortages. DOD does not yet have a departmentwide investment strategy that could provide a good foundation for space S&T planning.
While desired capabilities are regularly identified by military commanders and are vetted through strategic reviews, such as the Quadrennial Defense Review, DOD has limited ability to make trades among space, air, land, and sea platforms in deciding how best to meet those capabilities, to document those decisions, and to follow through on them. For example, DOD would like to achieve persistent surveillance to enhance military operations. But it has not been decided how much of the earth needs to be covered and the extent to which air-based assets, such as unmanned reconnaissance aircraft, can achieve this capability versus space-based assets, such as the planned space-based radar system. If DOD conducted thorough and independent analyses of alternatives weighing the pros and cons of using different combinations of both assets and made trade-off decisions that could be enforced across the military services, the S&T community would have a better basis for deciding how much S&T funding should go toward space-based radar technologies versus technologies supporting air platforms. The need for a DOD-wide investment strategy, or one for particular functional areas, has been cited in a variety of recent studies, including a 1999 Defense Science Board study on tactical battlefield communications and a 2004 study by the Center for Strategic and International Studies. The recently established JCIDS process is designed to identify future capabilities by functional area and to make trades between space and other platforms. However, it is unknown how this work will translate into an investment strategy that could be used to enhance S&T planning, and it is unknown how effectively decisions made through JCIDS will be enforced. DOD has also made changes to its Planning, Programming, Budgeting, and Execution process to provide higher-level guidance to the budgeting process.
However, it is also unclear how effectively these changes will be implemented over time and whether they can serve as a foundation for directing science and technology investments. We have previously reported that an S&T environment is more forgiving and less costly than a delivery-oriented acquisition program environment. Events such as test “failures,” new discoveries, and time spent attaining knowledge are considered normal in this environment, while they are seen as negative events in an acquisition program. Moreover, separating technology development from product development enables organizations to align customer expectations with resources and therefore minimize problems that could hurt a program in its design and production phases. Budget realities within DOD, however, make it more advantageous to fund technology development in an acquisition program. Historically, S&T organizations receive about 20 percent of DOD’s research and development budget, while weapon system programs receive about 80 percent. The money going toward S&T is spread over several thousand projects, while the money going toward weapon systems is spread over considerably fewer projects. This “distribution of wealth” makes it easier to finance technology development within an acquisition program. In addition, even though more money is distributed to weapon systems, there is still considerable competition for funding. Such competition makes it advantageous for programs to include in their designs immature technologies that offer significant performance gains. Within the space community, there is also a perception that the length of time it takes to develop space systems (which have only “one shot” at incorporating technologies) demands that DOD push for continual advancement of technologies, even after starting an acquisition program. The impact of acquisition programs taking on technology development that should be done in an S&T environment is considerable.
Our work over the past several decades has shown that this practice invariably leads to unanticipated cost and schedule increases for space and other weapon system programs, since technical problems occurring within acquisition require more time and money to fix. For some large space programs, cost increases have amounted to billions of dollars, and schedules have slipped by years. Aside from removing technology development from a more protective environment and from S&T oversight processes, problematic acquisitions may also rob the S&T community and other acquisition programs of investment dollars. Some actions have been taken recently to address this dilemma. In particular, DOD issued a revised directive in November 2003 expanding the DDR&E’s oversight authority to include efforts to develop advanced components and prototypes—RDT&E budget activity 4. According to DDR&E officials, this authority was intended to keep technology development out of acquisition programs and within the S&T community, but it will take at least 2 years to determine its success. In addition, DOD’s revised acquisition policy for weapon systems encourages programs not to commit to product development until technologies are mature, that is, tested at a minimum in a relevant environment (TRL 6) and preferably in an operational environment (TRL 7). However, in October 2003, DOD also issued a separate acquisition policy for space, which allows technology development to continue into product development up until a decision is made to build the first product. At the time of our review, DOD was revising the space acquisition policy and reexamining how long technology development should continue within an acquisition program.
DOD has taken an initial positive step toward optimizing investments in space S&T projects by establishing short- and long-term goals, which can be used to direct spending by S&T organizations, and by establishing a forum by which senior leaders can assess whether spending is going in the right direction. However, there will be significant challenges ahead for DOD in implementing the strategy. Namely, DOD must maintain the momentum toward greater collaboration that began under this effort. This will not be an easy task, given the varied and competing interests of organizations with a stake in DOD’s space S&T investment and the fact that the strategy does not explicitly cover organizations that fall outside the realm of traditional DOD S&T oversight. Moreover, formidable barriers stand in the way of achieving and measuring progress, including inadequate funding visibility, decreased testing resources, workforce deficiencies, and long-standing incentives that encourage technology development to take place within acquisition programs rather than the S&T community. By using the strategy as a tool for assessing and addressing these challenges, DOD can better position itself to achieve its goals and also strengthen the S&T base supporting space. We recommend that the Secretary of Defense direct (1) the Executive Agent for Space and (2) the Under Secretary of Defense (Acquisition, Technology and Logistics) (to whom the DDR&E reports) to make the following improvements to space S&T strategic planning. Establish protocols and mechanisms for enhancing coordination and knowledge sharing among the DOD S&T community, acquisition programs involved in space, and DOD intelligence agencies. Ensure that the space S&T strategy fully reflects warfighter needs by establishing links between space S&T strategic planning and DOD’s new JCIDS.
In addition, establish links to architectural development processes to ensure that S&T projects align with future technology requirements identified in space-related architectures. Continue to ensure that DOD has the right tools for measuring progress toward its goals for space by identifying metrics that could be used to assess the value of S&T projects relative to strategic goals and the knowledge being gained relative to those goals. Develop plans for addressing barriers to achieving strategic goals for S&T, including deficiencies in RDT&E funding visibility, testing resources, and workforce. A first step would be to include skills critical to S&T in the workforce study identified in the Fiscal Year 2005 Defense Authorization Act Conference Report. In commenting on a draft of this report, DOD concurred with our recommendations and identified actions being taken to address them. (See app. V for DOD’s comments.) We are sending copies of this report to the Secretaries of Defense and the Air Force and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (937) 258-7915. Key contributors to this report were Cristina Chaplain, Maricela Cherveny, Jean Harker, and Rich Horiuchi.

Basic research is systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and of observable facts, without specific applications toward processes or products in mind. It includes all scientific study and experimentation directed toward increasing fundamental knowledge and understanding in those fields of the physical, engineering, environmental, and life sciences related to long-term national security needs. It is farsighted, high-payoff research that provides the basis for technological progress.
Applied research is systematic study to understand the means to meet a recognized and specific need. It is a systematic expansion and application of knowledge to develop useful materials, devices, and systems or methods. Applied research may translate promising basic research into solutions for broadly defined military needs, short of system development. Applied research precedes system-specific technology investigations or development. Advanced technology development includes development of subsystems and components and efforts to integrate them into system prototypes for field experiments and/or tests in a simulated environment. The results of this type of effort are proof of technological feasibility and assessment of subsystem and component operability and producibility rather than the development of hardware for service use. Projects in this category have a direct relevance to identified military needs. Program elements in this category involve pre-acquisition efforts, such as system concept demonstration, joint and service-specific experiments, or technology demonstrations, and generally have technology readiness levels (TRLs) of 4, 5, or 6. Projects in this category do not necessarily lead to subsequent development or procurement phases, but should have the goal of moving out of space science and technology (S&T) and into the acquisition process within the future years defense program.
Advanced component development and prototypes efforts are to occur before an acquisition program starts product development. System development and demonstration consists of newly initiated acquisition programs and includes engineering and manufacturing development tasks aimed at meeting validated requirements prior to full-rate production. Characteristics of this activity involve mature system development, integration, and demonstration to support a production decision. RDT&E management support includes efforts to sustain and/or modernize the installations or operations required for general RDT&E. Such efforts may relate to test ranges, military construction, maintenance support of laboratories, operation and maintenance of test aircraft and ships, and studies and analyses in support of the RDT&E program. Operational system development includes development efforts to upgrade systems that have been fielded or have received approval for full-rate production and anticipate production funding in the current or subsequent fiscal year.

Appendix II: Funding on Technology Development within Science and Technology and Acquisition Communities

Funding going toward a variety of projects and sources.

TRL 1: Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology's basic properties.

TRL 2: Invention begins. Once basic principles are observed, practical applications can be invented.

TRL 3: Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

TRL 4: Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory.

TRL 5: Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components.

TRL 6: Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.

TRL 7: Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, vehicle, or space.

TRL 8: Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

TRL 9: Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. Examples include using the system under operational mission conditions.
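As an illustration only, the nine-level readiness scale described above can be captured as a simple lookup. The one-line summaries and the phase boundaries in the helper below are paraphrases of the appendix definitions (with S&T projects generally sitting at TRLs 4 through 6 before transition to acquisition), not official DOD wording.

```python
# Illustrative sketch: one-line paraphrases of the nine TRL definitions,
# plus a helper reflecting the report's note that advanced technology
# development projects generally have TRLs of 4, 5, or 6.
TRL_SUMMARIES = {
    1: "Basic principles observed; paper studies of basic properties",
    2: "Invention begins; practical applications formulated",
    3: "Active R&D; laboratory validation of separate elements",
    4: "Low-fidelity component integration in a laboratory",
    5: "High-fidelity integration tested in a simulated environment",
    6: "Prototype tested in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "Technology proven in final form under expected conditions",
    9: "System used under operational mission conditions",
}

def typical_community(trl: int) -> str:
    """Rough mapping of a TRL to the budget activities described above."""
    if not 1 <= trl <= 9:
        raise ValueError("TRL must be between 1 and 9")
    if trl <= 3:
        return "basic and applied research"
    if trl <= 6:
        return "advanced technology development (S&T)"
    return "acquisition (advanced component development and beyond)"
```

For example, `typical_community(5)` returns "advanced technology development (S&T)", matching the appendix's placement of TRL 4-6 work within the S&T community.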
The Department of Defense (DOD) is depending heavily on new space-based technologies to support and transform future military operations. Yet there are concerns that efforts to develop technologies for space systems are not tied to strategic goals for space and are not well planned or coordinated. In the National Defense Authorization Act for 2004, the Congress required DOD to develop a space science and technology (S&T) strategy that sets out goals and a process for achieving those goals. The Congress also required GAO to assess this strategy as well as the required coordination process. DOD's new strategy for space S&T met four of the nine requirements set out by the Congress and plans are in place to meet the remaining requirements. These included requirements for setting short- and long-term goals and a process for achieving those goals as well as requirements that focused on ensuring the strategy was developed with laboratories, research components, and other organizations involved in space S&T and ensuring the strategy would be reviewed by appropriate entities and revised periodically. In addition to meeting these requirements, GAO found that development of the strategy itself helped spur collaboration within the DOD space S&T community since it required diverse organizations to come together, share knowledge, and establish agreement on basic goals. Since the strategy has only recently been issued, it is too early to assess whether the direction and processes outlined in the strategy will be effective in supporting and guiding future space S&T efforts. Moreover, DOD officials are still working out the details of some implementation mechanisms. However, in order to better position DOD for successful implementation, GAO believes that the plan should contain stronger linkages to DOD's requirements setting process, identify additional measures for assessing progress in achieving strategic goals, and explicitly cover all efforts related to space S&T. 
Moreover, there are formidable barriers that stand in the way of optimizing DOD's investment in space S&T. DOD does not have complete visibility over all spending related to space S&T, including spending occurring within some S&T organizations and acquisition programs. Without a means to see where funding is being targeted, DOD may not be able to ensure that all spending on technology development is focused on achieving its goals. The S&T community itself may not have resources critical to achieving DOD's goals. In recent years, funding and opportunities for testing for the space S&T community have decreased. In addition, concerns have grown about the adequacy of the space S&T workforce. DOD acquisition programs continue to undertake technology development that should be occurring within an S&T environment, which is more forgiving and less costly than a delivery-oriented acquisition program environment. Until technology development is shifted back into the S&T environment, cost increases resulting from technology problems within acquisition programs may continue to draw resources away from the S&T community. By using the strategy as a tool for assessing and addressing these challenges, DOD can better position itself for achieving its goals and also strengthen the S&T base supporting space.
Our objectives were to determine (1) the overall status of Defense's effort to identify and correct its date-sensitive systems and (2) the appropriateness of Defense's strategy and actions to correct these systems. In conducting our review, we used our Year 2000 Assessment Guide to assess Defense's Year 2000 efforts. This guide addresses common issues affecting most federal agencies and presents a structured approach and a checklist to aid in planning, managing, and evaluating Year 2000 programs. The guidance, which is consistent with Defense's Year 2000 Management Plan, describes five phases—supported by program and project management activities—with each phase representing a major Year 2000 program activity or segment. The phases and a description of each follow.

Awareness - Define the Year 2000 problem and gain executive-level support and sponsorship for a Year 2000 program. Establish a Year 2000 program team and develop an overall strategy. Ensure that everyone in the organization is fully aware of the issue.

Assessment - Assess the Year 2000 impact on the enterprise. Identify core business areas and processes, inventory and analyze systems supporting the core business areas, and prioritize their conversion or replacement. Develop contingency plans to handle data exchange issues, lack of data, and bad data. Identify and secure the necessary resources.

Renovation - Convert, replace, or eliminate selected platforms, applications, databases, and utilities. Modify interfaces.

Validation - Test, verify, and validate converted or replaced platforms, applications, databases, and utilities. Test the performance, functionality, and integration of converted or replaced platforms, applications, databases, utilities, and interfaces in an environment that faithfully represents the operational environment.

Implementation - Implement converted or replaced platforms, applications, databases, utilities, and interfaces. Implement data exchange contingency plans, if necessary.
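The five phases above are sequential, with exit criteria gating each transition. A minimal sketch of how a program office might tally an inventory by phase follows; the record fields and system names are hypothetical illustrations, not Defense's actual reporting format.

```python
from enum import Enum
from collections import Counter

class Phase(Enum):
    """The five phases of the Year 2000 Assessment Guide, in order."""
    AWARENESS = 1
    ASSESSMENT = 2
    RENOVATION = 3
    VALIDATION = 4
    IMPLEMENTATION = 5

def phase_counts(systems):
    """Roll up a system inventory into per-phase totals, the kind of
    summary a quarterly status report would carry."""
    return Counter(s["phase"] for s in systems)

# Hypothetical inventory entries for illustration only:
systems = [
    {"name": "pay system", "mission_critical": True, "phase": Phase.RENOVATION},
    {"name": "supply system", "mission_critical": True, "phase": Phase.VALIDATION},
    {"name": "admin system", "mission_critical": False, "phase": Phase.RENOVATION},
]
print(phase_counts(systems)[Phase.RENOVATION])  # 2
```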
During our review, we concentrated on Defense’s department-level Year 2000 Program, managed by the Assistant Secretary of Defense, Command, Control, Communications and Intelligence (ASD C3I), who is also the Defense Chief Information Officer. To determine how Defense components and their organizations were implementing Defense policy and managing their Year 2000 program efforts, we also reviewed Year 2000 efforts being carried out by Army, Navy, and Air Force headquarters, three Defense agencies, and three central design activities. We also visited a number of other organizations including the Joint Chiefs of Staff, the Global Command Control System (GCCS) Program Office, and the National Security Agency. The scope and methodology of these individual reviews are detailed in the following GAO reports: Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Reports on the Army and Navy Year 2000 programs are being developed. We also reviewed efforts by the department to improve the Defense Integration Support Tools database (DIST), which serves as the Defense inventory of automated information systems and is intended to be used as a tool to help Defense components in correcting Year 2000 date problems. 
The scope and methodology of this work is further detailed in our report, Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). In conducting the individual component reviews and assessing oversight efforts from Defense headquarters, we reviewed and analyzed official memoranda and other documents discussing Defense and component Year 2000 policy and procedures; the June 1996 and February 1997 Defense and component responses to Year 2000 questions from the House Government Operations Committee, Subcommittee on Oversight; the January 1997 Action Plan for Year 2000 Information Technology Compliance; Defense’s May 1997, August 1997, November 1997, and February 1998 component quarterly reports on Year 2000 program status to ASD C3I, and Defense’s subsequent department-level reports to the Office of Management and Budget; Year 2000 status briefings to the Deputy Secretary of Defense by the Military Services, the Joint Chiefs of Staff, DISA, DFAS, and DLA; early drafts and the final April 1997 versions of the Defense Year 2000 Management Plan; and Year 2000 inventory data compiled by ASD C3I, Defense components, and their subcomponents. We also reviewed and monitored Year 2000 Internet homepages maintained by various contractors, government agencies, ASD C3I, DISA, the Army, the Navy, the Air Force, the Marine Corps, and subcomponents; and minutes of federal, Defense, and Air Force Year 2000 Working Groups. In addition, we held discussions with various Defense Department, component and subcomponent officials concerning Year 2000 problems, corrective actions, and related operational and programmatic impacts of the program. We conducted structured interviews on program policies and practices with Defense Department and component-level Year 2000 program officials. 
We also reviewed the output of various Defense computer-generated databases and management information systems related to Year 2000 activities, but did not verify the integrity of the data in these systems. Our audit work on this overview report was conducted from August 1997 through February 1998 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from Defense. The Acting Principal Deputy of the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence provided written comments, which are discussed in the “Agency Comments and Our Evaluation” section and reprinted in appendix II. Most of Defense’s automated information systems and weapon systems computers are vulnerable to the Year 2000 problem, which is rooted in the way dates are recorded, computed, and transmitted in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as “97” representing 1997, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, or 2001 from 1901, etc. As a result of this ambiguity, systems or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. For example, the Defense Logistics Agency’s Standard Automated Material Management System is used to manage Defense’s vast inventory of supplies. Because it uses dates to automatically target items for deletion, the system erroneously targeted more than 90,000 items for deletion before Defense discovered the problem in 1996. In addition, any electronic device that contains a microprocessor or is dependent on a timing sequence may also be vulnerable to Year 2000 problems.
This includes computer hardware, telecommunications equipment, building and base security systems, street lights at military installations, elevators, and medical equipment. For example, Defense components reported to ASD C3I in February 1998 that more than half of the over 730,000 personal computers they had checked had a Year 2000 problem. For Defense, the Year 2000 effort is a significant management challenge because it relies heavily on computers to carry out aspects of all operations, and time for completing Year 2000 fixes is short. For example, the department is responsible for more than 1.5 million computers, 28,000 automated information systems, and 10,000 networks. Its information systems are linked by thousands of interfaces that exchange information within Defense and across organizational and international lines. Successful operation after January 1, 2000, requires that Defense’s systems and all of the systems that they interface with be Year 2000 compliant. Should Defense fail to successfully address the Year 2000 problem in time, its mission-critical operations could be severely degraded or disrupted. For example: In an August 1997 operational exercise, the Global Command Control System failed testing when the date was rolled over to the year 2000. GCCS is deployed at 700 sites worldwide and is used to generate a common operating picture of the battlefield for planning, executing, and managing military operations. The U.S. and its allies, many of whom also use GCCS, would be unable to orchestrate a Desert Storm-type engagement in the year 2000 if the problem is not corrected. The Global Positioning System (GPS) is widely used for aircraft and ship navigation (commercial and military) and for precision targeting and “smart” bombs. The ground control stations use dates to synchronize the signals from the satellites and maintain uplinks to the satellites. 
Failure to correct Year 2000 problems could cause these stations to lose track of satellites and send erroneous information to the millions of users who rely on GPS. The Defense Message System (DMS) is being developed to replace the aging Automated Digital Network (AUTODIN). These systems provide critical capabilities such as secure messaging for important operations such as intelligence gathering, diplomatic communications, and military command and control. Should Year 2000 problems render DMS or AUTODIN inoperable or unreliable, it would be difficult to monitor enemy operations or conduct military engagements. Aircraft and other military equipment could be grounded because the computer systems used to schedule maintenance and track supplies may not work. Defense could incur shortages of vital items needed to sustain military operations and readiness—such as food, fuel, medical supplies, clothing, and spare and repair parts to support its over 1,400 weapons systems. Billions of dollars in payments could be inaccurate because the computer systems used to manage thousands of defense contracts may not correctly process date-related information. Active duty soldiers and military retirees may not get paid if the systems used to make calculations and prepare checks are not repaired in time. Defense plans to resolve its Year 2000 problem using a five-phased process comparable to that in our Year 2000 Assessment Guide. In keeping with its decentralized approach to information technology management, Defense has charged its components with responsibility for making sure that all of their systems correctly process dates. The Assistant Secretary of Defense for Command, Control, Communications and Intelligence (ASD C3I), as the Department’s Chief Information Officer, is responsible for leading Defense efforts to solve the Year 2000 problem. 
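The mechanics of the two-digit date failure described earlier can be sketched in a few lines. The age check below is a hypothetical illustration of the kind of date comparison that broke systems such as DLA's inventory manager, not that system's actual code, and the windowing fix is one common remediation technique rather than the one Defense necessarily chose.

```python
def is_stale(record_yy: int, current_yy: int, threshold: int = 5) -> bool:
    """Legacy-style age check on two-digit years: flag a record whose
    stamp appears 'threshold' or more years old. Works only until the
    century rolls over."""
    return (current_yy - record_yy) >= threshold

# In 1999, a 1990 record is correctly flagged:
print(is_stale(90, 99))   # True
# In 2000 ("00"), the same record appears to be from the future:
print(is_stale(90, 0))    # False -- 0 - 90 = -90, so the check silently fails

def windowed_year(yy: int, pivot: int = 50) -> int:
    """A common remediation, "windowing": two-digit years below the
    pivot are read as 20xx, the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

# With windowing, the comparison is done on four-digit years:
print(windowed_year(0) - windowed_year(90) >= 5)   # True: 2000 - 1990 = 10
```

Note that windowing only defers the ambiguity (here, to years on either side of the pivot), which is why the Assessment Guide treats conversion, replacement, and thorough validation as distinct remediation choices.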
Further, Defense is requiring the components to reprogram existing funds to correct their systems and will provide no additional funds for Year 2000 fixes. As of February 1998, Defense estimated that it would cost $1.9 billion to address its Year 2000 problem, but as discussed later, we question the reliability of this estimate. To increase awareness of the Year 2000 problem and to foster coordination among components, Defense has taken the following actions:

In a November 27, 1995, memo, the ASD C3I alerted components to the problem and called on them to begin corrective actions if they had not already done so.

In December 1996, Defense established the Year 2000 Steering Committee, chaired by the Deputy Secretary of Defense, to oversee progress, provide departmentwide guidance, and make decisions related to the Year 2000 problem. Members, who include the department’s chief information officers, chief financial officer, general counsel, and the acquisition executive, also discuss Year 2000 issues and exchange information on their programs. The Steering Committee began meeting in September 1997.

Defense established a Year 2000 Working Group to support the activities and deliberations of the Steering Committee. The group is chaired by an ASD C3I staff member. Each component has assigned a representative to the group to investigate Year 2000 and cross-functional issues, provide recommendations, identify and share lessons learned, and avoid duplication of effort within Defense.

In October 1996, the ASD C3I began a series of interface workshops intended to better coordinate Year 2000 efforts in various functional areas. These workshops are intended to ensure that information systems and processes that exchange data are assessed and will be Year 2000 compliant. Workshops are conducted for specific functional areas and will continue until Year 2000 problems are resolved.
In April 1997, Defense issued its Year 2000 Management Plan, which formalized its Year 2000 strategy and delineated the activities involved in each phase of its five-phased approach to remediation. The plan also identified responsibilities of the Year 2000 Steering Committee, the Year 2000 Working Group, the ASD C3I, the Defense Information Systems Agency, and the Assistant Secretary of Defense for Intelligence Systems; established reporting requirements, target completion dates, and exit criteria for each of the program phases; provided guidance on estimating costs; and provided a compliance checklist. In May 1997, Defense enlisted its Inspector General to help oversee the department’s Year 2000 program, validate the data on Year 2000 status being reported by each component, identify problem areas, and recommend corrective actions. In July 1997, a Defense Science Board panel was convened to determine whether the department’s strategy, priorities, resources, and funding are sufficient to ensure that all mission-critical systems will be corrected in time. Defense has extensively used the Internet to establish Year 2000 home pages, information libraries, and links to other federal and nongovernment Year 2000 organizations, to enhance awareness and understanding of the Year 2000 problem. In February 1998, Defense reported to OMB that it had 2,915 mission-critical systems and 25,671 nonmission-critical systems. According to Defense, 1,886 mission-critical systems need to be repaired and about half of these are in the renovation phase and a third in the validation phase. In addition, Defense now reports that 15,786 nonmission-critical systems need to be repaired, an increase of over 6,500 systems from the number reported by components in November 1997. Like the mission-critical systems, about half of these nonmission-critical systems are reported to be in the renovation phase.
Defense has taken a long time in the early phases of its Year 2000 program, and its progress in fixing systems has been slow. For example, Defense took 16 months to issue its Year 2000 Management Plan, 1 year to establish the Year 2000 Steering Committee, and an additional 9 months for the Committee to hold its first meeting. In addition, Defense is still assessing systems even though it originally anticipated that this would be done in June 1997. In February 1998, Defense reported that only about 130 mission-critical systems had completed repairs since November 1997. Technology experts, such as the Mitre Corporation and the Gartner Group, estimate that about 70 percent of an organization’s total effort will be required for the renovation, validation, and implementation phases. With less than 20 months remaining and most mission-critical systems in these three phases, Defense is running out of time to make the necessary repairs before the Year 2000 deadline. Specific reported totals for February 1998 are shown in table 1. As discussed later in this report, we question the reliability of this information. Information on personal computers and communications and facility equipment reported by components is provided in table 2. We have separately reported on Year 2000 efforts being carried out by the military services, three Defense agencies, and three central design activities. Our reviews have shown that individual components have also taken positive actions to increase awareness. For example, the Air Force established an Air Force Year 2000 Working Group composed of focal points from each major command, field operating agency, and direct reporting unit. The group has focused on such matters as sharing lessons learned, eliminating duplicative efforts, sharing resources, and tracking component progress. Also, the Air Force and other components, such as DFAS and DLA, each developed written Year 2000 plans that adopted the five-phased approach.
In addition, these components, as well as some other organizations, such as the central design activities we reviewed, established Year 2000 program offices and designated program managers. However, there were systemic weaknesses in component Year 2000 programs. For example, many of the components failed to develop contingency plans during the assessment phase to ensure that critical operations can continue in the face of unforeseen problems or delays. They also were not effectively planning to ensure the availability of needed testing facilities and resources. And they had not fully identified interfaces or communicated their Year 2000 plans to their interface partners. Finally, none of the three military services had developed accurate and reliable cost estimates as their systems were assessed. Our findings with regard to these reviews are noted throughout this report and are detailed in appendix I. In view of the magnitude of the Year 2000 problem, our Assessment Guide recommends that agencies plan and manage the Year 2000 program as a single large information system development effort and promulgate and enforce good management practices on the program and project levels. The guide also recommends that agencies appoint a Year 2000 program manager and establish an agency-level Year 2000 program office. Defense has not supported its decentralized approach to the Year 2000 effort with a program manager or an agency-level Year 2000 program office. Instead of establishing a department-level program office, Defense assigned five full-time staff members in the Office of the ASD C3I to oversee the progress of 23 major components and over 28,000 information systems. The group does not have authority to enforce good management practices, direct resources for special needs, or even to question the validity of the data being reported from components. In addition, this group is not supported by an executive that can focus on the Year 2000 problem full-time. 
For example, the ASD C3I, who has been assigned to lead the effort, is also responsible for (1) providing guidance and oversight for all command, control, communications, and intelligence projects, programs, and systems being acquired by Defense and its components, (2) chairing the Major Automated Information System Review Council, (3) serving as the principal Defense official responsible for software policy and practices, and (4) establishing and implementing information management policy, processes, programs, and standards. Furthermore, Defense has not promulgated and enforced good management practices for Year 2000 corrective efforts. For example, Defense has not provided guidance and authoritative direction needed to ensure that components effectively (1) identify “a system” for purposes of Year 2000 reporting, (2) communicate Year 2000 plans to interface partners, (3) address conflicts between interface partners, and (4) identify common standards and procedures to use in testing. In addition, it has not been validating the information being reported by its components for completeness and accuracy or tracking component progress in completing important Year 2000-related activities, such as contingency planning, acquiring additional test facilities, and prioritizing systems. Because it lacks strong management and oversight controls over Year 2000 remediation efforts, Defense has failed to successfully address a number of steps that are fundamental to correcting mission-critical systems on time. First, Defense does not yet have a complete inventory of systems. Without this, it cannot reliably determine what resources it needs or identify problems requiring greater management attention. Second, Defense has not ensured that mission-critical systems are receiving a higher priority than nonmission-critical systems. 
Third, Defense has neither identified all system interfaces nor ensured that its components are effectively working with their interface partners to correct the interfaces. Fourth, Defense has not ensured that facilities are available for Year 2000-related testing or that component testing requirements are consistent. Fifth, Defense does not know if components have developed contingency plans necessary to ensure that essential mission functions can be performed even if critical mission systems are not corrected in time. Sixth, Defense does not have a reliable estimate of Year 2000 problem correction costs. These weaknesses and their impact on Defense’s Year 2000 remediation efforts are discussed in the following sections. Our Assessment Guide noted that a key part of the assessment phase is to conduct an enterprisewide inventory of information systems for each business area. Such an inventory should include specific information such as the business processes that systems support, the potential impact on those business processes if systems are not fixed on time, and the progress components are making in correcting their systems. This provides the necessary foundation for Year 2000 program planning. Defense, however, does not yet have a complete and accurate inventory of its systems and other equipment needing repair. As a result, it does not have a clear picture of its overall Year 2000 correction efforts and it cannot reliably determine what resources it needs or identify problems that require greater management attention. Defense is requiring its components to submit quarterly Year 2000 progress reports and to input system information into the departmentwide database of automated information systems, known as the Defense Integration Support Tools (DIST) database. 
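The inventory fields the Assessment Guide calls for can be sketched as a minimal record; the field names below are illustrative and are not drawn from the actual DIST schema, and the interface partners shown are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    """Illustrative inventory entry; field names are hypothetical,
    not taken from the DIST database schema."""
    name: str
    business_process: str         # core business area the system supports
    mission_critical: bool
    impact_if_not_fixed: str      # consequence for that business process
    phase: str                    # current remediation phase
    interfaces: list = field(default_factory=list)  # data-exchange partners

record = SystemRecord(
    name="Standard Automated Material Management System",
    business_process="supply inventory management",
    mission_critical=True,
    impact_if_not_fixed="items erroneously targeted for deletion",
    phase="renovation",
    interfaces=["finance system", "depot supply system"],  # hypothetical
)
print(record.mission_critical)  # True
```

Capturing impact and interface data alongside each system is what lets an agency rank remediation work and spot interface partners that have not been contacted, the two gaps the report identifies in Defense's inventory.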
However, many components are still identifying their systems, interfaces, and/or other equipment that may be affected by the Year 2000 problem, such as telecommunications equipment, elevators, and security systems. For example, Defense components are still adding systems to the inventory; the total number of nonmission-critical systems increased by over 3,700 systems between November 1997 and February 1998. The Air Force, the National Reconnaissance Office, the National Security Agency, and the Under Secretary of Defense for Acquisition and Technology have not yet identified other equipment needing repair such as personal computers and telecommunications equipment. Eleven components, including the Defense Information Systems Agency and the Joint Chiefs of Staff, have not yet identified interfaces. In addition, Defense headquarters does not validate the information it is receiving from its components for accuracy or completeness before reporting its status quarterly to the Office of Management and Budget. Similarly, Navy headquarters does not validate the information being reported by its components and system managers. The Army and the Air Force have enlisted their audit agencies to help validate information being reported by components. These audits have identified large discrepancies between information maintained by the services and information maintained by individual system owners. Further, Defense has not provided sufficient guidance to components to ensure they use a common definition of a “system” for reporting purposes. This has further degraded the accuracy of Defense’s inventory reporting. If not precisely defined, one “system” can be interpreted to mean a small application comprised of a few hundred lines of code or the entire collection of systems aboard a major weapon system. At Defense, a variety of interpretations are being used. 
For example, in August 1997, the Air Force’s F-16 and F-15 weapon system programs reported each system aboard an aircraft (86 and 32 systems, respectively) while the C-17 and B-2 programs treated all onboard systems as a single system. Since each system must be corrected individually, aggregating onboard systems into a single system causes Defense’s inventory to be understated. In addition, while some organizations reported smaller applications that download and process information from their major automated information systems, the Defense Logistics Agency (DLA) did not consider such programs to be systems. Because some organizations report these applications and others, like DLA, do not, Defense’s inventory further understates the number of systems that need to be corrected. On February 4, 1998, due to concerns that extensive and detailed information on all of the department’s mission-critical systems was available on the Internet, the ASD C3I classified DIST as “secret”—meaning that anyone requiring access to the database must have a validated security clearance and access to secure computer and communications equipment. DIST was removed from the Internet and will remain unavailable until detailed access and security procedures are developed and put in place. As a result, at the close of our review, DIST was not available for system managers to update the Year 2000 status of their systems or determine the status of interfaces or interfacing systems, and Defense and component Year 2000 officials could not use it as a program management tool. In addition, organizations such as the Navy, which relied on DIST as their only source of inventory information, were directed to create separate databases to meet their quarterly inventory reporting and program management requirements. In commenting on our draft report, Defense officials told us that ASD C3I was in the process of defining options for a new database to replace DIST.
The new inventory, which Defense intends to be unclassified, would not have as much detailed information on systems as DIST; instead, it would only contain Year 2000-relevant data. ASD C3I officials plan to have the new system in place by mid-summer 1998. Until this new system is in place, Defense will lack a central source for inventory and status information on Defense’s Year 2000 program. In addition, the new database will be as ineffective as DIST unless components ensure that the information they submit is accurate and complete and Defense headquarters validates their submissions. According to our Assessment Guide, an important aspect of the assessment phase is prioritizing the remediation of the systems that have the highest impact on an agency’s mission and thus need to be corrected first. This helps an agency ensure that its most vital systems are corrected before systems that do not support the agency’s core business. Defense’s Year 2000 plan states that the highest priority should be given to systems that are critical to warfighting and peacekeeping missions and the safety of individuals. The plan makes each component responsible for prioritizing its own systems. This approach is flawed. Since all components’ functions are not equally essential to Defense’s core missions, Defense cannot define its priorities simply by aggregating components’ priorities. For example, as noted in a Defense Science Board report, Defense has no means of distinguishing between the priority of a video conferencing system listed as mission-critical by one component and a logistics system listed as mission-critical by another component. If it had such a means, the board estimated that the number of “priority mission-critical systems” would be reduced by a factor of 10 or greater. 
Once Defense decides the relative priority of its mission-critical systems, it will still need to ensure that its mission-critical rather than nonmission-critical systems receive focused management attention and resources. However, according to its status reports, Defense is correcting nonmission-critical systems nearly as quickly as its mission-critical systems. In February 1998, it reported that 83 percent of its mission-critical systems being repaired were in the renovation or validation phases versus about 80 percent of its nonmission-critical systems. Defense systems interface with each other as well as with systems belonging to contractors, other federal agencies, and international organizations. For example, supply orders originating from the military services are filled and payments to contractors are made through automated interfaces. Therefore, it is essential that Defense agencies ensure that external noncompliant systems do not introduce or propagate Year 2000-related errors to compliant Defense systems and that interfaces function after January 1, 2000. Defense has held a series of Interface Assessment Workshops for individual functional areas such as finance, logistics, and intelligence in order to raise awareness of the interface problem. While these workshops have helped to acquaint high-level managers with the nature and extent of interface problems, much more effort is needed to assist system managers in making corrections. First, as noted earlier, the department does not know how many interfaces exist among its systems. Seven of 28 components (25 percent), including the Joint Chiefs of Staff, did not report interface information on their February 1998 inventory.
In addition, four components, including the National Security Agency and the National Reconnaissance Office, reported their interfaces as “to be determined.” Three additional components—the Navy, the Defense Intelligence Agency, and Air Force Intelligence—reported interfaces, but had not yet determined whether they were affected by the Year 2000 problem. The longer it takes Defense to identify all interfaces and determine which ones need to be corrected, the greater the risk will be that it will discover too late in its Year 2000 effort that systems will not be able to accommodate the Year 2000 changes from a connecting system. Second, Defense has not provided sufficiently definitive guidance to establish (1) who is responsible for correcting interfaces and (2) how conflicts—for example, who should fund corrective actions—between interface partners will be resolved. Such guidance is necessary since interface problems will likely cut across command, functional, and component lines and may involve contractors, other government agencies, and international organizations. Finally, in order for interfaces to work, both ends need to know what to send and what to expect. This requires formal documentation of the details of data formats, the timing of format changes, and similar matters. While the April 1997 Management Plan directed components to prepare written agreements with their interface partners, Defense has not provided guidance to its components on what the content of interface agreements should be. Components have been slow in responding to the Management Plan’s direction. For example, at the time of our review, none of the components we reviewed had completed preparation of all required interface agreements. The Army Year 2000 project office reported that its components were behind in their efforts to do so. Defense components have concurred with our recommendations to date concerning the need to develop interface agreements.
However, they are still not being uniformly required across the department. Until these agreements are prepared, Defense components will run the risk that key interfaces will not work. The validation (testing) phase of the Year 2000 effort is expected to be the most expensive and time-consuming. Experts estimate that it will account for 45 percent of the entire effort. As Defense’s Year 2000 Management Plan notes, the testing phase will be complex since “components must not only test Year 2000 compliance of individual applications, but also the complex interactions between scores of converted or replaced computer platforms, operating systems, utilities, databases, and interfaces.” In some instances, the plan notes, Defense components may not be able to shut down their production systems for testing, and may have to operate parallel systems implemented on a Year 2000 test facility. Also, because over 17,500 systems will require testing prior to the March 1999 testing deadline, it will be important to plan for the use of testing resources carefully. To mitigate risks associated with testing, our Year 2000 Assessment Guide calls on agencies to develop validation strategies and test plans, and to ensure that resources, such as facilities and tools, are available to perform adequate testing. Validation strategies are developed at an organization-wide level to ensure that common testing requirements are used by all locations. Our Assessment Guide further notes that this planning should begin in the assessment phase since agencies may need over a year to adequately validate and test converted or replaced systems for Year 2000 compliance. Defense lacks an overall validation strategy that specifies uniform criteria and processes which components should use in testing their systems. Defense’s Management Plan includes a checklist for certifying Year 2000 compliance, but does not require components to use it.
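One element such a uniform validation strategy would standardize is boundary testing of two-digit-year handling. The sketch below, in Python purely for illustration, shows the kind of check involved; the pivot value of 50 and the function name are assumptions, not figures from the report:

```python
def window_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year using a fixed pivot window: values below
    the pivot are read as 20xx, all others as 19xx.  (A pivot of 50 is a
    hypothetical choice for illustration.)"""
    return 2000 + yy if yy < pivot else 1900 + yy

# Boundary cases a common test suite would exercise at every location.
assert window_year(99) == 1999   # late-century dates stay in the 1900s
assert window_year(0) == 2000    # the rollover year itself
assert window_year(49) == 2049   # last year inside the window
assert window_year(50) == 1950   # first year outside it
```

A departmentwide strategy would fix exactly such conventions—the pivot, the boundary cases, the pass criteria—so that every component and interface partner tests the same way.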
Likewise, a number of major components—including DFAS, the Navy, the Air Force, and the Army—have not developed such strategies, nor were they ensuring that the organizations reporting to them did so. As a result, Defense runs the risk that systems and interfaces will not be thoroughly and carefully tested. Another important aspect of planning for the test phase is to define requirements for test facilities. As our Year 2000 Assessment Guide notes, agencies may have to acquire additional facilities in order to provide an adequate testing environment. Because of the length and complexity of the testing phase and the potential that facilities may not be available, our guide recommends that this planning begin in the assessment phase. We found that the Navy, the Air Force, and the Army had not yet begun this planning. The Defense Information Systems Agency, which operates the Department’s central computer centers, has only recently begun assessing what the demand for its facilities will be. The longer Defense waits to begin assessing the demand for and the adequacy of test facilities, the less time it will have to acquire additional facilities or otherwise ensure that all mission-critical systems can be tested before the Year 2000 deadline. To mitigate the risk that Year 2000-related problems will disrupt operations, Defense’s Year 2000 Management Plan and our Year 2000 Assessment Guide recommend that agencies perform risk assessments and develop realistic contingency plans for critical systems and activities. Recent OMB directives require quarterly reporting of contingency planning activities. Contingency plans are important because they identify the manual or other fallback procedures to be employed should systems miss their Year 2000 deadline or fail unexpectedly in operation. Contingency plans also define the specific conditions that will cause their activation.
Since many of its systems are critical to mission performance and Defense has fallen behind its own Year 2000 schedule, Defense must develop contingency plans now for essential mission functions. However, although Defense’s Year 2000 Management Plan identifies the need for contingency planning to ensure continuity of core processes, Defense is not routinely tracking the status of contingency plans or ensuring that its components are developing them. The need for oversight is serious since many of the components we reviewed were not developing contingency plans until we recommended that they do so. For example:

- At the time of our review of their programs, DLA and the Naval Supply Systems Command (NAVSUP) had no contingency plans because they expected that all of their systems would be completed by the Year 2000 deadline and would function correctly. This assumption is not well founded because even if systems are replaced or corrected on time, there is no guarantee that they will operate correctly. In addition, in the event that replacement schedules slip, components may not have enough time to renovate, test, and implement a legacy system or identify other alternatives, such as manual procedures or outsourcing. For example, one system used to help manage DLA’s mission-critical $5-billion-a-year fuel commodity operations had already slipped 4 to 5 months behind its October 1998 scheduled replacement date. Both DLA and NAVSUP began developing contingency plans after we raised these concerns.

- The Air Force was not tracking the extent to which contingency plans were being developed by its components for mission-critical systems, and, at the time of our review, five system program offices we surveyed had not prepared such plans. In response to our report, the Air Force began ensuring that contingency plans were developed through Air Force Audit Agency spot checks and management reviews. It also plans to develop contingency plans at crisis response centers as well as incorporate Year 2000 scenarios into existing contingency plans.

- DFAS was preparing contingency plans for noncompliant systems to be replaced before the Year 2000 deadline. However, it was not requiring contingency plans for systems being renovated. We noted that DFAS faced a risk that systems being renovated may not be corrected by January 1, 2000, and may not operate correctly even if completed. In response, DFAS began developing contingency strategies for these systems.

In January 1998, the military services briefed Defense’s Year 2000 Steering Committee on the status of contingency planning for mission-critical systems. The Army and Air Force reported that they had completed contingency plans for 49 percent and 30 percent of their mission-critical systems, respectively. The Navy is requiring contingency plans only for systems planned to be renovated after June 30, 1998, or implemented after January 1, 1999. Using these criteria and the Navy’s current schedule, less than 2 percent of the Navy’s 812 mission-critical systems are required to have contingency plans. As Defense’s Year 2000 Management Plan and our Assessment Guide state, a primary purpose of the assessment phase is to determine the size and scope of the Year 2000 problem and to prioritize remediation activities. Reliable cost estimates are needed to ensure that adequate resources will be available for Year 2000 activities. Once reliable estimates have been established, they can provide a baseline to measure program progress and to improve future program management. In addition, because Defense is funding Year 2000 efforts from existing budgets, reliable Year 2000 cost estimates are needed to assess the impact on future information technology budgets.
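The Navy criterion just described amounts to a simple date filter over the system inventory. A minimal sketch, in Python for illustration only (the system names, dates, and field layout are hypothetical; only the two cutoff dates come from the report):

```python
from datetime import date

# Cutoffs as reported for the Navy: plans are required only for systems
# renovated after June 30, 1998, or implemented after January 1, 1999.
RENOVATION_CUTOFF = date(1998, 6, 30)
IMPLEMENTATION_CUTOFF = date(1999, 1, 1)

def plan_required(renovation_done: date, implementation_done: date) -> bool:
    """Apply the Navy's schedule-based criterion to one system."""
    return (renovation_done > RENOVATION_CUTOFF
            or implementation_done > IMPLEMENTATION_CUTOFF)

# Hypothetical schedule entries: (name, renovation date, implementation date)
systems = [
    ("supply", date(1998, 5, 1), date(1998, 11, 1)),
    ("payroll", date(1998, 9, 15), date(1999, 3, 1)),
]
required = [name for name, r, i in systems if plan_required(r, i)]
# Only "payroll" falls past either cutoff, so only it needs a plan --
# illustrating how a schedule-based criterion leaves most systems uncovered
# if their schedules happen to fall before the cutoffs.
```

The point of the sketch is the criterion's shape: it keys contingency planning to the schedule rather than to the risk that a finished system still fails in operation.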
Defense relies on its components to estimate the cost of their Year 2000 efforts, but it has not required that they use a consistent estimating methodology or that they update the estimates when more reliable cost information becomes available during the assessment phase. Defense merely sums up the cost estimates it receives from components to produce the estimate it provides to OMB. As a result, Defense’s Year 2000 cost estimate is neither reliable nor complete, and does not provide a useful management tool for assessing the impact of the Year 2000 problem or determining if sufficient resources will be available to complete its fixes. To make a first rough estimate, Defense suggested that components use a cost formula derived from the Gartner Group and the Mitre Corporation, which recommends multiplying the number of lines of code by $1.10 for automated information systems and by $8 for weapons systems. This rough estimate was to be refined by conducting a detailed cost analysis based on more than 30 cost factors as the component progressed through the assessment phase and learned more about its systems and the resources that would be required to fix them. These include such factors as: the age of systems, the skill and expertise of in-house programmers, the strategy that the agency is pursuing (strategies that involve keeping the two-digit code, for example, may be much less expensive than those that involve changing the two-digit code to a four-digit code), the clarity and completeness of documentation on systems, the availability of source code, and the programming language used by the systems. However, Defense did not require that components use these factors in preparing their quarterly cost estimates or that they refine their rough estimates as more reliable information became available during assessment. 
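The rough-order-of-magnitude formula described above is a single multiplication. A sketch of it, using the rates the report cites from the Gartner Group and Mitre figures (the function and key names are hypothetical):

```python
# Dollars per line of code, per the Gartner/Mitre figures cited in the report
RATE_PER_LINE = {"ais": 1.10, "weapon": 8.00}

def rough_y2k_estimate(lines_of_code: int, system_type: str) -> float:
    """First-cut Year 2000 repair cost: lines of code times a flat rate.
    This is only the starting point the guidance expected components to
    refine with detailed, assessment-phase cost factors."""
    return lines_of_code * RATE_PER_LINE[system_type]

# e.g., a hypothetical 1-million-line automated information system
estimate = rough_y2k_estimate(1_000_000, "ais")   # roughly $1.1 million
```

The more than 30 assessment-phase factors listed above (system age, remediation strategy, documentation quality, and so on) are exactly what this flat-rate formula ignores, which is why the guidance treated it as a first approximation only.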
The difference between an estimate based on a more reliable analysis of data collected during the assessment phase and an estimate based on the Gartner formula and similar methodologies can be significant. For example, in August 1996, the Army’s Logistics Systems Support Center used the Gartner formula to project Year 2000 costs for its huge Commodity Command Standard System. Based on this formula, it estimated that it would cost $8.4 million to correct the system. In April 1997, the Center conducted a detailed cost analysis based on data collected during the assessment phase and found that Year 2000 costs would actually be about $12.4 million—an increase of nearly 50 percent over the original estimate. While Defense’s Management Plan suggested that components revise their cost estimates as more reliable information becomes available, it has not ensured that components are doing so. While some components may have refined their estimates with each report to the Office of the Secretary of Defense (OSD), the Army, the Air Force, and the Navy continue to provide only rough order-of-magnitude estimates using the Gartner formula or other formulas provided by contractors, or have omitted significant cost items from their estimates. For example:

- The Army’s November 1997 estimate of $429 million did not include costs for 36 systems.

- The Navy’s November 1997 estimate of $293 million did not include cost information from about 95 percent of the program managers in the Naval Sea Systems Command. Naval Air Systems Command also indicated that many program managers were not reporting costs. The Navy estimate also did not include an estimated $15 million associated with fixing telephone switches.

- The Air Force’s November 1997 estimate did not include the cost of fixing telephone switches, which was estimated to be between $70 million and $90 million.
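Using the Commodity Command Standard System figures reported above, the gap between the rough formula and the assessment-phase analysis can be computed directly (a sketch of the arithmetic only):

```python
rough_estimate = 8.4e6      # Gartner-formula projection, August 1996
detailed_estimate = 12.4e6  # assessment-phase cost analysis, April 1997

increase = (detailed_estimate - rough_estimate) / rough_estimate
print(f"increase over the rough estimate: {increase:.0%}")  # close to 50 percent
```

A single system growing by roughly half suggests why rolling up unrefined formula-based figures across hundreds of systems yields an unreliable departmentwide total.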
Until Defense has a complete and reliable cost estimate, it will not be able to effectively allocate resources, track progress, make trade-off decisions, or resolve funding disputes. Defense operations hinge on the department’s ability to successfully fix its mission-critical computer systems before the Year 2000 deadline. Yet Defense has left it up to its components to solve the problem themselves without establishing a project office, led by a full-time top-level executive, to (1) enforce good management practices, (2) prioritize systems across the department based on criticality to core missions, (3) provide guidance on areas that components should be addressing consistently and ensure that they are doing so, (4) direct resources for special needs, and (5) ensure that data being reported to the Office of Management and Budget and the Congress are accurate. As a result, Defense lacks complete and reliable information on systems, interfaces, and costs. It is allowing nonmission-critical systems to be corrected even though only a small percentage of mission-critical systems have been completed. It lacks assurance that facilities will be available for testing. And it has not ensured that essential mission functions can be performed if critical mission systems are not corrected in time. Until Defense supports remediation efforts with adequate centralized program management and oversight, its mission-critical operations may well be severely degraded or disrupted as a result of the Year 2000 problem. We recommend that the Secretary of Defense:

- Establish a strong department-level program office led by an executive whose full-time job is to effectively manage and oversee the Department’s Year 2000 efforts. The office should, at a minimum, have sufficient authority to enforce good management practices, direct resources to specific problem areas, and ensure the validity of data being reported by components on such things as progress, contingency planning, and testing.

- Expedite efforts to establish a comprehensive, accurate departmentwide inventory of systems, interfaces, and other equipment needing repair. Require components to validate the accuracy of data being reported to OSD. Provide guidance that clearly defines a “system” for Year 2000 reporting purposes.

- Clearly define criteria and an objective process for prioritizing systems for repair based on their mission-criticality, and ensure that the “most” mission-critical systems will be repaired first.

- Ensure that system interfaces are adequately addressed by (1) taking inventory and assigning clear responsibility for each, (2) tracking progress in Year 2000 problem resolution, (3) requiring interface agreement documentation, and (4) providing guidance on the content of interface agreements and who should fund corrective actions.

- Develop an overall, departmentwide testing strategy and a plan for ensuring that adequate resources, such as test facilities and tools, are available to perform necessary testing. Ensure that the testing strategy specifies the common criteria and processes that components should use in testing their systems.

- Require components to develop contingency plans to ensure that essential operations and functions can be performed even if mission-critical systems are not corrected in time or fail due to Year 2000 problems. Track component progress in completing these plans.

- Prepare complete and accurate Year 2000 cost estimates so that the department can assess the full impact of the Year 2000 problem, ensure adequate resources are available, and effectively make trade-off decisions to ensure that funds are properly allocated.

In reviewing a draft of this report, the Acting Principal Deputy of the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence concurred with all of our recommendations to improve Defense’s Year 2000 program.
Specifically, Defense agreed with the need to establish a strong central Year 2000 program office and has appointed a full-time executive to lead the department’s efforts to solve the Year 2000 challenge. Defense stated that this office will have sufficient authority to enforce good management practices. In addition, Defense stated that the DOD Year 2000 Management Plan is being revised to (1) define criteria and processes for prioritizing systems, (2) formalize guidance on identification and documentation of interfaces, (3) establish common testing conditions and dates for attaining Year 2000 compliance, and (4) provide for development of contingency plans in accordance with GAO’s recently issued guidance. The revised Management Plan is scheduled to be issued in April 1998. However, in concurring with several of our recommendations, Defense did not indicate how it would implement them. Instead, it reiterated current practices which to date have not resulted in reliable and complete inventory, progress, and cost data. For example, Defense concurred with our recommendation to establish an accurate departmentwide inventory of its systems and a clear definition of the term “system.” But, it then said that components will continue to validate the accuracy of data submitted for its new database using audit agencies and other independent validation techniques recommended by OMB and claimed that it had already clearly defined the term “system” in a March 1997 memorandum from ASD C3I and in DOD’s Dictionary of Military and Associated Terms. Components have not submitted accurate data to Defense to date, and these actions do not indicate that Defense will validate these data to ensure their accuracy in the future as we recommended. Further, the documents cited do not define the term “system” effectively and, as we reported, components have interpreted the term inconsistently. 
Likewise, Defense concurred with our recommendation that the Secretary of Defense prepare complete and accurate Year 2000 cost estimates, but then cited current cost estimating guidance and procedures, and noted that it had “requested the Components to improve their estimated costs by using actual figures as they became available.” It added that the Secretary of Defense, through the Year 2000 Steering Committee, will use these estimates to assess the impact of Year 2000 problems, make trade-off decisions, and ensure adequate resources are available. Again, these actions describe current practices which have resulted in incomplete and inaccurate cost estimates. Despite requests from Defense that they refine their cost analyses and prepare complete cost estimates, components continued to provide unreliable and incomplete cost data. Until Defense takes additional action to implement our recommendation to require and ensure that components use a more reliable methodology and report complete costs, it will not have the reliable information it needs to allocate resources, track progress, make trade-off decisions, and resolve funding disputes. We are providing copies of this letter to the Ranking Minority Members of the Senate Committee on Governmental Affairs and the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight; the Chairmen and Ranking Minority members of the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs; the Subcommittee on Defense, Senate Committee on Appropriations; the Senate Committee on Armed Services; the Subcommittee on National Security, House Committee on Appropriations; and the House Committee on National Security. 
We are also sending copies to the Deputy Secretary of Defense; the Acting Assistant Secretary of Defense for Command, Control, Communications and Intelligence; the Director of the Office of Management and Budget; the Assistant to the President for Year 2000; and other interested parties. Copies will be made available to others on request. If you have any questions on matters discussed in this letter, please call me at (202) 512-6240. Other major contributors to this report are listed in appendix III. The following is GAO’s comment on the Department of Defense’s March 27, 1998, letter. 1. Defense’s additional comments have been incorporated as appropriate but have not been included in the report.

George L. Jones, Senior Information Systems Analyst
Denice M. Millett, Senior Evaluator
Michael W. Buell, Staff Evaluator
Karen S. Sifford, Staff Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) program for solving the year 2000 computer systems problem, focusing on the: (1) overall status of DOD's effort to identify and correct its date-sensitive systems; and (2) appropriateness of DOD's strategy and actions to correct its year 2000 problems. GAO noted that: (1) DOD relies on computer systems for some aspect of all of its operations, including strategic and tactical operations, sophisticated weaponry, intelligence, surveillance and security efforts, and routine business functions, such as financial management, personnel, logistics, and contract management; (2) failure to successfully address the year 2000 problem in time could severely degrade or disrupt any of DOD's mission-critical operations; (3) DOD has taken many positive actions to increase awareness, promote sharing of information, and encourage components to make year 2000 remediation efforts a high priority; (4) however, its progress in fixing systems has been slow; (5) in addition, DOD lacks key management and oversight controls to enforce good management practices, direct resources, and establish a complete picture of its progress in fixing systems; (6) as a result, DOD lacks complete and reliable information on systems, interfaces, other equipment needing repair, and the cost of its correction efforts; (7) it is spending limited resources fixing nonmission-critical systems even though most mission-critical systems have not been corrected; (8) it has also increased the risk that: (a) year 2000 errors will be propagated from one organization's systems to another's; (b) all systems and interfaces will not be thoroughly and carefully tested; and (c) components will not be prepared should their systems miss the year 2000 deadline or fail unexpectedly in operation; (9) each one of these problems seriously endangers DOD's chances of successfully meeting the year 2000 deadline for mission-critical systems; and (10)
together, they make failure of at least some mission-critical systems and the operations they support almost certain unless corrective actions are taken.
To help Iraq assume responsibility for sustaining U.S. reconstruction efforts, U.S. agencies are implementing programs to build the capacity of Iraq’s central and provincial governments, including State’s PRDC program to strengthen the capacity of Iraqi provincial governments to deliver essential services such as water and electricity and USAID’s NCD program to assist the Iraqi government in improving the administrative capacity of several ministries and executive offices through training. In 2005, the United States created PRDCs to give the provinces a voice in deciding how to spend U.S. reconstruction funds for Iraq. The PRDCs comprise members of Iraq’s Provincial Councils, representatives of the governor, and the Director Generals of Iraq’s Central Ministries. Each PRDC’s role is to identify needs within its province, prioritize those needs, and develop a list of projects to address them. The primary U.S. objective of the PRDC program is not reconstruction but strengthening the capacity of Iraqi provincial governments to develop and implement essential service projects, according to State. Congress appropriates funds to State for the PRDC program, and USACE’s Gulf Region Division (GRD) implements the program. ITAO coordinates and oversees the selection process for specific projects and, according to State, provides overall program management, while GRD provides project management. The Office of Provincial Affairs within the U.S. Embassy Baghdad provides policy guidance and support to the Provincial Reconstruction Teams (PRT) program. The PRTs serve as the coordinating body for this funding, assisting PRDCs with identifying, prioritizing, and developing project request packages. Each PRDC creates a prioritized list of reconstruction projects that address provincial needs; these are then discussed in a public forum. The project list is submitted to the Provincial Council for review and approval.
Approved projects are forwarded to the PRT provincial program manager, who reviews them to ensure they meet U.S. government policies and legal requirements. The PRT provincial program manager forwards the list to ITAO for review and approval. The National Embassy Team reviews and approves projects. GRD scopes, estimates, bids, and awards project contracts. As the implementer, GRD assists ITAO by providing program and project oversight, which includes awarding contracts and providing quality assurance and quality control. The PRT engineer provides teaching, mentoring, training, guidance, and support for the PRDCs in preparing scopes of work, bills of quantities, estimates, and project nomination forms. The program funds small-scale projects proposed by the PRDCs, including water and electric plants, roads, bridges, schools, health clinics, airports, and fire stations. For fiscal year 2007 funds, the PRDC program shifted focus to provide funds to help provincial governments sustain and plan essential service projects. Figure 1 is an example of a PRDC project we visited in Iraq in November 2008. Through three interagency agreements between State and USACE, State obligated $700 million for fiscal years 2006 and 2007 to reimburse USACE for costs incurred and awards made for the PRDC program. Specifically, State obligated $315 million and $385 million for fiscal years 2006 and 2007, respectively, under the agreements. As table 1 shows, for fiscal year 2006, USACE had entered into contracts to implement the program totaling about $259 million, and about 135 out of 213 projects had been completed by April 2009, according to GRD. For fiscal year 2007, USACE had entered into contracts amounting to about $207 million, and about 40 out of 185 projects had been completed by April 2009, according to GRD. In July 2006, USAID created the National Capacity Development (NCD) program to build the capacity of Iraq’s central government. 
The program focuses on building the skills and capabilities of several Iraqi government executive offices, such as the Prime Minister’s Office, and 10 key Iraqi ministries, such as the Ministries of Electricity, Oil, and Water. Key tasks include (1) raising the skill levels of Iraqi public managers in project management, fiscal management, human resources, budgeting, and information technology; (2) advising key ministries in strategy development, program planning, and capacity building; and (3) expanding the Iraqi government’s training capacity at its national training center and in the provinces. Additional activities for the program included providing equipment, furniture, and support to develop Iraq’s training centers, and providing overseas scholarships to Iraqi civil servants. To help reform the Iraqi government’s procurement system, USAID purchased equipment for administrative tribunal courtrooms at the Ministry of Planning and Development Cooperation, which rules on disputes over Iraqi government contract awards (see figure 2). USAID is responsible for the NCD program and has hired a contractor, Management Systems International, to implement the program. The initial contract was for $165 million for a 3-year period. Various modifications increased the program funding, changed the scope of work, and extended the completion date to January 31, 2011. A modification made in September 2008 increased the total contract amount to $339 million. According to USAID’s financial management system, as of April 2009, the program had obligated $259 million, of which about $152 million had been disbursed. State’s PRDC program has management control weaknesses in organization, monitoring, and communication that hinder the achievement of its goal of building provincial government capacity. 
First, State’s organization of the program does not clearly define who is responsible for the overall management of the program, and the multistep process for implementing the program adds to this ambiguity. Second, State lacks a performance monitoring system that measures progress toward building the capacity of provincial governments. Third, State’s guidelines and policies have changed frequently, as has the direction of the program, but State did not fully communicate or consult with program implementers about these changes. Finally, USACE labor costs for the program are not always supported by adequate documentation, increasing the risk that USACE’s requests to State for reimbursement of labor costs may be overstated or understated. Management control standards require a well-managed and properly structured organization that clearly delineates authority and responsibility. In addition, management control standards call for qualified staff in place without excessive personnel turnover in key functions, such as program management, to implement proper management controls. State’s PRDC program has multiple entities responsible for managing parts of its complicated, multistep process to approve and implement projects. However, no single program manager was clearly responsible for overall management of the program until May 2009, when State designated one in response to our findings. The PRDC process for approving and implementing projects includes at least 7 entities and 7 steps involving project development, project management, and project execution. Figure 3 illustrates the PRDC program’s complex organizational structure and process as reported by State. Until May 2009, no single entity was accountable for the program in its entirety or responsible for ensuring that the program’s objectives were met. 
For instance, although ITAO has a PRDC program manager, State indicated in response to an October 2008 report on ESF in Iraq that ITAO coordinates and oversees project selection, not the overall program. The other entities also do not have responsibility for managing and ensuring that the overall program objectives are met. For instance, the Provincial Reconstruction Teams (PRT), through the PRT provincial program manager and PRT engineer, focus on helping the PRDCs identify, prioritize, and develop project proposals. The PRDCs create prioritized lists of reconstruction projects that are submitted to the Provincial Council for review and approval. According to a State document, the PRT provincial program manager guides the process; however, a PRT provincial program manager is located at each PRT and therefore guides the process for that individual PRT and does not manage the entire process. ITAO and the National Embassy Team review and approve projects and then forward these to the program implementer, GRD. GRD focuses on scoping projects, estimating their costs, receiving bids, awarding projects, and providing quality assurance and quality control. As a result, no entity was responsible for managing the overall program and ensuring that its goals were achieved. Without an overall program manager, no one oversaw the entire program process or had overall responsibility for addressing systemic problems such as coordination issues. For instance, although coordination between U.S. and Iraqi officials is essential to building provincial capacity, it remains one of the program’s key challenges. In our sample of 40 PRDC projects, we found that about 16 projects had problems coordinating with local Iraqi authorities. For example, determining and verifying land ownership is a major challenge in Iraq and is one of the most common causes for delays in awarding project contracts. 
In another instance, a $1.5 million potable water network to service Baghdad’s Mansour district lost nearly 7 months waiting for the necessary building permits and test results. Other coordination challenges have also resulted in delays, cost increases, and project terminations. For example, on a $1.4 million Baghdad water network project, a local government office did not follow established guidance in requiring certain technical tests to be performed and rejected subsequent test results because they were conducted by an independent laboratory. The municipality suspended all work at the site and threatened to arrest personnel who continued to work. With about $400,000 already spent, the project is in the process of being terminated. In May 2009, in response to our findings, State designated ITAO as the program manager. Both ITAO and GRD have staffing challenges. Our review of ITAO documents found that ITAO’s PRDC point of contact, who coordinates and oversees the selection process for specific projects, changed six times since December 2006. Specifically, from January 2008 to September 2008, ITAO had three different PRDC managers. According to GRD, these frequent changes in ITAO’s PRDC managers contributed to inconsistent information about program direction. For example, in January 2008, when the PRDC program shifted from building infrastructure to helping provincial governments sustain and plan essential services, ITAO did not consult GRD about developing a new program management plan until September 2008. Gulf Region South officials stated that they have had difficulty obtaining staff with the skills and training to manage reconstruction projects. To address staffing shortages, Gulf Region South hired Iraqi associates to inspect projects in the field. In addition, Iraqi associates have been hired to contribute to a trained local work force, build local infrastructure, and ensure continued project sustainability, according to USACE officials. 
During our site visits, we observed that the Iraqi engineers were able to visit the sites more frequently, and because they spoke Arabic, they could interact with the Iraqi contractors. Senior GRD management stated that Iraqi workers have been essential, particularly when security conditions deteriorated. Standards for management control require performance measures and indicators to monitor progress in achieving program objectives. The PRDC program has no performance measurement system to assess whether the program is achieving its objective of helping build provincial government capacity to deliver essential services, according to State officials. According to an October 2008 State report, PRDC program accomplishments are measured by the number of projects completed and awarded and the amount of funds disbursed. However, this is a measure of State’s ability to obtain and use U.S. funds. The indicator does not provide information about the extent to which U.S. efforts build the capacity of provincial governments to deliver essential services, particularly since the program is funded entirely with U.S. funds. Further PRDC guidance states that the program’s capacity building will be demonstrated when operations and maintenance services and provincial planning projects are identified and programmed into the provincial budgets for 2008. However, at the time of our review, ITAO and the Office of Provincial Affairs (OPA) could not provide us with this information. PRT engineers are responsible for assisting PRDC officials by teaching, mentoring, training, guiding, and supporting the preparation of all project scopes of work, bills of quantities, estimates, and project nomination forms. During site visits, we found that PRT engineers conducted training through the local GRD district offices to help Iraqi contractors prepare technical contract proposals. Similarly, OPA provided anecdotal examples to show how PRT engineers are building capacity in two provinces. 
However, these examples cannot be reliably used to track progress and outcomes in building capacity. Although there is no system to monitor program outcomes, GRD tracks project implementation through the Resident Management System. For example, based on a random sample of 40 projects, we found that 16 projects had missed their milestones; 9 projects were on or ahead of schedule; 6 projects had construction cancelled or terminated; and 6 projects had been completed and accepted by the U.S. government for transfer to the Iraqi government. The most common challenges cited in these projects were contractor inefficiency, poor security, and coordination with local Iraqi authorities. For over two-thirds of the 40 projects we analyzed, project records described numerous problems with contractors’ work. The challenges of conducting reconstruction work in a conflict environment hindered PRDC project execution in nearly half of the projects in our sample. For example, according to officials, dangerous security conditions in Maysan province prevented regional office U.S. personnel from visiting any projects in that province for an 18-month period ending in September 2008. In December 2008, a senior official at the U.S. embassy in Baghdad said the embassy was creating an official process for obtaining the status of all U.S.-funded reconstruction projects with problems that would include a review of the project schedule, budget, project status, and project quality. However, these indicators will not monitor or assess U.S. efforts to build the capacity of provincial government officials to deliver essential services. In commenting on a draft of this report, State agreed with the need to develop outcome measures of effectiveness. State proposed measures to track the length of the project development, procurement, contracting, execution, and oversight process to see if it improves over time, as well as the quality of the projects completed. 
Other possible measures included the degree of constituent input in the project selection process, the degree of transparency and anti-corruption measures in the contracting process, the separation of powers between the executive and legislative branches, and the rate at which ministries follow through to budget for and sustain the projects. Effective management controls call for the design and implementation of policies and procedures to ensure that management directives are carried out and that information is communicated clearly and in a timely fashion. ITAO issued PRDC program guidelines through action memorandums that specified funding allocations, types of projects that would be approved, priorities, and the general process for project approval. However, the guidelines were revised or clarified six times between August 2006 and July 2008. The program implementer—GRD—expressed concern about these frequent changes, particularly the lack of communication and consultation. In addition, according to a senior GRD official, ITAO had not communicated adequately and consistently about the guidelines and changes. For example, in January 2008, State shifted the focus of the PRDC program from building infrastructure to maintaining and sustaining projects. According to ITAO officials, State emphasized sustaining local operations and maintenance services of U.S.-procured infrastructure, strategic planning for the infrastructure projects, and capacity building for provincial governments’ professional staff. However, according to GRD officials, ITAO waited until September 2008 to consult with GRD on developing a new management plan to implement these changes. As a result, according to GRD documents, in 2008, the GRD district offices developed 109 project proposals with a value of $158 million. According to a senior GRD official, staff wasted resources developing these infrastructure project proposals because these projects were no longer the focus of the program. 
In commenting on a draft of this report, State indicated that GRD was involved in planning at an early date and that GRD received copies of all changes to the PRDC program via memos. However, according to GRD officials, in July 2008, ITAO directed the National Embassy Team to approve projects for award without any GRD involvement in the approval process. As of May 2009, GRD officials stated that GRD’s limited involvement in the strategic planning process for the PRDC program has hindered its ability to understand shifts in program focus and to realign resources efficiently and effectively to meet the needs of the State Department and the program. Standards for internal control call for federal agencies to retain evidence that transactions and events are appropriately classified and promptly recorded throughout the life cycle of each transaction, including final classification in summary records from which reports are prepared. Our tests of USACE’s established controls to help ensure financial accountability for the PRDC program identified deficiencies in the maintenance of adequate documentation to support labor costs that USACE charged to PRDC projects. Inadequate documentation highlights a control weakness that may cause USACE’s reported costs for specific PRDC projects to be inaccurate. Further, USACE’s requests to State for reimbursement of labor costs may be overstated or understated. Our review of time and attendance records for 152 USACE employees, totaling about $2.5 million in net labor charges to 36 PRDC projects, disclosed that about 26 percent of these charges did not have adequate supporting documentation. Our review disclosed that timekeepers’ files did not contain complete time and attendance records. 
USACE procedures require timekeepers in Iraq to send time and attendance documentation to the Administrative Personnel Processing Office (APPO) in Winchester, Virginia, for data entry into the USACE financial management system and retention in APPO files. We also found instances where the hours on the time and attendance records that were located did not agree with the hours entered into the USACE financial management system. However, neither we nor the APPO staff could readily determine the reason for these inconsistencies. Furthermore, a November 2008 APPO review of time and attendance practices in Iraq also identified problems regarding the accuracy of labor hour charges to PRDC projects. For example, the review disclosed that employee supervisors did not routinely verify that hours entered into the financial management system agreed with hours on original time and attendance records. USACE officials also stated that, although certain managers were authorized to correct labor charges that were incorrectly charged to a project in a prior pay period, evidence of the correction was not required to be maintained in the APPO timekeepers’ files because APPO timekeepers were not responsible for recording these corrections in the financial management system. Additionally, an APPO official stated that, in 2006 and 2007, APPO timekeepers sometimes discarded original time and attendance records when corrected time and attendance records were subsequently received to avoid having two or more time and attendance records for the same pay period. Although the official explained that the current procedure is to attach corrected copies of time and attendance documentation to the original documentation, these procedures have not been formally documented. Discarding original time and attendance records precludes the ability to determine why corrections were made to the original entry. 
GRD program managers use financial reports derived, in part, from time and attendance records and adjustments to monitor a project’s financial status and labor resources expended. In addition, USACE uses time and attendance data to bill State for reimbursement of PRDC labor costs in accordance with an interagency agreement between the USACE and State that provides funding to USACE to implement the PRDC program. The APPO November 2008 review noted that some steps were being initiated to help improve the documentation of time and attendance transactions, which, if successfully implemented, should help improve time and attendance internal controls. However, as of January 2009, APPO informed us that time and attendance reporting problems continue to be identified. To help improve timekeeping, in April 2009, the GRD Finance and Accounting Officer informed us that the timekeeping function would be moved from APPO to the GRD office in Winchester, Virginia, by May 2009. According to the GRD Finance and Accounting Officer, the intent is to hire a team of three people to work exclusively on timekeeping matters with personnel in Iraq to increase timekeeping accuracy. APPO had been tasked with preparing personnel for deployment and travel in addition to timekeeping. We were also informed that the timekeepers are being trained on labor costing and the importance of proper labor charging. In addition, the GRD timekeepers will be responsible for documenting timekeeping problems and informing personnel in Iraq about needed improvements. The goal of USAID’s NCD program is to build the planning and administrative capacity of Iraqi ministers and officials. The organization of the program clearly lays out roles and responsibilities of key players in training and consulting with Iraqi ministries and identifies the reporting chain up to the individual responsible for the overall program. 
In response to a 2008 USAID Inspector General report, USAID scaled back the NCD program objective of improving ministry service delivery to more achievable objectives such as improving the ministries’ administrative systems and budget execution. For 2008, NCD monitors and tracks both outputs and outcomes for its new objectives and provides regular reporting on the results. USAID’s policies and procedures provide guidance for implementing the program by laying out explicit expectations for contract modifications and task orders in USAID’s automated directive system. Nevertheless, we found that the controls for documenting program expenditures are weak: invoices totaling about $17 million did not have confirmation of receipt. The organization and structure of the NCD program are clearly laid out, and related guidance details the roles of the key players. The units responsible for training Iraqi officials and working with the ministries are clearly identified, and the chain of command is unambiguous. Figure 4 shows the organization of the NCD program. As the figure shows, the director of the Capacity Building Office (CBO) in Iraq has overall responsibility for the program. The deputy director coordinates the program and acts as the liaison with the Iraqi government and the U.S. agencies in Iraq, according to NCD guidance. The deputy director directs and coordinates NCD activities through the contractor chief of party, who is responsible for training Iraqi civil servants, consulting at the ministries, developing Iraqi training centers, and completing progress reports as requested by USAID. To carry out these activities, the program relies on Arabic-speaking employees for all aspects of its operations. As of February 2009, NCD program staff comprised 278 contract staff, of whom about 70 percent were Iraqi nationals and the rest expatriates from the United States and third countries. 
These staff work within Iraqi ministries, including the Ministries of Planning, Health, Agriculture, Oil, and Electricity. The staff also conducts training at three U.S. compounds in Baghdad and assists in developing the training programs at Iraqi provincial training centers in Basra, Ramadi, and Hilla. According to USAID, early in its implementation, the program faced the challenge of recruiting qualified Arabic-speaking instructors and training advisors who would reside in Baghdad under the security conditions present in Iraq in early 2007. To address this challenge, the program emphasized hiring qualified Iraqis to teach these courses, such as at the Karada compound we visited in October 2008 (see figure 5). By the end of its second year, the NCD program trained more than 25,000 Iraqi civil servants in project management, accounting, and risk analysis, according to a USAID report. USAID uses a results framework with indicators that measure program outputs and outcomes to monitor progress toward program objectives. Through this framework, USAID reviews program activities, makes corrections to identified problems, and responds to audit reports. During the second year of the NCD program, USAID revised its indicators in response to a November 2008 USAID Inspector General’s report stating that the NCD program did not have indicators to measure the program’s impact in improving key ministries’ delivery of core services. For example, USAID measured many of its output goals such as training Iraqi employees, establishing regional training centers, and awarding scholarships. However, there were no outcome indicators to measure the achievement of USAID’s goal to build the capacity of key Iraqi ministries to deliver core services. As a result, USAID narrowed its overall program indicators and stated it would begin to track the budget execution rates of Iraqi ministries such as the percentage of ministries’ approved budget that is spent. 
For example, USAID will now monitor the value of capital projects approved, the number of capital projects approved by the Ministry of Planning, and the rate of capital projects implemented. In 2007, only 3 of the 20 NCD program accomplishment indicators measured outcomes; the rest were output—or numeric—goals, such as the number of civil servants trained or the number of scholarships awarded. However, in 2008, the NCD program emphasized the measurement of results and included additional outcome indicators in its accomplishment reporting. For instance, 14 out of the 24 indicators measured outcomes, or actual improvements. Some of the outcome indicators for 2008 included the extent to which trainees were using their new skills at work and saw related improvements at their office; whether the ministries were implementing improved fiscal information technology systems, based on the USAID contractor’s recommendations; and the extent to which ministries and the Iraqi government’s public administration training center, the National Center for Consultation and Management Development, were initiating their own training. Table 2 provides examples of these indicators and our reason for considering these to be outcome indicators. USAID has been compiling data on the 2008 overall results for (1) strengthening public management skills, (2) establishing more effective administrative systems, and (3) expanding the Iraqi government’s training capacity. In October 2008, USAID reported that some target measures were exceeded, some were not achieved, and several were on track. For instance, the program did not achieve its target for significant improvement reported by graduates within their ministry or unit. During our site visits in October 2008, we observed, and participants told us, that many of the trainees were lower-level employees who lacked the authority to implement their new skills within the units to which they returned after training. 
USAID reported that, for 6 of 7 indicators to establish more effective administrative systems, ministries either were on track to meet or had exceeded targeted results. USAID stated it had achieved or exceeded targeted results for 8 out of 10 indicators to expand the Iraqi government’s capacity to train its own officials. However, the results were not complete at the time of our review, so we could not independently assess them. Policies and procedures for the NCD program are documented and accessible. USAID programs are required to follow the mandatory guidance in the automated directives system (ADS), which includes USAID internal policy and required procedures as well as external regulations. Agency employees must adhere to these policy directives and required procedures. For example, ADS chapter 596 gives management responsibility for internal controls and provides the policy and required procedures to improve the accountability and effectiveness of USAID programs by establishing, assessing, and reporting on internal controls. In addition, ADS chapter 253 provides guidance for designing and implementing training and capacity building programs. The chapter includes guidance on assigning primary responsibility for the program; host country responsibilities for the program; and requirements for data collection, reporting, and monitoring of the program and participants. The NCD contract and its contract modifications provide specific guidance and expectations for implementing the program. For example, the first task order implementing the contract called for six major tasks, a list of responsibilities assigned to each task, and deadlines ranging from 30 days to 24 months for these assignments. 
The assignments included assisting the Iraqi government in developing its own capacity-building strategy, training government of Iraq employees at specific ministries, introducing standard training modules for regional training centers, and sending at least 50 Iraqis abroad to work on degrees or certificates related to public administration. Subsequent modifications added tasks based on the capacity-building needs of the ministries and Iraqi government. For example, a September 2007 modification expanded ministerial capacity development teams and placed project management units in key ministries and institutions. Results were communicated regularly; the contractor was required to provide weekly, monthly, quarterly, and annual reports on program implementation. We reviewed annual, quarterly, and monthly reports for 2006 through 2008 and some of the weekly reports. These reports documented program statistics on training, consulting with ministries, provision of equipment, and other activities. The reports also described challenges to implementing the program. For example, in 2007, the security situation in Iraq, including the inability to visit ministries and send Iraqis to training, was a major challenge to implementing the program. Other challenges included high staff turnover and the difficulty of acquiring skilled staff fluent in Arabic, which USAID has addressed by hiring local Iraqis. Longer-term challenges included dealing with extensive capacity needs at the ministries, while identifying ministries and individuals willing to implement reforms. Under USAID policy, an approving officer, usually the cognizant technical officer, performs administrative approval—providing written evidence, prior to payment, that USAID received the services or goods specified on the contractor’s invoice—and fills out a checklist to support that approval. 
The checklist is on the USAID Administrative Approval Form and Checklist and includes six different options for supporting administrative approval. Between April 2007 and June 2008, USAID received 18 invoices from the NCD contractor totaling about $79 million. We found that the cognizant technical officer did not check off the option indicating receipt of contractor services on the form for administrative approval for 6 of these 18 invoices, totaling about $17 million of the $79 million. Instead, the cognizant technical officer indicated that acceptance of the contractor’s services was based on meeting(s) between the officer and contractor personnel during which the contractor’s performance was discussed. Thus, the cognizant technical officer appeared to rely on the contractor’s statements that the billed services were provided to authorize payment of the bill. Although the Administrative Approval Form and Checklist provides an option for an officer to confirm the receipt of goods or services by marking the appropriate place, the instructions do not require confirmation of receipt. For example, the instructions direct the officer to mark as many reasons on the form as possible to justify acceptance of the contractor’s services and state that at least one reason on the checklist must be checked for administrative approval of the contractor’s invoice. Not requiring confirmation of receipt invalidates this internal control and could circumvent regulatory and GAO internal control standards, which require confirmation of receipt prior to payment. The instructions for the form do not require an explanation if confirmation of receipt is not possible. According to USAID officials, there is a reasonable likelihood that security risks may arise that make confirming the receipt of services impossible due to prohibitive personal danger or security protection costs. 
If these conditions are not documented, USAID managers cannot readily monitor the extent to which invoices were paid without confirmation of receipt or take other measures to ensure that the government's interest is protected. USAID policy also requires verification of the pricing and computations on a contractor's invoice and assigns this responsibility to the financial voucher examiner. We found that USAID did not have reasonable assurance that the voucher examiner completed this function because many of the contractor's invoices showed no indication that the examiner had performed any verifications, causing a lapse in the internal control to detect any errors in contractor billings. Furthermore, USAID policy did not specifically address the documentation that voucher examiners should use to support their analysis of contractor invoices. However, the USAID Deputy Controller in Iraq stated that, as of November 2008, voucher examiners began using a form to document their analysis of contractor invoices. It is too early to determine whether this form has been an effective control to prevent improper payments to the contractor. We observed that these invoices are complex and that the process of verifying pricing and computations against the terms of the contract is unwieldy. This increases the risk that the voucher examiner's review may not prevent improper payments. For example, two of the contractor's invoices were more than 100 pages long and listed numerous labor-hour costs and other direct costs, including a variety of footnotes and adjustments. Invoices listed the number of hours that specific individuals worked on the NCD program and numerous labor hours for administration. In addition, the cost of equipment purchases was difficult, if not impossible, to identify. Furthermore, the Defense Contract Audit Agency identified problems with certain NCD contractor costs billed to USAID. 
Iraq committed to sustaining U.S.-funded projects and programs and sharing in their costs in several official documents and the International Compact for Iraq. For example, we found that Iraqi government officials had signed letters agreeing to sustain many of the PRDC projects in our sample. The documents, however, do not specify dollar amounts or other resources to do this. For the NCD program, two Iraqi ministries signed memorandums of understanding for support of the program and eight other ministries developed capacity-building strategies that incorporated NCD materials. Iraq also demonstrated its commitment to U.S. efforts by expanding the NCD training program and starting its own training programs in some ministries. Several ministries also made 2009 budget commitments to continue the NCD training and provide equipment for training centers, among other efforts. These amounts are due to be expended during 2009. However, our past work has found that, although Iraq budgets for investment and sustainment activities, it may not spend the budgeted funds. For the PRDC program, 16 of the 40 projects we reviewed had indications that the Iraqi government agreed to sustain the projects; however, none of the records we examined included specific funding or resource commitments that would allow a check against actual Iraqi budgets and expenditures. For fiscal year 2007 project funds, U.S. guidance required that all PRDC project proposals include a letter of sustainment from the appropriate Iraqi government office. However, in response to our request, ITAO provided only 10 of the 12 letters of sustainment for fiscal year 2007 projects in our sample. For fiscal year 2006, 6 of the 28 projects in our sample had evidence of Iraqi government commitment to sustain the projects. Letters of sustainment indicate Iraqi government approval for the design and construction of the project and an agreement to accept staff, operate, and maintain the project. 
For example, on a 2007 PRDC project to convert river water to drinking water, the Director General of the Wasit Water Directorate signed a letter agreeing to staff, operate, and maintain the water plant, once completed. For a 2007 PRDC project to build four electrical feeders, the Director General of Electricity agreed to prepare and submit an annual budget to the Ministry of Electricity to operate and maintain the project. The Iraqi government has agreed to support some NCD efforts. For example, the Ministry of Electricity pledged to provide ongoing support for NCD efforts; sustain projects funded in whole or part by NCD; and provide staff to NCD, including finance and accounting specialists, power generation engineers, maintenance engineers, and others. Commitments from other ministries have been demonstrated by their actions to develop capacity-building plans with NCD assistance. In October 2007, we reported that not all U.S. capacity development efforts were clearly linked to the needs and priorities identified by the Iraqis, which may reduce the sustainability of U.S.-funded efforts. USAID has attempted to identify Iraqi government needs and obtain official government commitments by helping the ministries develop their own capacity-building plans. As of January 2009, eight ministries, plus the Prime Minister's Office and the Council of Ministers' Secretariat, had developed capacity development plans with NCD assistance. Based on ministry self-assessments that identify Iraqi needs and priorities, the plans emphasized the Iraqi ministries partnering with the U.S. government on budget execution, training in project management, strategic planning, human resources, and fiscal management. For example, the Prime Minister's Office, as a result of developing its capacity development plan in 2007, developed a new organizational structure, job descriptions, and a strategy for the office. 
Iraqi ministries have also demonstrated commitment to NCD's train-the-trainers program, and five ministries have started their own training programs. USAID stated that the Iraqi government is increasingly taking over USAID's training. For example, according to USAID's contractor, as of May 2008, Iraqi government staff who had graduated from USAID's courses and received additional train-the-trainer courses taught more than half of all monthly courses. Iraqi government staff trained by USAID contractors taught 14,720 (over 54 percent) of 27,127 course participants trained through September 2008. Moreover, as of September 2008, the Iraqi government's National Center for Consultancy and Management Development delivers all train-the-trainer and core public administration courses in project management, budgeting, procurement, and strategic planning. The center has developed a tool to assess trainers and established a monitoring and evaluation unit to assess the impact of its training programs. State's PRDC 2006 guidance states that U.S. assistance will be matching in nature to ensure commitment and investment from the provincial government. PRDC program guidelines for fiscal year 2007 ESF funds further state that the development of capacity building would be demonstrated when projects are identified and programmed into Iraq's 2008 provincial budgets. In its 2007 International Compact, Iraq stated that any new development programs should be co-financed by Iraq to leverage Iraq's own resources and provide a framework for mutual accountability. ITAO officials could not provide support that specific PRDC projects have been co-financed or that the Iraqi government budgets contained operating and maintenance funds for these projects. In August 2008, GRD officials stated that Iraq had not provided any cost-share funds for implementing fiscal year 2006 and fiscal year 2007 PRDC projects to date. 
Moreover, according to a GRD official, the Iraqis have been unable to meet their commitments to sustain PRDC projects because the central ministries have not budgeted sufficient funds to sustain projects. For example, according to this GRD official, the Director General for roads in the Ministry of Construction and Housing budgeted only $100 million in fiscal year 2008 for building and maintaining all roads in Iraq—an inadequate amount for road construction and maintenance. Iraq’s past inability to spend its investment budget also raises concerns about whether Iraq is providing funds to sustain PRDC projects. In March 2009, we reported that Iraq’s inability to spend its resources, particularly on investment activities, limits the government’s efforts to further economic development and deliver essential services to the Iraqi people. Although Iraq’s total expenditures grew from 2005 through 2007, Iraq was unable to spend all of its budgeted funds, especially for investment activities such as maintenance of roads, bridges, vehicles, buildings, water and electricity installations, and weapons. In 2007, Iraq spent 28 percent of its $12 billion total investment budget. In 2008, it spent 39 percent of its $24 billion investment budget. In 2008, the Iraqi government spent $949 million, or about 2 percent of its total 2008 expenditures, for the maintenance of Iraqi- and U.S.-funded investments. In 2008, the provinces spent $2,054 million, or 22 percent of total investment expenditures of $9,167 million. The 2008 investment budget for the provinces, including the supplemental budget, was $6,470 million. The provinces spent 32 percent of their investment budget. There were sufficient unspent budget funds for the provinces to provide matching funds. The spending limits imposed by the central ministries in Baghdad also limit the ability of Iraq’s provincial governments to sustain projects. 
According to a PRT official, the Director General for water in Basra has a $2,500 per item requisition limit. For items higher than that amount, he needs approval from the Ministry of Water in Baghdad. Most items cost more than $2,500; a filter for the Garma water purification facility, for example, costs $25,000. Obtaining approval from the central ministry in Baghdad takes time, and often that ministry will approve some parts but not others, which severely limits the Director General's ability to live up to his commitment to sustain U.S.-funded water projects in Basra. For the NCD program, Iraqi government ministries and executive offices have pledged about $95 million in cost sharing for specific NCD-related activities and procurements, according to USAID. Some of these commitments are included in Iraqi government budgets, according to a USAID report. For example, the Ministry of Agriculture budgeted $5.8 million to construct a strategic planning center for training and capacity building and allocated $5.1 million in its operating budget to run the center. The report also states that the Iraqi Ministry of Planning and Development Cooperation allocated $6 million in its 2009 operational budget to fund postgraduate studies. Moreover, the Council of Ministers Secretariat allocated $1 million to create an executive training department and dedicated office space to a full-time training center. USAID also reported that various ministries and executive offices have agreed to contribute about 41 percent (an estimated $2.2 million) of the total cost of funding equipment, facilities, and training at four geographic information system centers and 16 training centers. A successful U.S. transition from Iraq depends on the Iraqi government's commitments to programs that build government capacity, such as the PRDC and NCD programs. 
However, given the management weaknesses in State's PRDC program, including the lack of an overall program manager, the lack of measures of effectiveness, and an insufficient focus on capacity building, it is unclear if the program is achieving its objectives. Although the Iraqi government has agreed to maintain PRDC projects, based on prior experience, the Iraqi government's commitment to spend resources on U.S.-funded reconstruction projects may not be realized. Also, USACE financial controls for the timekeeping process did not ensure adequate documentation, although USACE introduced initiatives to correct this. The management controls of USAID's NCD program support the objective of capacity building, and the program, including its outcome indicators and guidance, is focused on this objective. However, the program has weaknesses in the financial controls for confirming the receipt of goods and services and review of contractor invoices. In addition, USAID will need to ensure that the Iraqi government follows through on commitments to sustain USAID NCD programs. To help these programs achieve their objectives of building the capacity of the provincial governments and central ministries, we recommend that, for the PRDC program, (1) the Secretary of State ensure that management control weaknesses are addressed in the PRDC program by designating an overall program manager; developing outcome measures of effectiveness; and documenting actual Iraqi government budget allocations and expenditures for fiscal year 2007 PRDC projects; and (2) the Secretary of the Army ensure that USACE initiatives to improve the financial controls for the timekeeping process correct the deficiencies discussed in this report. 
We recommend that, for the NCD program, the USAID Administrator (1) revise USAID policy and procedures for confirming receipt of goods or services applicable to the NCD program in Iraq to include (a) clarifying that confirmation of receipt of goods or services must be noted separately from the administrative approval or (b) documenting reasons precluding actual confirmation such as prohibitive personal danger or security protection costs; (2) ensure that USAID/Iraq initiatives to improve the documentation of the voucher examiner's required review of contractor invoices correct the deficiencies discussed in this report; and (3) document actual Iraqi government budget allocations and expenditures to ensure funds committed to support NCD activities are expended. State, USACE, and USAID provided written comments on a draft of this report, which we have reprinted in appendixes II, III, and IV. The USACE also provided technical comments, which we have incorporated where appropriate. State agreed with our recommendation to address management control weaknesses in the PRDC program. State commented that it had clarified and confirmed ITAO's overall responsibility for the PRDC program and that a program manager has now been designated. State also accepted our recommendation to develop outcome measures of effectiveness for PRDC and clarified that its reporting of projects approved and funds disbursed in the 2207 report to Congress was not intended to be a measure of PRDC's success. State further agreed to report on Iraqi government contributions to PRDC projects in its next cost matching report to Congress. USACE agreed with our draft recommendation to strengthen its financial controls for payroll and provided additional information about its initiatives to improve its timekeeping process. 
We subsequently refined our recommendation to state that the Secretary of the Army ensure that USACE initiatives to improve the timekeeping process correct the deficiencies discussed in the report. USACE agreed to this recommendation. In technical comments, USACE noted substantial discrepancies—amounting to millions of dollars and over 100 projects—between its financial and project data and ITAO's data that we included in our draft report. In reconciling the conflicting data, ITAO agreed to revise its April 3, 2009, Essential Indicators Report to reflect the corrected data. USAID commented that it is taking under advisement our recommendation to require confirmation of receipt of goods and services and that it has already implemented our recommendation to document the voucher examiner's review of contract invoices. The agency agreed to implement our recommendation to document the government of Iraq's commitments and expenditures associated with the NCD program. We are sending copies of this report to interested congressional committees and the Secretary of State, the Secretary of Defense, the Secretary of the Army, and the Administrator for USAID. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To assess whether the management controls of the Provincial Reconstruction Development Committee (PRDC) program and the National Capacity Development (NCD) program support the achievement of the programs' objectives, we interviewed agency officials and analyzed project contracts, program files, agency reports, guidelines, financial and programmatic databases, and assessments for both the PRDC and NCD programs. 
We examined key management controls, including (1) a clear organizational structure with adequate managerial capacity and financial systems that establish an effective control environment; (2) policies and procedures that ensure management directives are carried out and communication of those policies and procedures; and (3) monitoring systems that track progress toward desired outcomes. We interviewed officials at the Department of State (State), U.S. Agency for International Development (USAID), and the U.S. Army Corps of Engineers (USACE) in Washington, D.C., and Iraq. In Iraq, we also met with officials at State's Iraq Transition Assistance Office (ITAO), which oversees and coordinates aspects of the PRDC program, the Office of Provincial Affairs, and Gulf Regional Division (GRD) and its district offices. We reviewed a sample of 40 PRDC construction projects using Economic Support Funds (ESF) from fiscal years 2006 and 2007 and examined all activities in the NCD program. We conducted field visits to several PRDC projects and to a training center, where we observed Iraqi officials in NCD training activities. We used information from the USACE's Resident Management System database to identify the status of projects and specific challenges. We assessed the reliability of this database by (1) interviewing agency officials and contractors about data quality control procedures, and (2) checking the data by visiting sites. We determined that the data were sufficiently reliable for the purposes of this report. To develop the sample of 45 PRDC projects, we used the population of 290 PRDC projects as of June 30, 2008, that were initiated with fiscal year 2006 and 2007 ESF. We selected a dollar unit sample to test financial controls over PRDC project data in the USACE's Financial Management System. We used a 90-percent confidence level, an expected error of 0, and a tolerable error of 5 percent for the sample plan. 
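The sample plan parameters are enough to reproduce the sample size. As a hedged sketch (the report does not show its calculation, but the standard attribute sampling formula with zero expected errors yields the same 45-project sample):

```python
import math

# Sample plan parameters from the report
confidence = 0.90       # 90-percent confidence level
tolerable_error = 0.05  # tolerable error rate of 5 percent

# With an expected error of 0, the smallest sample size n is the one for which
# the probability of observing zero errors, when the true error rate equals the
# tolerable rate, falls to (1 - confidence) or below:
#   (1 - tolerable_error) ** n <= 1 - confidence
n = math.ceil(math.log(1 - confidence) / math.log(1 - tolerable_error))
print(n)  # 45, matching the 45 PRDC projects sampled
```

Because the expected error is 0, even a single control failure in the sample means the plan's criterion is not met.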
Given the parameters of the sample test (e.g., sample size, expected errors), any control testing error would indicate the control is not operating as designed. To review the financial management controls over the PRDC and NCD program, we reviewed documents supporting obligation and disbursement transactions for 40 PRDC construction and 5 nonconstruction projects, and all NCD obligation and disbursement transactions. We interviewed USAID officials in Washington, D.C., and Iraq; and USACE officials in Washington, D.C.; Millington, Tennessee; Winchester, Virginia; and Iraq. Our review of PRDC and NCD transactions covered the period October 1, 2005, through June 30, 2008. To determine whether the financial data provided to us were reliable, we interviewed agency officials and performed testing regarding the accuracy and completeness of information. To evaluate U.S. efforts to ensure Iraqi government commitment to sustaining U.S. program efforts, we examined U.S. guidance in obtaining commitments from the Iraqi government and interviewed State and USAID officials and their program implementers, GRD and Tatweer. We then focused on two elements of commitment—letters or other evidence that the Iraqi government committed to sustain or maintain the programs and projects and evidence that the Iraqi government is sharing the cost of U.S. efforts. PRDC program guidelines for 2006 state that the projects will be matching in nature to ensure buy-in and investment from the government, and the 2007 guidelines require an Iraqi government letter of sustainment for the projects. Moreover, Iraq has budget surpluses, and in the Annual Review of the International Compact with Iraq, the Iraqi government committed to co-finance any new development programs to leverage its resources and provide a framework for mutual accountability. We conducted this performance audit from March 2008 to June 2009 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. State implies that this GAO report calls for the elimination of steps or partners in the process. Instead, we believe the program's multistep complexity illustrates the importance of implementing our recommendation to designate a program manager. 2. We added State's additional information. However, in their technical comments, Gulf Region Division officials stated that communication and coordination remain problems as of May 2009. 3. We have changed program "success" to program "accomplishment" in the final report. However, the larger issue, as State acknowledges, is that the Provincial Reconstruction Development Committee program does not have outcome measures of effectiveness to assess how the program has enhanced the capacity of local Iraqi councils to identify, plan, and deliver essential services. 4. Our past work indicates that although the Iraqi government budgets for investment activities, its actual spending for these investments, including maintenance of projects, is limited. For instance, in 2008, the Iraqi government spent $949 million, or about 2 percent of its total 2008 expenditures, for the maintenance of Iraqi- and U.S.-funded investments. In 2008, the provinces spent $2.054 billion, or 22 percent of total investment expenditures of $9.167 billion. The 2008 investment budget for the provinces, including the supplemental budget, was $6.47 billion. The provinces spent 32 percent of their investment budget. The following are GAO's comments on the U.S. Army Corps of Engineers letter dated May 22, 2009. 1. We modified our recommendation in response to the additional information USACE provided about its initiatives to strengthen the timekeeping process. 
In follow-up discussions, USACE agreed to our recommendation that the Secretary of the Army ensure the initiatives to improve the financial controls for the timekeeping process correct the deficiencies discussed in the report. Tetsuo Miyabara, Assistant Director; Mary Ellen Chervenic, Assistant Director; Jessica Butchko; Richard Cambosos; Lynn Cothern; K. Eric Essig; Dennis Fauber; Wilfred Holloway; Rhonda Horried; Grace Lui; Jason Pogacnik; and Eddie Uyekawa made key contributions to this report.
Since 2003, the United States has provided $49 billion to help rebuild Iraq. To build the capacity of Iraq's central and provincial governments to sustain this effort, the United States is implementing programs including the Department of State's (State) Provincial Reconstruction Development Committee (PRDC) and the U.S. Agency for International Development's (USAID) National Capacity Development (NCD). The use of key management controls, such as an appropriate organizational structure and program monitoring, helps ensure programs achieve their objectives. Through field visits in Iraq, interviews with program officials, analyses of official reports, and examination of a sample of projects, we assessed whether the PRDC's and NCD's management controls support the programs' objectives of building the capacity of Iraq's government. We also assessed Iraq's commitment to sustaining these U.S. programs. Through the PRDC program, State and USACE work with Iraqis in the provinces to develop proposals and undertake small-scale projects such as building schools, repairing roads, and developing water facilities. However, weaknesses in State's management controls hinder achievement of the program's objective to build provincial government capacity. First, the program involves multiple organizations and a complex process but had no clearly identified program manager until May 2009, when State designated one in response to GAO's findings. Second, State lacks a performance monitoring system that measures progress toward building provincial capacity to deliver essential services. Third, the program's guidelines and policies have changed frequently, but State did not adequately communicate or consult with the USACE, the program implementer, about these changes. Finally, USACE's financial controls for the timekeeping process did not ensure adequate documentation of time and attendance records for labor charges on projects. 
USAID's management controls generally supported the NCD program's objective of building ministry capacity by training Iraqi employees in administrative skills such as planning and budgeting and supporting Iraqi training centers. First, USAID's organizational structure is clear, including who is responsible for overall program management. Second, in response to an audit report, USAID narrowed the NCD program objective to improving ministries' administrative capabilities and clearly linked them to measures of outcome. Some of these measures include Iraqi ministries' execution of their capital budgets, including the number of capital projects approved and the rate of spending on capital projects. USAID reported it was on track to meet or exceed its 2008 targeted results. However, as of March 2009, final data on results were not available. Third, USAID's guidelines and program expectations for NCD are documented, clear, and communicated throughout the organization. However, with regard to financial controls, GAO found that USAID officials did not confirm receipt of goods and services for invoices totaling about $17 million of $79 million, prior to payment. The officials did not always document reasons such as security risks, when confirmation was not possible. Iraq has committed to sustaining U.S.-funded programs and sharing in their costs, but actual budget expenditures for such activities are unclear. For the PRDC program, 16 of the 40 projects in our sample had evidence that the Iraqi government agreed to sustain the project; however, the records did not specify actual financial or budget commitments. For the NCD program, the Iraqi government is supporting the program by providing trainers and allocating funds in their 2009 budgets for training center equipment and other NCD efforts. These funds are to be spent in 2009. 
We have previously reported that the Iraqi government includes funding in its budgets for investment activities such as operating and maintaining U.S.-funded reconstruction projects and training, but does not subsequently expend these funds.
In 1986, IRCA established the employment verification process based on employers’ review of documents presented by employees to prove identity and work eligibility. On the Form I-9, employees must attest that they are U.S. citizens, lawfully admitted permanent residents, or aliens authorized to work in the United States. Employers must then certify that they have reviewed the documents presented by their employees to establish identity and work eligibility and that the documents appear genuine and relate to the individual presenting them. In making their certifications, employers are expected to judge whether the documents presented are obviously counterfeit or fraudulent. Employers are required to retain the Form I-9 and provide it, upon request, to officers of the Departments of Homeland Security and Labor and the Department of Justice’s Office of Special Counsel for Immigration Related Unfair Employment Practices for inspection. Employers generally are deemed in compliance with IRCA if they have followed the Form I-9 process, including when an unauthorized alien presents fraudulent documents that appear genuine. Following the passage of IRCA in 1986, employees could present 29 different documents to establish their identity and/or work eligibility. In a 1997 interim rule, the former U.S. Immigration and Naturalization Service (INS) reduced the number of acceptable work eligibility documents from 29 to 27. The Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996 required the former INS and SSA to operate three voluntary pilot programs to test electronic means for employers to verify an employee’s eligibility to work, one of which was the Basic Pilot Program. The Basic Pilot Program was designed to test whether pilot verification procedures could improve the existing employment verification process by reducing (1) false claims of U.S. 
citizenship and document fraud, (2) discrimination against employees, (3) violations of civil liberties and privacy, and (4) the burden on employers to verify employees’ work eligibility. In 2007, USCIS renamed the Basic Pilot Program the Employment Eligibility Verification program and later in the year changed the name to E-Verify. E-Verify provides participating employers with an electronic method to verify their employees’ work eligibility. Regardless of whether employers participate voluntarily in E-Verify, they are still required to complete Forms I-9 for all newly hired employees in accordance with IRCA. After completing the forms, those employers participating in the program query E-Verify’s automated system by entering employee information provided on the forms, such as name and social security number, into the E-Verify Web site within 3 days of the employee’s start date. The program then electronically matches that information against information in SSA’s Numident database and, if necessary, DHS databases to determine whether the employee is eligible to work. E-Verify electronically notifies employers whether their employees’ work authorization was confirmed. Those queries that the DHS automated check cannot confirm are referred to USCIS staff, called immigration status verifiers, who check employee information against information in other DHS databases. The E-Verify program process is shown in figure 1. In cases when E-Verify cannot confirm an employee’s work authorization status either through the automatic check or the check by an immigration status verifier, the system issues the employer a tentative nonconfirmation of the employee's work authorization status. In this case, the employers must notify the affected employees of the finding, and the employees have the right to contest their tentative nonconfirmations by contacting SSA or USCIS to resolve any inaccuracies in their records within 8 federal working days. 
During this time, employers may not take any adverse actions against those employees, such as limiting their work assignments or pay. After 8 days, employers are required to either immediately terminate the employment, or notify DHS of the continued employment, of workers who do not successfully contest the tentative nonconfirmation and those whom the program finds are not work-authorized. The E-Verify program uses the same system as USCIS’s Systematic Alien Verification for Entitlements Program, which provides a variety of verification services for federal, state, and local government agencies. USCIS estimates that more than 150,000 federal, state, and local agency users verify immigration status through the Systematic Alien Verification for Entitlements Program. SSA also operates the Web-based Social Security Number Verification Service, which employers can use to assure that employees’ names and social security numbers match SSA’s records. This service, designed to ensure accurate employer wage reporting, is offered free of charge. Employer use is voluntary, and approximately 12,000 employers requested more than 25.7 million verifications in 2005, according to the SSA Office of the Inspector General. USCIS contracted for an independent evaluation of the E-Verify program. Westat, the organization that conducted the evaluation, issued a report on its evaluation findings in September 2007. According to this report, the Westat evaluation examined how well the federal government implemented modifications made to the original Basic Pilot Program and the extent to which the program met its goals to (1) reduce employment of unauthorized workers, (2) reduce discrimination, (3) protect employee civil liberties and privacy, and (4) prevent undue burden on employers. Based on its findings, Westat made recommendations to USCIS and SSA intended to help improve the program. 
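The confirmation path described above can be summarized as a simple decision flow. This is an illustrative sketch of the steps as described in the text, not USCIS's actual system logic; the function and parameter names are ours:

```python
def everify_outcome(automated_match: bool, verifier_confirms: bool,
                    employee_resolves: bool) -> str:
    """Illustrative sketch (not USCIS code) of the E-Verify flow in the text."""
    if automated_match:
        # Employee data matched against SSA's Numident and, if needed, DHS databases
        return "confirmed"
    if verifier_confirms:
        # Queries the automated check cannot confirm are referred to USCIS
        # immigration status verifiers, who check other DHS databases
        return "confirmed"
    # Otherwise the employer receives a tentative nonconfirmation; the employee
    # has 8 federal working days to contest it with SSA or USCIS
    if employee_resolves:
        return "confirmed"
    # The employer must then terminate employment or notify DHS of continued employment
    return "final nonconfirmation"
```

During the contest window the employer may not take adverse action, a constraint that sits outside this simple outcome function.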
Mandatory electronic employment verification would substantially increase the number of employers using the E-Verify program, which would place greater demands on USCIS and SSA resources. As of April 2008, more than 61,000 employers have registered to use the program, about 28,000 of whom were active users, according to USCIS. USCIS has estimated that approximately 4,000 employers are registering per month. In fiscal year 2007, USCIS processed about 3.2 million employer queries and for the first 6 months of fiscal year 2008, processed about 2.6 million queries. If participation in the E-Verify program were made mandatory, the program would have to accommodate all of the estimated 7.4 million employers in the United States. USCIS has projected that employers would submit an average of 63 million queries on newly hired employees per year under a mandatory E-Verify program. USCIS officials stated that they have tested the capacity of the E-Verify computer system to handle about four times the projected load of queries that would occur if E-Verify participation were made mandatory for all employers. These tests showed that the E-Verify system can process up to 240 million queries per year, with the purchase of 5 additional servers, exceeding USCIS’s projection of an average of 63 million queries per year under a mandatory E-Verify program. USCIS has developed cost and staffing estimates for operating a mandatory E-Verify program. Although DHS has not prepared official cost figures, USCIS officials estimated that a mandatory E-Verify program could cost a total of about $765 million for fiscal years 2009 through 2012 if only newly hired employees are queried through the program and about $838 million over the same 4-year period if both newly hired and current employees are queried. Mandatory implementation of E-Verify would also require additional USCIS staff to administer the program, but USCIS was not yet able to provide estimates for its staffing needs. 
Under the voluntary program, USCIS operated E-Verify with 12 headquarters staff members in 2005; the program’s staff has since grown to about 121 full-time employees nationwide, including 21 staff members for monitoring and compliance and 11 for status verification operations. According to USCIS, the agency would increase its staffing level based on a formula that considers monitoring and compliance and status verification staffing needs as the number of employers using E-Verify increases. A mandatory E-Verify program would also require an increase in SSA’s resource and staffing requirements. SSA has estimated that implementation of a mandatory E-Verify program would cost a total of about $281 million for fiscal years 2009 through 2013 and require hiring 700 new employees for a total of 2,325 additional workyears over the same 5-year period. According to SSA, these estimates represent costs if the current E-Verify system is expanded, and any changes to the current process could impose significant additional costs on the agency. The estimates include costs for start-up, such as system upgrades, training for current SSA employees, and training, space, and workstations for new employees, and ongoing activities, such as field office visits and system maintenance. SSA’s estimates assume that under a mandatory expansion of the current E-Verify program, for every 100 E-Verify queries, about 1.4 individuals will contact SSA regarding a tentative nonconfirmation. According to SSA officials, the cost of mandatory E-Verify would be driven by the increased workload of its field office staff in resolving SSA tentative nonconfirmations, as well as some of the computer systems improvements and upgrades that SSA would need to implement to address the capacity of a federal mandatory program.
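SSA’s workload assumption, about 1.4 contacts per 100 queries, can be combined with USCIS’s projection of 63 million annual queries to get a rough sense of scale. The back-of-envelope calculation below is illustrative only and is not an SSA estimate:

```python
# Back-of-envelope check of SSA's workload assumption: about 1.4 SSA
# contacts per 100 E-Verify queries, applied to USCIS's projection of
# 63 million queries per year under a mandatory program. Illustrative
# arithmetic only, not an official agency estimate.
queries_per_year = 63_000_000
contacts_per_100_queries = 1.4

ssa_contacts = round(queries_per_year * contacts_per_100_queries / 100)
print(f"{ssa_contacts:,} SSA field office contacts per year")
# -> 882,000 SSA field office contacts per year
```

A contact volume on that order helps explain why SSA officials said they would need a phased-in approach to handle the field office workload.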
Moreover, the final number of new full-time staff required would depend on both the legislative requirements for implementing mandatory E-Verify and the effectiveness of efforts USCIS has underway to decrease the need for individuals to visit SSA field offices. SSA officials told us that SSA would need time and a phased-in approach for implementation of a mandatory E-Verify program in order to handle the increased workload for SSA field offices. In prior work, we reported that secondary verifications lengthen the time needed to complete the employment verification process. The majority of E-Verify queries entered by employers—about 92 percent—confirm within seconds that the employee is authorized to work. About 7 percent of queries are not confirmed by the initial automated check and result in SSA tentative nonconfirmations, while about 1 percent result in DHS tentative nonconfirmations. With regard to the SSA tentative nonconfirmations, USCIS officials told us that the majority of erroneous tentative nonconfirmations occur because employees’ citizenship status or other information, such as name changes, is not up to date in the SSA database, generally because individuals have not notified SSA of information changes that occurred. SSA updates its records to reflect changes in individuals’ information, such as citizenship status or name, when individuals request that SSA make such updates. USCIS officials stated that, for example, when aliens become naturalized citizens, their citizenship status, updated in DHS databases, is not automatically updated in the SSA database. When these individuals’ information is queried through E-Verify, a tentative nonconfirmation would be issued because under the current E-Verify process, those queries would only check against SSA’s database; they would not automatically check against DHS’s databases. Therefore, these individuals would have to go to an SSA field office to correct their records in SSA’s database.
USCIS and SSA are planning to implement initiatives to help address SSA tentative nonconfirmations, particularly those issued for naturalized citizens, with a goal of reducing the need for employees to visit SSA field offices. For example, in May 2008 USCIS launched an initiative to modify the electronic verification process so that employees whose naturalized citizenship status cannot be confirmed by SSA will also be checked against DHS’s databases. A query that could not be confirmed by SSA would be automatically checked against DHS’s databases. If the employee’s information matched information in DHS’s databases and the databases showed that the person was a naturalized U.S. citizen, E-Verify would confirm the employee as work authorized. USCIS and SSA intend for this modification to enable USCIS to check naturalization status before an SSA tentative nonconfirmation is issued as a result of the naturalized citizen’s information not matching citizenship information in SSA’s database. According to USCIS, this should help eliminate the need for the employee who is a naturalized citizen to travel to an SSA field office before being confirmed as work authorized. USCIS has projected that as it implements this modification, the number of tentative nonconfirmations should also be reduced. It remains to be seen by how much the number of tentative nonconfirmations will be reduced as a result of this modification. Furthermore, in May 2008 USCIS modified the E-Verify process so that naturalized citizens who receive a citizenship-related mismatch can call DHS directly to resolve this mismatch rather than having to visit an SSA field office in-person to resolve the mismatch. In addition USCIS and SSA are exploring options for updating SSA records with naturalization information from DHS records. 
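The May 2008 modification described above effectively inserts a DHS fallback check before an SSA tentative nonconfirmation is issued. The minimal sketch below illustrates the changed logic; the function and record fields are hypothetical:

```python
# Sketch of the May 2008 modification: before an SSA tentative
# nonconfirmation is issued, a query that SSA cannot confirm is
# automatically checked against DHS records for naturalized-citizen
# status. Function name and record fields are hypothetical.

def verify_with_naturalization_check(ssa_confirms, dhs_record):
    if ssa_confirms:
        return "confirmed"
    # New step: check DHS databases before issuing the SSA mismatch
    if dhs_record and dhs_record.get("naturalized_citizen"):
        return "confirmed"  # no SSA field office visit needed
    return "ssa_tentative_nonconfirmation"

# A naturalized citizen whose SSA record is out of date is now confirmed:
print(verify_with_naturalization_check(False, {"naturalized_citizen": True}))
# -> confirmed
```

In the pre-modification flow, the same query would have produced an SSA tentative nonconfirmation and required an in-person visit to an SSA field office.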
Although this could help to further reduce the number of SSA tentative nonconfirmations, USCIS and SSA are still in the planning stages, and implementation of this initiative may involve significant policy and technical considerations, such as how to link records in SSA and DHS databases that are stored according to different identifiers. USCIS and SSA are also implementing additional options to reduce delays and improve the efficiency of the verification process. USCIS stated that it is adding databases to the E-Verify program, increasing the number of databases against which queries of employees’ information are checked. For example, USCIS stated that it is incorporating real-time arrival data for noncitizens from the Inter-Agency Border Inspection System (IBIS) database, which tracks individuals, to help reduce the number of tentative nonconfirmations issued for newly arrived noncitizens queried through E-Verify. SSA has also coordinated with USCIS to develop an automated notification capability, known as the Employment Verification SSA Tentative Nonconfirmation Automated Response (EV-STAR) system. This system, available in all SSA field offices, became operational in October 2007 and allows SSA field office staff to view the same information that is provided to employers through E-Verify. In addition, SSA field office staff can notify the employer of the status of and any actions taken on the employee’s record to resolve the tentative nonconfirmation and, through EV-STAR, this information is directly updated in E-Verify. USCIS and SSA officials stated that EV-STAR has helped to reduce the burden on SSA, employers, and employees in resolving SSA tentative nonconfirmations. These efforts may help improve the efficiency of the verification process.
However, they will not entirely eliminate the need for some individuals to visit SSA field offices to update their records, as USCIS and SSA efforts do not address all types of changes that may occur in individuals’ information and result in the issuance of tentative nonconfirmations, such as individuals’ name changes. In our prior work, we reported that E-Verify enhances the ability of participating employers to reliably verify their employees’ work eligibility. The program also assists participating employers with identification of false documents used to attempt to obtain employment. When newly hired employees present false information, E-Verify will not confirm the employees’ work eligibility because their information, such as a false name or social security number, would not match SSA and DHS databases. However, the current E-Verify program cannot help employers detect forms of identity fraud, such as cases in which an individual presents genuine documents that are borrowed or stolen, because the system will verify an employee when the information entered matches DHS and SSA records, even if the information belongs to another person. USCIS has taken steps to reduce fraud associated with the use of genuine documents in which the original photograph is substituted for another. A photograph screening tool was incorporated into E-Verify in September 2007 and is accessible for most employers registered to use E-Verify. According to USCIS officials, the photograph screening tool is intended to allow an employer to verify the authenticity of a lawful permanent resident card (“green card”) or an employment authorization document, both of which contain photographs of the document holder. As a part of the E-Verify program, the photograph screening tool is used in cases when an employee presents a green card or employment authorization document to prove his or her work eligibility.
The employer then inputs the card number into E-Verify, and the system retrieves a copy of the employee’s photograph that is stored in DHS databases through the photograph screening tool. The employer is then supposed to match the photograph shown on the computer screen with the photograph on the original or photocopy of the employee’s lawful permanent resident card or employment authorization document and make a determination as to whether the photographs match. In completing the Form I-9, the employer is required to review the documents presented by an employee to prove identity and work eligibility and to certify that the documents appear genuine and relate to the individual presenting them. According to USCIS, for about 5 percent of employee queries that are run through E-Verify, employees present a green card or employment authorization document as identification. The use of the photograph screening tool is currently limited because newly hired employees who are queried through the E-Verify system and present documentation other than green cards or employment authorization documents to verify work eligibility—about 95 percent of E-Verify queries—are not subject to the tool. Expansion of the photograph screening tool would require incorporating other forms of documentation with related databases that store photographic information, such as passports issued by the Department of State and driver’s licenses issued by states. Efforts to expand the tool have been initiated, but are still in the early planning stages. For example, according to USCIS officials, USCIS and the Department of State have begun exploring ways to include visa and U.S. passport documents in the tool, but these agencies have not yet reached agreement regarding the use of these documents. The Department of State is working with DHS to determine the business processes and system requirements of linking passport and visa databases to E-Verify.
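The photograph screening steps described above amount to a retrieve-and-compare check in which the final judgment is the employer’s, not the system’s. The sketch below illustrates that flow, with a hypothetical stand-in for the DHS photograph store:

```python
# Sketch of the photograph screening tool: E-Verify retrieves the photo
# stored in DHS databases for the card number entered by the employer,
# and the employer visually compares it with the document presented.
# The photo store and card number below are hypothetical stand-ins.

dhs_photo_store = {"A123456789": "photo_on_file.jpg"}

def screen_photograph(card_number, employer_says_photos_match):
    stored_photo = dhs_photo_store.get(card_number)
    if stored_photo is None:
        return "no photo available for this document"
    # The tool only displays stored_photo; the match itself is a
    # human judgment made by the employer.
    return "match" if employer_says_photos_match else "mismatch"

print(screen_photograph("A123456789", employer_says_photos_match=True))
# -> match
```

Because the comparison is manual, the tool helps catch photo-substitution fraud on the covered documents but depends on the employer’s diligence.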
Additionally, USCIS is negotiating with state motor vehicle associations to incorporate driver’s license photographs into E-Verify, and is seeking state motor vehicle agencies that are willing to participate in an image-sharing pilot program. As of April 2008, no motor vehicle agencies have yet officially agreed to participate in the pilot program. As USCIS works to address fraud through data sharing with other agencies, privacy issues—particularly with regard to sharing employee information with employers—may be a challenge. In its 2007 evaluation of E-Verify, Westat reported that some employers joining the Web Basic Pilot were not appropriately handling their employees’ personal information. For example, some employers did not privately inform employees that queries of the employees’ information through E-Verify resulted in tentative nonconfirmations. The report also pointed out that anyone wanting access to the system could pose as an employer and obtain access by signing an MOU with the E-Verify program. USCIS officials told us that taking actions to ensure that employers are legitimate when they register for E-Verify is a long-term goal for the program. However, according to USCIS officials, implementing such controls to verify employer authenticity may require access to information from other agencies, such as Internal Revenue Service-issued employer identification numbers, to which USCIS currently does not have access. Additionally, some states and agencies have raised the issue of employee privacy. Representatives of motor vehicle agencies have expressed concerns with regard to the potential threats to customer privacy should their digital images be accessible to employers. USCIS is working to address these privacy concerns. However, it remains to be seen whether USCIS will be able to fully address all privacy concerns related to data and photograph sharing and use among agencies and employers.
E-Verify is vulnerable to acts of employer fraud, such as when the employer enters the same identity information to authorize multiple workers. Moreover, although Westat has found that most participating employers comply with E-Verify program procedures, some employers have not complied or have misused the program, which may adversely affect employees. The findings from the Westat report showed that while changes to the E-Verify program appear to have increased employer compliance with program procedures compared to the previous version of the program, employer noncompliance still occurred. For example, Westat reported that some employers used E-Verify to screen job applicants before they were hired, an activity that is prohibited under E-Verify procedures. Additionally, some employers took prohibited adverse actions against employees—such as restricting work assignments, reducing pay, or requiring employees to work longer hours or in poor conditions—while the employees were contesting tentative nonconfirmations. Finally, Westat found that some employers did not always promptly terminate employees after receiving confirmation that the employees were not authorized to work in the United States. USCIS reported that it is working to address these issues by, for example, conducting education and outreach activities about the E-Verify program. In 2005, we reported that E-Verify provided a variety of reports that could help USCIS determine whether employers followed program requirements intended to safeguard employees—such as informing employees of tentative nonconfirmation results and referring employees contesting tentative nonconfirmations to SSA or DHS—but that USCIS lacked sufficient staff to review employers’ use of the program. Since then, USCIS has added staff to its Verification Office, created a Monitoring and Compliance branch to review employers’ use of the E-Verify system, and identified planned activities for the branch.
As of April 2008, the Monitoring and Compliance branch had 21 staff and planned to hire 32 additional staff in fiscal years 2008 and 2009. Additionally, by January 2009, USCIS plans to establish a regional verification office with 135 staff members to conduct status verification and monitoring and compliance activities. With regard to compliance and monitoring activities, USCIS has identified 53 employer and employee behaviors of noncompliance and monitors the program for some of these behaviors. These behaviors include, among others, the use of counterfeit documents or substituted identities; use of the E-Verify system that does not follow procedures identified in the MOU between employers and DHS, such as failures to complete training or perform verifications within specific time frames; misuse of E-Verify to discriminate and/or adversely affect employees, such as verifying existing employees, prescreening, firing employees who received tentative nonconfirmations, or not firing unauthorized employees; and compromise of privacy information, such as sharing of passwords or nonemployer access to the system. Using some of these behaviors, among other things, to monitor employers’ use of E-Verify, USCIS plans to interact with employers who might not be complying with program procedures in four main ways: (1) sending letters or e-mails to advise employers of misuse of the system and to provide appropriate remedies, (2) making follow-up phone calls when employers fail to respond to the initial letters or e-mails, (3) conducting audits through which USCIS requests that documents and information be sent to the agency from potentially noncompliant employers, and (4) making site visits for in-person interviews and document inspection when desk audits reveal cause for further investigation. Under the current voluntary program, USCIS plans to contact about 6 percent of participating employers regarding employer noncompliance.
USCIS estimates that under a mandatory E-Verify program, the percentage of employers the agency would contact regarding employer noncompliance would decrease to about 1 to 3 percent. If, as a result of its monitoring activities, USCIS found that it needed to contact more than 3 percent of employers, USCIS officials stated that the agency plans to modify its approach for addressing employers’ noncompliance. As of April 2008, USCIS plans to allocate its monitoring and compliance efforts as follows: 45 percent of its activities would involve sending letters and e-mails to employers; 45 percent would involve follow-up phone calls; 9 percent would involve desk audits; and 1 percent would involve site visits. As part of a mandatory program, USCIS would modify this distribution of monitoring activities by, for example, using letters, e-mails, and phone calls for a larger percentage of interactions with employers. However, USCIS is still in the early stages of implementing its monitoring and compliance activities. Therefore, it is too early to tell whether these activities will ensure that all employers fully follow program requirements and properly use E-Verify under a mandatory program, especially since such controls cannot be expected to provide absolute assurance. The Monitoring and Compliance branch could help ICE better target its worksite enforcement efforts by providing information that indicates cases of employers’ egregious misuse of the system. Although ICE has no direct role in monitoring employer use of E-Verify and does not have access to program information that is maintained by USCIS unless it requests such information from USCIS, ICE officials told us that program data could indicate cases in which employers or employees may be fraudulently using the system and therefore should help the agency better target its worksite enforcement resources toward those employers. 
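USCIS’s planned contact rate and allocation percentages imply rough interaction counts under the voluntary program. The arithmetic below is illustrative only; the employer count and percentages come from the text, and integer division keeps the estimates whole:

```python
# Rough arithmetic on USCIS's planned monitoring contacts under the
# voluntary program. Employer counts and percentages are from the text;
# the resulting figures are illustrative, not USCIS projections.
active_employers = 28_000            # active E-Verify users, April 2008

contacts = active_employers * 6 // 100   # ~6 percent contacted about noncompliance
letters = contacts * 45 // 100           # 45 percent: letters and e-mails
phone_calls = contacts * 45 // 100       # 45 percent: follow-up phone calls
desk_audits = contacts * 9 // 100        # 9 percent: desk audits
site_visits = contacts * 1 // 100        # 1 percent: site visits

print(contacts, letters, desk_audits, site_visits)
# -> 1680 756 151 16
```

Even under the voluntary program, the planned 1 percent share of site visits works out to only a handful of in-person inspections, which helps explain USCIS’s plan to shift further toward letters, e-mails, and phone calls under a mandatory program.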
ICE officials noted that, in a few cases, they have requested and received E-Verify data from USCIS on specific employers who participate in the program and are under ICE investigation. For example, USCIS told us that by monitoring use of the E-Verify program to date, staff were able to identify instances of fraudulent use of social security numbers and referred such egregious examples of fraud to ICE. However, USCIS and ICE officials told us that case referrals or requests for information between the two agencies have been infrequent, and information on the resolution of these referrals is not formally maintained by ICE. USCIS expects to complete and implement a compliance tracking system to track referrals to and responses to requests from ICE on compliance cases in fiscal year 2009. USCIS and ICE are also negotiating an MOU to define roles, responsibilities, and mechanisms for sharing and using E-Verify information. Outstanding issues that need to be resolved for the MOU include the type of information that USCIS will provide to ICE through the referral process and the purposes for which ICE will use this information. While the MOU between USCIS and ICE is incomplete, ICE officials anticipate that, if the E-Verify program is made mandatory, they would receive an increased number of referrals for investigation from USCIS. Therefore, ICE officials told us that they anticipate needing additional resources to follow up on USCIS referrals. ICE also hopes to be able to use elements of the E-Verify program to detect and track large-scale instances of employer or employee fraud. For further information about this testimony, please contact Richard Stana at 202-512-8777. Other key contributors to this statement were Jonah Blumstein, Burns Chamberlain, Frances Cook, Josh A. Diosomito, Rebecca Gambler, Danielle Pakdaman, Evi Rezmovic, Julie E. Silvers, Rebekah Temple, and Adam Vogt. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 1996, the former U.S. Immigration and Naturalization Service, now within the Department of Homeland Security (DHS), and the Social Security Administration (SSA) began operating a voluntary pilot program, recently named the E-Verify program, to provide participating employers with a means for electronically verifying employees' work eligibility. Legislation has been introduced in Congress to require all employers to electronically verify the work authorization status of their employees. In this statement GAO provides observations on the E-Verify system's capacity and costs, options for reducing delays and improving efficiency in the verification process, ability to detect fraudulent documents and identity theft, and vulnerability to employer fraud and misuse. This statement is based on GAO's products issued from August 2005 through June 2007 and updated information obtained from DHS and SSA in April 2008. We analyzed data on employer use, E-Verify guidance, and other reports on the employment verification process, as well as legislative proposals and regulations. A mandatory E-Verify program would necessitate an increased capacity at both U.S. Citizenship and Immigration Services (USCIS) and SSA to accommodate the estimated 7.4 million employers in the United States. According to USCIS, as of April 2008, more than 61,000 employers have registered for E-Verify, and about half are active users. Although DHS has not prepared official cost figures, USCIS officials estimated that a mandatory E-Verify program could cost a total of about $765 million for fiscal years 2009 through 2012 if only newly hired employees are queried through the program and about $838 million over the same 4-year period if both newly hired and current employees are queried. USCIS has estimated that it would need additional staff for a mandatory E-Verify program, but was not yet able to provide estimates for its staffing needs. 
SSA has estimated that implementation of a mandatory E-Verify program would cost a total of about $281 million and require hiring 700 new employees for a total of 2,325 additional workyears for fiscal years 2009 through 2013. USCIS and SSA are exploring options to reduce delays and improve efficiency in the E-Verify process. The majority of E-Verify queries entered by employers--about 92 percent--confirm within seconds that the employee is work-authorized. About 7 percent of the queries cannot be immediately confirmed as work authorized by SSA, and about 1 percent cannot be immediately confirmed as work authorized by USCIS because employees' information queried through the system does not match information in SSA or DHS databases. The majority of SSA erroneous tentative nonconfirmations occur because employees' citizenship or other information, such as name changes, is not up to date in the SSA database, generally because individuals do not request that SSA make these updates. USCIS and SSA are planning to implement initiatives to help address these weaknesses and reduce delays. E-Verify may help employers detect fraudulent documents thereby reducing such fraud, but it cannot yet fully address identity fraud issues, for example when employees present genuine documents that may be stolen. USCIS has added a photograph screening tool to E-Verify through which an employer verifies the authenticity of certain documents, such as an employment authorization document, by matching the photograph on the document with the photograph in DHS databases. USCIS is exploring options to expand this tool to include other forms of documentation, such as passports, with databases that store photographic information, but these efforts are in the planning stages and require decisions about data sharing and privacy issues. E-Verify is vulnerable to acts of employer fraud and misuse, such as employers limiting employees' pay during the E-Verify process. 
USCIS has established a branch to review employers' use of E-Verify. In addition, information suggesting employers' fraud or misuse can be useful to U.S. Immigration and Customs Enforcement (ICE) in targeting worksite enforcement resources. USCIS and ICE are negotiating a memorandum of understanding to define roles and responsibilities for sharing information.
After volunteers separate from the Peace Corps, they typically return to the United States and may transition into new employment. As they make this employment transition, the Peace Corps offers various health care services and benefits to returned volunteers. First, each volunteer receives a close-of-service medical evaluation that assesses their health status as they complete their service. The Peace Corps also has a contract with an insurance company to make a health insurance policy—AfterCorps—available for volunteers to purchase. This policy covers non-service-connected illnesses or injuries. The Peace Corps also pays for certain health examinations for 6 months after a volunteer’s service is completed. Finally, volunteers may also be eligible for reimbursements under the FECA program for medical expenses associated with service-connected illnesses or injuries, such as those identified during the physical conducted at the close-of-service medical evaluation. The FECA program provides health benefits—reimbursement for medical expenses related to illnesses or injuries that DOL determines are service connected—as well as other benefits, such as wage-loss (death and disability) compensation. To receive benefits through FECA, a volunteer must establish that, among other things, he or she was in the performance of duty at the time the illness or injury occurred. Under the FECA program, volunteers are considered to be in the performance of duty 24 hours a day while abroad during the period of their Peace Corps service. DOL requires that if an illness or injury is first discovered after a volunteer has returned from service, then the medical evidence must show that the injury or illness was sustained while overseas or in the performance of duty.
In order to be eligible for FECA health care benefits for preexisting illnesses or injuries—a condition that existed prior to service—the volunteer’s medical evidence must demonstrate that the volunteer’s service was the proximate cause of or aggravated, accelerated, or precipitated the illness or injury. Further, volunteers must apply for FECA benefits within 3 years of the date of injury or illness, or within 3 years after they recognize that a health condition is service-connected. In 2010, the FECA program provided about $2.8 billion in health and other benefits to about 251,000 federal and postal employees—including volunteers—who suffered a service-related illness or injury. Volunteers who apply for FECA benefits typically go through the following steps:
1. Each volunteer is informed of the availability of FECA benefits at the close-of-service medical evaluation.
   a. Each volunteer is expected to receive a close-of-service medical evaluation that assesses his or her health status prior to leaving service to document any service-connected illnesses or injuries. Should a volunteer terminate service early—before completing his or her assignment—the volunteer will also undergo a complete medical and dental exam to identify any unmet health care needs and potential medical issues.
2. Volunteers complete a FECA application and submit it to DOL through the Peace Corps’ Post-Service Unit.
   a. The Peace Corps—through its Post-Service Unit—assists volunteers applying for benefits by helping them to complete the appropriate forms and providing the appropriate medical evidence from volunteers’ Peace Corps medical records.
   b. The Peace Corps’ Post-Service Unit sends all FECA applications—which include information on the injury or illness reported by the volunteer—to DOL for review and eligibility determination.
3. FECA applications submitted for volunteers are reviewed by DOL, and the agency then makes an eligibility determination.
   a. For those applications that do not include sufficient information and require further development, volunteers are given approximately 30 days to submit additional information to support their request for FECA benefits. If the additional information submitted is sufficient, the application is approved. If the additional information is not sufficient, the FECA application is denied and medical treatment is not authorized.
   b. For those applications that are approved, DOL assigns a medical diagnosis on the basis of medical evidence submitted in the FECA application. This assigned medical diagnosis defines the medical treatment and services for which the volunteer is eligible for FECA reimbursement.
4. Typically, after benefits are approved by DOL, a volunteer obtains health care services through a medical provider. After receiving these services, the volunteer or the volunteer’s medical provider submits a bill to DOL for reimbursement. DOL provides reimbursement for medical expenses.
5. On an annual basis, DOL requires the Peace Corps to pay DOL back for these reimbursements.
From 2009 through 2011, DOL provided a total of about $36 million in FECA benefits for volunteers, providing about $22 million in health care benefits—reimbursements for medical expenses to treat service-connected injuries and illnesses for Peace Corps volunteers—and $13.8 million in other benefits. During this period, almost 1,400 volunteers each year received health care benefits. The average reimbursement for medical expenses per volunteer was about $5,000 in 2009, and about $5,600 in 2011.
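The reimbursement figures reported above can be cross-checked with simple arithmetic; the result is consistent with the reported per-volunteer averages. This is only a rough consistency check using the rounded figures from the text:

```python
# Consistency check on the FECA figures reported in the text, using
# rounded amounts; the implied average is illustrative.
health_reimbursements = 22_000_000   # health care benefits, 2009-2011
other_benefits = 13_800_000          # other benefits, 2009-2011
volunteers_per_year = 1_400
years = 3

combined = health_reimbursements + other_benefits   # about $36 million total
avg_per_volunteer_year = health_reimbursements / (volunteers_per_year * years)
print(f"${avg_per_volunteer_year:,.0f} per volunteer per year")
# -> $5,238 per volunteer per year
```

The implied average of roughly $5,200 per volunteer per year sits between the reported averages of about $5,000 in 2009 and $5,600 in 2011, and the combined total of $35.8 million matches the reported figure of about $36 million.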
The most-common medical conditions for which DOL provided health care benefits—reimbursements for medical services—were mental, emotional, and nervous conditions; dental; other/nonclassified diseases; and infectious or parasitic diseases. These four medical conditions represented about 40 percent of all medical conditions and accounted for about $5.9 million—or more than a quarter—of all medical reimbursements for volunteers under FECA between 2009 and 2011. See table 1 for the medical conditions for which DOL provided reimbursements for volunteers under FECA. In addition to health care benefits, volunteers also received other benefits—such as wage-loss compensation and reimbursement for travel to receive medical treatment. Specifically, from 2009 through 2011, these other benefits received by volunteers totaled about $13.8 million. In 2011, the total reimbursements for both health care and other benefits were about $12 million, which represents about 3.3 percent of the Peace Corps' 2012 appropriation of $375 million. According to Peace Corps officials, these health care and other expenses represent a growing portion of its annual budget. These officials explained that from 2009 through 2011 these expenses increased by a total of approximately 7.2 percent.

Volunteers who received FECA benefits from 2009 through 2011 are unique in several ways when compared to other recipients of these benefits. Specifically, our analysis of DOL's FECA program claims data found that the volunteers were generally younger and more likely to be female when compared to others who received benefits under the FECA program. Volunteers were, on average, 12 years younger than others who received FECA benefits. About two-thirds of volunteers receiving FECA benefits were female, whereas less than half of others receiving FECA benefits were female. These differences in age and gender are consistent with the overall demographics of these two populations—the volunteers and federal workers.
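The dollar figures reported above lend themselves to a quick consistency check. The short Python sketch below reproduces the arithmetic using only numbers taken from the report; the per-volunteer average is a back-of-the-envelope estimate (it assumes roughly 1,400 health care benefit recipients in each of the 3 years), not a figure computed by GAO or DOL.

```python
# Rough consistency check of the FECA benefit figures reported for 2009-2011.
# All dollar amounts are taken from the report; the per-volunteer average is
# an illustrative estimate, not an official DOL calculation.

health_benefits = 22_000_000   # medical-expense reimbursements, 2009-2011
other_benefits = 13_800_000    # wage-loss compensation, travel, etc., 2009-2011

total = health_benefits + other_benefits
print(f"total benefits: ${total / 1e6:.1f} million")  # consistent with "about $36 million"

# Assuming roughly 1,400 volunteers received health care benefits in each
# of the 3 years, the implied average medical reimbursement per volunteer:
avg_per_volunteer = health_benefits / (3 * 1_400)
print(f"average medical reimbursement per volunteer: ${avg_per_volunteer:,.0f}")

# The four most common condition groups accounted for about $5.9 million
# of the $22 million in medical reimbursements:
top_four_share = 5_900_000 / health_benefits
print(f"top-four conditions' share of medical reimbursements: {top_four_share:.1%}")
```

The implied per-volunteer average falls between the $5,000 (2009) and $5,600 (2011) figures reported above, and the top-four share comes out just over one quarter, matching the report's characterization.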
In addition, the medical conditions for which volunteers received FECA benefits were different from those for others who received FECA benefits. For example, volunteers were more likely than others to receive FECA benefits for mental, emotional, or nervous conditions; dental conditions; other/nonclassified diseases; and infectious or parasitic diseases. While these four medical conditions represented 40 percent of the conditions for volunteers, they represented less than 2 percent for the others receiving FECA benefits.

The Peace Corps uses information it has to monitor volunteers' awareness of the FECA program; however, in general, neither DOL nor the Peace Corps use information in the remaining three areas in our review to monitor the accessibility and quality of FECA benefits for volunteers. These areas are (1) information on volunteers' knowledge of FECA program and application requirements, such as medical documentation that is required to be submitted with an application; (2) information on DOL's timeliness in reviewing FECA applications and reimbursing medical expenses, and on the level of customer satisfaction with the FECA program; and (3) information on the availability of FECA-registered medical providers. Table 2 summarizes the extent to which DOL and the Peace Corps use information available in the four key areas to monitor the accessibility and quality of FECA benefits for volunteers.

As shown in table 2, the Peace Corps uses information related to volunteers' awareness of the FECA program. Specifically, to monitor volunteers' awareness, the Peace Corps currently documents that volunteers have acknowledged that they have been informed of their potential eligibility for FECA during their close-of-service evaluation. Peace Corps officials told us the agency uses this information to help ensure all volunteers are made aware of their possible eligibility for FECA benefits.
While the Peace Corps uses information on volunteer awareness, neither DOL nor the Peace Corps use available information related to the remaining three areas of our review to monitor the accessibility and quality of FECA benefits for volunteers. Volunteers’ knowledge of FECA program and application requirements. As table 2 shows, neither DOL nor the Peace Corps use available information, such as data on FECA application denial rates, and information on reasons for denials, in order to monitor the accessibility and quality of FECA benefits for volunteers. DOL officials told us that it is not their responsibility to use this information for this type of monitoring. However, by not using this available information to review volunteers’ level of knowledge of the FECA requirements, DOL and the Peace Corps may be unaware, for example, of the extent to which volunteers experience difficulties accessing FECA benefits because of limited understanding of certain application requirements, such as in (a) providing appropriate and sufficient medical evidence and (b) establishing a service connection for the illness or injury for which the volunteer is seeking FECA benefits. According to volunteer advocates, volunteers and their physicians may lack knowledge of certain FECA documentation requirements, such as the need to include a medical diagnosis rather than just the symptoms of an injury or illness in the FECA application. Furthermore, our examination of a limited number of FECA denial letters confirms that these difficulties are often a contributing factor in the FECA applications that were not approved from 2009 through 2011. For example, our review of denial letters showed that the most-common reasons for denial were lack of sufficient medical documentation and inability to establish a service connection. Further, DOL and the Peace Corps also do not work together to use the information available to them on volunteers’ knowledge of program and application requirements. 
DOL maintains metrics for measuring performance for the overall FECA program, including those that are part of the Protecting Our Workers and Ensuring Reemployment (POWER) Initiative, the Government Performance and Results Act of 1993 (GPRA), as amended, and measures outlined in DOL's Operational Plan. The POWER Initiative established goals related to FECA—such as the timeliness of filing a FECA application. Under GPRA, DOL established customer satisfaction measures for the FECA program. Under its Operational Plan, DOL established additional timeliness and customer satisfaction measures, such as those to monitor the timeliness of the FECA application review process. However, DOL has not used these measures to separately assess timeliness and customer satisfaction for volunteers, who make up a small share of the overall FECA population. Instead, DOL's focus has been on using the data to monitor FECA program timeliness and customer satisfaction for all individuals who receive FECA benefits. While it is reasonable that DOL focus on the entire FECA program, DOL and the Peace Corps also do not work together to use the timeliness and customer satisfaction information to help the Peace Corps gauge whether volunteers are receiving FECA benefits in a timely and satisfactory manner. For example, Peace Corps officials told us that a survey of former volunteers specifically about access and satisfaction issues would be useful. According to Peace Corps officials, the results of such a survey could help clarify whether volunteers have access to the care they need and what the volunteers think about the quality of the care they receive. Without this information, DOL and the Peace Corps may be unable to determine volunteers' level of satisfaction with the FECA program. Our review of DOL timeliness data suggests that between 2009 and 2011, the agency met its timeliness benchmarks related to review of FECA applications for volunteers.
However, because DOL does not use these data to determine the timeliness in reviewing volunteers' FECA applications, DOL may not be able to determine whether or to what extent its performance on timeliness is sustained in the future. Furthermore, a lack of ongoing examinations of timeliness may make it difficult for DOL to identify problems should they arise in the future or to provide information to alleviate the concerns of advocates and Peace Corps officials regarding the timeliness of the review of FECA applications. For example, our review of information showed that DOL reviewed about 97 percent of all volunteers' applications related to traumatic cases within 45 days of receiving the application—meeting its benchmark to review 90 percent within that time frame.

Without using the information DOL has on the geographic location and medical specialty of FECA-registered providers, DOL and the Peace Corps cannot determine the extent to which there are limitations in the availability of FECA-registered providers in certain geographic areas and for certain medical specialties. DOL's available information on FECA-registered providers suggests that volunteers may face some challenges accessing registered providers. Officials stated that although it is the responsibility of the volunteer to find a FECA-registered provider, DOL publishes an online search tool that contains a partial listing of the available FECA-registered providers as a service to FECA beneficiaries, including volunteers, to help locate providers. Officials also noted the agency does not actively manage or update the list. Although the online search tool is recognized by DOL as incomplete, it does provide some partial information about the availability of FECA-registered providers.
We reviewed this online search tool and found, for example, that as of June 2012 there were no FECA-registered providers in the online search tool listed as mental health specialists in any of the 10 states with the largest population of volunteers. Peace Corps officials and volunteer advocates also noted there are a limited number of FECA-registered providers in some geographic locations and medical specialties. In addition, Peace Corps officials told us that they have assisted volunteers in finding and enrolling providers, and have had difficulty in doing so. Although the information on FECA-registered providers in the online search tool that DOL provides as a resource to volunteers may be incomplete, it includes information that could be used to help identify potential access issues and areas for monitoring the accessibility of FECA benefits for volunteers.

The Peace Corps and DOL both have certain responsibilities related to the provision of FECA benefits for eligible volunteers who return from service abroad. Specifically, DOL administers the FECA program and the Peace Corps pays for the expenses incurred by volunteers in the program. From DOL's perspective, volunteers do not represent a large proportion of the overall FECA population. However, FECA is a relatively larger issue from the Peace Corps' perspective. The volunteers are a unique population compared to others who receive benefits under FECA—for example, they are more likely to have mental, emotional, or nervous conditions that are service-connected—and, according to Peace Corps officials, the amount the Peace Corps pays DOL for FECA reimbursements represents an increasing portion of the Peace Corps' annual budget.
Because both of the agencies have certain responsibilities related to the provision of FECA benefits for eligible volunteers who return from service abroad, it is especially important that the Peace Corps and DOL jointly monitor the accessibility and quality of the FECA program to ensure that the FECA program is achieving its intended objectives— including ensuring that eligible volunteers receive needed FECA health care benefits. The Peace Corps and DOL have information available to them in the four key areas we reviewed that could be used to monitor the accessibility and quality of FECA benefits for volunteers: (1) volunteers’ awareness of FECA; (2) volunteers’ knowledge of program and application requirements; (3) DOL’s timeliness in reviewing FECA applications and reimbursing medical expenses, and the level of customer satisfaction with the FECA program; and (4) availability of FECA-registered medical providers. However, in general, the two agencies are not using this information for such monitoring. For example, the agencies do not use the information they have to determine whether there is a gap in the number and geographic location of FECA-registered providers, such as the potential gap we identified in the number and geographic location of FECA-registered providers who treat mental health conditions—the most common medical condition for which volunteers received reimbursement. While information is available to DOL and the Peace Corps that could be used for monitoring, the agencies are generally not working together to use the available information to monitor the accessibility and quality of FECA benefits for volunteers. Working together is important because neither agency has all the information to monitor the program on its own. 
Finally, because the information we identified under the four areas is not a comprehensive list of all the information the agencies could use to monitor FECA benefits for volunteers, the Peace Corps and DOL may be able to identify other information that could be used for this purpose. Unless the two agencies work together on monitoring, they will miss the opportunity to make use of the available information to help ensure the accessibility and quality of FECA benefits for volunteers. We recommend that the Secretary of Labor and the Director of the Peace Corps jointly develop and implement an approach for working together to use available information to monitor the access to and quality of FECA benefits provided to returned volunteers. We provided a draft of this report to the Department of Labor (DOL) and the Peace Corps for review. Peace Corps provided written comments (reprinted in app. I), and both provided technical comments, which we incorporated as appropriate. Neither DOL nor the Peace Corps indicated whether or not they agreed with our recommendation. Instead, among other things, DOL’s technical comments identified examples of the agency’s collaboration with the Peace Corps to provide benefits under the FECA program. For example, DOL noted that officials from both agencies have met multiple times over the last 2 years to try to improve the handling of volunteers’ claims, and that DOL officials are available to work with the Peace Corps to improve the process of providing benefits to volunteers. In contrast, the Peace Corps noted specific improvements that it believes could assist returned volunteers, but stated that it cannot make these reforms on its own and needs action from DOL. DOL’s and the Peace Corps’ comments further underscore that the two agencies do not have a joint approach for monitoring the quality and accessibility of benefits for returned volunteers under the FECA program. 
As a result, we are concerned that the two agencies are missing opportunities to collaborate. We also remain convinced that DOL and the Peace Corps should, as we recommended, work together and develop an approach for using available agency information to monitor the accessibility and quality of FECA benefits for returned volunteers. We are sending copies of this report to the Secretary of Labor, the Director of the Peace Corps, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix II. In addition to the contact named above, Will Simerl and Cynthia Grant, Assistant Directors; N. Rotimi Adebonojo; Melinda Cordero; Carolyn Fitzgerald; Krister Friday; Marina Klimenko; Amy Leone; and Jennifer Whitworth made key contributions to this report.
Peace Corps volunteers who suffer a service-connected illness or injury are eligible to receive certain health care and other benefits under FECA--a workers' compensation program administered by DOL. FECA provides health care benefits--reimbursements for medical expenses--to federal employees and volunteers for illnesses or injuries that DOL determines are service-connected. GAO was mandated to report on the access and quality of health care benefits for Peace Corps volunteers. This report (1) identifies the health care and other benefits provided to volunteers from 2009 through 2011 under the FECA program, and (2) examines the extent to which DOL and the Peace Corps use available agency information to monitor the accessibility and quality of FECA health care benefits provided to volunteers. GAO reviewed agency documents, interviewed agency officials, and analyzed DOL data. GAO developed a framework with four areas to define access and quality and examined available information in these areas that could be used for monitoring. From 2009 through 2011, the Department of Labor (DOL) provided a total of about $36 million in Federal Employees' Compensation Act (FECA) benefits--health and other benefits--for Peace Corps volunteers who have returned from service abroad (volunteers). Specifically, DOL provided about $22 million in health care benefits for these volunteers in the form of reimbursements for medical expenses related to service-connected injuries and illnesses, and $13.8 million in other benefits, such as reimbursement for travel expenses incurred when seeking medical care. During this period, approximately 1,400 volunteers each year received these health care benefits under the FECA program. The most common types of medical conditions for which DOL provided reimbursements were mental, emotional, and nervous conditions; dental; other/nonclassified diseases; and infectious or parasitic diseases. 
These four medical conditions accounted for more than a quarter of all medical reimbursements for volunteers under FECA from 2009 through 2011. In general, neither DOL nor the Peace Corps use all available information in the four areas GAO reviewed to monitor access and quality of FECA benefits for volunteers. GAO found that the Peace Corps uses information in just one of the areas--volunteers' awareness of the FECA program; however, in general, neither agency uses information in the remaining three areas. These areas are (1) information on volunteers' knowledge of FECA program and application requirements, such as required medical documentation; (2) information on DOL's timeliness in reviewing FECA applications and reimbursing medical expenses, and on the level of customer satisfaction; and (3) availability of FECA-registered medical providers. By not using information available to the agencies, DOL and the Peace Corps are missing an opportunity to determine whether, or to what extent, volunteers face access and quality issues in the FECA program. For example, DOL and the Peace Corps may not be able to determine the extent to which there are limitations in the availability of FECA-registered providers for certain medical specialties. DOL and the Peace Corps each have certain responsibilities related to the provision of FECA benefits for eligible volunteers, and each has information that could be used for monitoring. From DOL's perspective, volunteers do not represent a large proportion of the overall FECA population. However, FECA is a relatively larger issue from the Peace Corps' perspective. The volunteers are a unique population compared to others who receive benefits under FECA, and the FECA costs associated with volunteers represent a growing portion of the Peace Corps' annual budget. 
Neither agency has all the information GAO reviewed, and the agencies generally do not work together to use available information to monitor the accessibility and quality of FECA benefits for volunteers. As a result, DOL and the Peace Corps are missing an opportunity to make use of the available information to help ensure the accessibility and quality of FECA benefits for volunteers. GAO recommends that the Secretary of Labor and the Director of the Peace Corps jointly develop and implement an approach for working together to use available agency information to monitor the access to and quality of FECA benefits provided to volunteers. Neither DOL nor the Peace Corps indicated whether or not they agreed with GAO's recommendation. Instead, the agencies provided additional context related to the provision of FECA benefits.
There are four types of HUBZones. Initially, the HUBZone Act of 1997 (which established the HUBZone program) identified qualifying areas as qualified census tracts, which are determined by the area’s poverty rate or household income; qualified nonmetropolitan counties, which are determined by the area’s unemployment rate or median household income; and lands within the external boundaries of an Indian reservation, based on the areas meeting certain criteria. In subsequent years, Congress expanded the criteria for HUBZones to add bases closed under various base closure acts and include counties in difficult development areas outside of the continental United States as part of the qualified nonmetropolitan counties. To be certified to participate in the HUBZone program, a firm must meet the following criteria: when combined with its affiliates, be small by SBA size standards; be at least 51 percent owned and controlled by U.S. citizens; have its principal office—the location where the greatest number of employees perform their work—located in a HUBZone; and have at least 35 percent of its employees reside in a HUBZone. HUBZone designations can change with some frequency. However, SBA’s communications to firms about programmatic changes, including changes to the HUBZone map (area designations), generally have not been targeted or specific to individual firms that would be affected by the changes. SBA updates the HUBZone designations regularly based on whether they meet statutory criteria (such as having certain income levels or poverty or unemployment rates). SBA generally uses data from other federal agencies to determine if areas still qualify for the HUBZone program. SBA reassesses the status of nonmetropolitan counties more frequently than it reassesses the status of census tracts. 
For example, SBA updates the nonmetropolitan county designations each January to incorporate median household income data generated from ACS and each May to incorporate updated unemployment rates from BLS. In contrast, SBA historically has reassessed the status of qualified census tracts at multiyear intervals, such as every 5 years, using the census tract designations that the Department of Housing and Urban Development (HUD) uses for another program. See appendix II for more information about the specific criteria for each type of HUBZone area. In 2001, Congress extended the eligibility of census tracts and nonmetropolitan counties that lost HUBZone eligibility because of changes in income levels or poverty or unemployment rates. These areas—labeled redesignated areas—remain eligible for 3 years after “the date on which the census tract or nonmetropolitan county ceased to be so qualified.” During the 3-year period, firms in those areas can continue to apply to and participate in the program and receive contracting preferences. In 2004, Congress revised the definition of redesignated areas to allow areas to retain eligibility for 3 years or until the public release of data from the 2010 decennial census, whichever was later. Consequently, all of the areas that were redesignated between 2001 and October 2008 received an extension of their 3-year redesignation period. With the release of 2010 Census data in October 2011, a large number of previously redesignated HUBZone areas that had received the extensions lost their designation, including 333 nonmetropolitan counties. In total, 2,396 of the more than 8,000 HUBZone certified firms at the beginning of fiscal year 2012 were decertified when these changes took effect because their principal office was no longer located in a HUBZone. (We discuss decertification in more detail later in this report.) 
The redesignations for 3,417 HUBZones will expire in 2015, largely because the redesignation periods for the approximately 3,400 census tracts that were redesignated in 2012 will end. Finally, redesignated areas also may requalify for the program during their extended eligibility period. For example, after receiving the most recent unemployment data from BLS, SBA may change a nonmetropolitan county's designation from redesignated to qualified. Qualified and redesignated areas generally are spread throughout the United States (see fig. 1). The map suggests that more qualified HUBZones are in the West and Southwest, but that is in part reflective of the larger geographic size of the nonmetropolitan counties in those regions. The map also suggests a higher concentration of redesignated HUBZones on the East Coast and in the Midwest. For information about the characteristics of the HUBZones and differences in economic conditions (for example, as indicated by poverty and unemployment rates) among redesignated, qualified, and nonqualified areas, see appendix II. According to our analysis, 83 percent of certified firms are located in a qualified area (see table 1). Most of the firms (578) that are located in a redesignated HUBZone are located in areas that will have their designations expire in 2015. When the redesignation status of an area expires, HUBZone firms in that area lose their program certification unless they relocate their principal office or, in some cases, their employees move to another HUBZone. Generally, SBA relies on information posted on its website to communicate with interested parties about the HUBZone program. For instance, the HUBZone website includes links to the HUBZone map, frequently asked questions, and guidance to help firms apply for and maintain their certification. Firms can use the map of HUBZone areas to determine if an address or a particular area is designated as a HUBZone.
More specifically, the website contains links to underlying data tables for the HUBZone map, and provides a potential applicant or program participant the ability to determine if a specific address is in a qualified HUBZone. Firms also can access a help desk by e-mail to get status information, help in resolving technical difficulties, or individualized assistance; obtain eligibility assistance through a twice-weekly interactive forum; and request a 15-minute appointment with a HUBZone analyst to discuss topics such as HUBZone policies and procedures, initial documents, or HUBZone applications. However, according to SBA officials, firms are primarily responsible for keeping themselves informed about the program so that they can remain eligible. In our June 2008 report, we found that SBA’s HUBZone map contained ineligible areas and had not been updated to include eligible areas. As a result, ineligible small businesses had participated in the program, and eligible businesses had not been able to participate. Consequently, we recommended that SBA take steps to correct and update the map used to identify HUBZone areas and implement procedures to ensure that the map would be updated with the most recently available data on a more frequent basis. In response to our recommendation, SBA modified its contract with its mapping vendor to enable more frequent updating of the HUBZone map (annually). SBA officials stated that the accuracy of the map is checked twice after each upgrade, first by the mapping vendor and second by an SBA employee. SBA’s communications to firms about programmatic changes, including changes to the HUBZone map (area designations), generally have not been targeted or specific to individual firms and may not have reached all affected firms. Since February 2013, SBA has used a broadcast e-mail (which simultaneously sends the same message to multiple recipients) to distribute information about the program, including changes to the HUBZone maps. 
According to SBA officials, the e-mail list initially included all certified firms, but firms certified since the list was created have not been automatically added to it. These firms and other interested parties must sign up (subscribe) to receive the e-mails through SBA's website, and as a result not all certified firms may have done so. Through fiscal year 2014, SBA sent 13 e-mails through the system, covering topics such as changes made to the maps or the expiration of redesignated nonmetropolitan counties. Of the 13 e-mails, 9 focused on map-related issues. For example, a September 2014 e-mail notified subscribers that redesignations for 102 nonmetropolitan counties would expire on October 1, 2014. SBA also noted that there were 109 firms with principal offices in a county with expiring status. The e-mail included a link to the HUBZone map and a spreadsheet that showed the status of every county in January and May 2014, and encouraged firms to check their area's status. However, the e-mail did not identify the firms with principal offices in the counties that would lose redesignated status or identify the specific counties. Furthermore, according to SBA officials, SBA began using a revised certification letter in October 2014 that includes links to the program's website and regulations, but unlike the previous certification letter that SBA sent to firms, it does not include information on whether the firm is in a redesignated area or when that status will expire. According to HUBZone program policy, the approval letter should include language notifying the newly qualified firm of the possible expiration of the HUBZone. According to SBA officials, the only time that a firm would be directly notified that it was located in a redesignated area would be when it received a notice of proposed decertification after the status of the area expired. (We discuss the certification process later in the report.)
Federal internal control standards require that management ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. In a May 2008 report, SBA's Office of Advocacy stated that although HUBZone staff had been described as helpful and informative, there did not seem to be any systematic outreach or promotion of the program beyond responding to inquiries. In October 2014, SBA announced the development of a new initiative (Destination: HUB) that is intended to promote and support HUBZone firms in federal contracting opportunities. The initiative will consist of three major components: first, an in-depth examination of successes and needs in the program; second, an analysis of best practices for successful public-private partnerships; and third, the launch of a broad grassroots educational initiative at the regional and national levels. SBA officials said that SBA does not include a list of firms in the e-mails it sends because it wants every firm to be proactive and check its eligibility status. According to SBA officials, SBA's responsibility is to evaluate a firm's initial compliance and its ongoing eligibility (through recertifications, program examinations, and protests); firms have a responsibility to ensure that they remain eligible for the program. But the combination of relatively frequent changes to HUBZone designations, SBA's generally nonspecific communications to firms as illustrated by its certification letter, and the incomplete addressee list for the broadcast e-mails have increased the possibility that not all certified firms affected by such changes receive information about the changes or are made aware in a timely fashion of any effects on their program eligibility.
We recognize that small businesses participating in the HUBZone program should be responsible for determining their eligibility, and they appear to have the ability to contact SBA to receive information specific to their situation. However, given the information SBA has at its disposal and the large number of HUBZone firms, SBA is in a position to better inform firms of changes affecting program eligibility. SBA revised its certification process in response to our June 2008 recommendation, and now requires firms to provide documentation and reviews this documentation to determine their eligibility for the HUBZone program. As we reported in 2008, at that time SBA relied on data that firms entered in the online application system and performed limited verification of the self-reported information. Although agency staff had the discretion to request additional supporting documentation, SBA did not have specific guidance or criteria for such requests. We recommended that SBA develop and implement guidance to more routinely and consistently obtain supporting documentation upon application. The agency agreed with our recommendation and, according to SBA officials, since fiscal year 2009 SBA has required all firms to provide supporting documentation for information they enter in SBA’s online application system. SBA then performs a full-document review on all applications as part of its initial certification process. The current initial certification process comprises multiple steps (see fig. 2). For instance, after firms verify that the information they entered was complete and accurate, administrative staff request supporting documents from the applicant, including a lease agreement, map of employees’ home addresses, personal and business tax returns, and payroll records. According to SBA officials, one staff member reviews and analyzes the documents provided and makes a recommendation on the application, and a second staff member reviews the analysis and recommendation. 
Once a recommendation has been made, the HUBZone program director performs a final review before making a final determination. At the end of the process, SBA sends a signed letter to the applicant stating either that the firm is certified, or the basis for denial if the firm did not qualify. See appendix IV for summary statistics related to SBA’s review of initial applications. Our review of 39 application files SBA received from October 1, 2012, through September 30, 2013, indicates that SBA conducts full documentation review of the applications submitted by firms seeking certification. For the 29 applications that were approved or denied, SBA staff prepared an analysis to support their recommendation to approve or deny the application. For 18 of the 29 applications, SBA staff followed up with the applicant to obtain additional documentation; 12 of these applicants provided the documents and were approved, and the remaining 6 were denied because they did not. Our review of the approval and denial letters relating to the 29 application files confirmed that the final determination and the effective dates entered into the system were the same. We also found that SBA has not updated its SOP since 2009 when it began making changes to its certification processes, but expects to do so by the end of March 2015. Based on SBA’s response to a recent OIG recommendation, the agency had planned to complete the update by September 2014. The current SOP was last updated in November 2007 and therefore does not contain information about the full document review. According to SBA officials, to guide staff reviewing HUBZone certification packages, SBA currently uses guidance it developed that is specific to addressing situations that may arise during certification reviews. For example, the guidance discusses how to designate an employee of a HUBZone firm as located at a principal office when a person works an equal amount of time in a principal office and a secondary office. 
The guidance also includes language that should be included in a final e-mail request to the applicant firm when the analyst determines that SBA needs to obtain clarification, additional documents, or both to complete its review of the application package. According to SBA officials, SBA had delayed updating the SOP because the agency intended to revise the HUBZone regulations first and then incorporate those changes into the SOP. HUBZone trade association representatives told us that SBA has not always clearly communicated information about its review process to program applicants. They stated that the lack of clarity about SBA’s certification review process likely could be solved through more education and communication. For example, they noted that after SBA changed the definition of an employee in May 2010 from an individual who was required to work 30 hours per week to an individual who worked 40 hours per month, the agency required that applicant firms provide documentation verifying that the employee worked during the previous month. However, according to the representatives, SBA then changed its interpretation to require that the employee worked during the period in which the application was submitted but did not communicate the change to the industry. According to SBA officials, SBA discusses the period that an employee must work in its frequently asked questions. While SBA defined an employee in its frequently asked questions, we did not find information related to the period that an employee must work to meet the eligibility requirement. SBA officials told us that the agency again reviews the eligibility of some firms—generally, already certified firms—by conducting site visits to a select number of firms every year. In June 2008, we reported that SBA rarely conducted site visits during program examinations to verify a firm’s information and recommended that the agency conduct more frequent site visits. 
In response to that recommendation, according to SBA officials, SBA now conducts site visits on 10 percent of its portfolio of certified firms every year. According to the officials, the site visits are based on established criteria (such as the amount of HUBZone contract awards the firm has received) or done on an as-needed basis in connection with the review of an initial certification application or after receipt of a protest. SBA officials told us that SBA uses a pass/fail system to describe the outcome of a site visit. In fiscal years 2013 and 2014, the officials told us that SBA selected 511 and 550 firms, respectively, for site visits. In fiscal year 2013, SBA noted that 81 percent of the firms passed the site visit and 19 percent failed. For fiscal year 2014, SBA noted that 88 percent of the firms passed and 12 percent failed. According to SBA officials, SBA sends a notice of proposed decertification to firms that fail a site visit. (We discuss decertification in the next section.) Once a firm is approved for participation in the HUBZone program, SBA may decertify the firm if SBA determines that the firm is no longer eligible for the program or if SBA is unable to verify the firm’s continuing eligibility. For example, firms in redesignated areas are proposed for decertification and are provided an opportunity to establish continued eligibility following the expiration of the area’s redesignated status. For instance, a firm could prove its continued eligibility by showing that it had relocated its principal office to a qualified HUBZone area. These firms are ultimately decertified if they do not respond to the notice of proposed decertification by indicating that they are still eligible. According to SBA officials, the decertification process typically consists of three steps. SBA e-mails a notice of proposed decertification to the firm and allows the firm 30 days to respond. 
If the firm does not respond, SBA sends the notice of proposed decertification to the firm by certified mail. If the firm does not respond to the certified notice within 30 days, SBA then decertifies the firm on the following day. Decertification also can occur in two other ways. A firm voluntarily can decertify if the firm determines that it no longer meets the program’s eligibility requirements. SBA also can immediately decertify a firm during the protest process if, according to SBA officials, SBA determines that the firm did not meet the eligibility requirements at the time of bid and award, or at the time of protest if the contract had not yet been awarded. From fiscal year 2010 through fiscal year 2013, SBA reported 124 protests of HUBZone contracts, of which 46 (37 percent) were dismissed, 41 (33 percent) were sustained, and 37 (30 percent) were denied. SBA noted that it decertified 4,660 firms from fiscal year 2010 through fiscal year 2013. The vast majority of decertifications occurred from fiscal years 2010 through 2012. According to SBA officials, in fiscal year 2009, before conducting a legacy portfolio review, SBA asked firms to confirm their compliance with the program requirements or to voluntarily withdraw from the program. The officials told us there were approximately 14,000 HUBZone firms that needed to validate their compliance with the program eligibility requirements. At that time, SBA notified the firms that the agency planned to conduct a full document review in 2009 or early 2010 to confirm compliance. According to SBA officials, many firms voluntarily withdrew from the program as a part of this effort. 
As shown in figure 3, our analysis of SBA data (for applications received from fiscal year 2010 through fiscal year 2013 for firms that were later decertified) showed that firms most frequently were decertified because their principal offices were not located in a HUBZone. SBA also decertified about 2 percent of firms that applied to the program during that period because they did not meet the 35 percent employee residency requirement. Representatives from the HUBZone trade association noted that the employee residency requirement of 35 percent can make it difficult for a firm to stay in compliance with the program’s regulations. For example, according to the representatives, initiating and obtaining a contract may take a long time and maintaining many employees on the payroll without a contract can become a major overhead expense for firms. Furthermore, for smaller firms, losing one or two employees could result in noncompliance with the residency requirement. If firms do not maintain enough HUBZone-resident employees on payroll, they eventually lose their certification. HUBZone recertifications once again have become backlogged. According to HUBZone regulations, firms wishing to remain in the program without any interruption must recertify their continued eligibility to SBA within 30 calendar days after the third anniversary of their date of certification and each subsequent 3-year period. In June 2008, we reported that many firms were in the program for more than 3 years without being recertified. We recommended that SBA establish a specific time frame for eliminating the backlog of recertifications and take the necessary steps to ensure that recertifications were completed in a more timely fashion in the future. In March 2009, we determined that SBA eliminated the backlog of recertifications by hiring contract staff, but had yet to implement necessary procedures to ensure that future recertifications were completed in a timely fashion. 
At that time, SBA officials stated that their ongoing business process reengineering would include an assessment of the recertification process. Subsequently, in July 2012 SBA provided data illustrating that the backlog had not recurred and that they had recently obtained approval to replace those contract staff with 10 full-time equivalent (FTE) staff who would perform all future recertifications. We believed at that time that these steps would address the intent of our recommendation. However, SBA did not hire the 10 staff that were to replace the contract staff because, according to SBA officials, part of its funding authority was rescinded in 2013. As a result, according to SBA officials, as of September 2014 they faced a 1-year backlog for recertifying firms—that is, the agency was still recertifying firms that had their 3-year anniversaries in 2013. SBA’s current backlog indicates that it does not have processes in place to ensure that recertifications are completed in a timely fashion. SBA’s current process for recertifying firms involves manual sorting of data from HCTS to identify firms whose 3-year anniversary is upcoming. Based on our analysis of SBA information for the most recent notification of firms, SBA notified almost 53 percent (375) of the firms past the deadline (that is, more than 30 days past the firms’ 3-year anniversary). We identified three firms that were last recertified in 2009. Similarly, one firm with which we spoke said it received the SBA notification 3 or more months after the 3-year anniversary. According to SBA officials, the recertification backlog is due in part to limitations with HCTS and resource constraints. According to SBA officials, the recertification module in HCTS was designed to electronically review and approve recertification applications submitted online and alert SBA when it identified significant differences in the information firms submitted for recertification, but has not worked as intended. 
As a result, the officials said that SBA has to manually identify firms that are due for recertification and only does so twice a year. According to agency officials, SBA began processing recertifications in batches in fiscal year 2013 because the agency lacked sufficient staff to do it more often. As a result of this batching schedule, in some instances SBA has been notifying firms almost 6 months after their 3-year anniversary date. The officials also told us that they have chosen not to allow firms to initiate the recertification process, which would enable firms to recertify in a more timely way. They said that having a process in which firms would submit e-mails within 30 calendar days after their 3-year anniversary date would be inefficient due to the additional steps required. For instance, officials said that they would have to manually sort through the e-mails and remove any firms that were not due for recertification. According to agency officials, SBA has requested funding in its fiscal year 2016 budget to enhance the current system. With this funding, SBA plans to investigate fixes for the recertification process. For example, these fixes may allow firms to submit recertification applications on their anniversary date and HCTS internal logic would ascertain compliance. For those that were noncompliant, the system could be configured to notify program staff that the recertification had to be reviewed or decertification proposed. However, it is not clear whether SBA will receive additional funding in fiscal year 2016 or changes can be made to HCTS that would allow the system to ascertain compliance with the program requirements. In the interim, SBA officials noted that the agency’s goal in 2015 is to notify firms and process the recertifications on a monthly basis. While SBA’s intention is to conduct more timely recertifications, its limited efforts since 2008 to prevent backlogs have not been successful. 
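The anniversary arithmetic behind this manual batch identification can be sketched as follows. The firm records and field names below are hypothetical; the 3-year cycle and 30-calendar-day response window come from the regulations described above.

```python
from datetime import date, timedelta

# Hypothetical firm records; HCTS field names are assumptions.
firms = [
    {"name": "Firm A", "certified": date(2011, 6, 15)},
    {"name": "Firm B", "certified": date(2008, 9, 10)},
    {"name": "Firm C", "certified": date(2013, 11, 20)},
]

def latest_anniversary(certified, today):
    """Most recent 3-year anniversary on or before today, if any."""
    cycles = (today.year - certified.year) // 3
    while cycles > 0:
        anniversary = certified.replace(year=certified.year + 3 * cycles)
        if anniversary <= today:
            return anniversary
        cycles -= 1
    return None

def recertification_status(certified, today):
    """Classify a firm relative to its recertification window."""
    anniversary = latest_anniversary(certified, today)
    if anniversary is None:
        return "not yet due"
    deadline = anniversary + timedelta(days=30)  # 30-calendar-day window
    return "past deadline" if today > deadline else "due"

today = date(2014, 9, 30)
for firm in firms:
    print(firm["name"], recertification_status(firm["certified"], today))
```

A firm-initiated variant would run the same check against a single firm's own record on its anniversary date rather than waiting for a twice-yearly batch sort.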
Until SBA makes it a priority to address ongoing challenges with its recertification process and can develop an effective approach to recertify firms in a timely manner, the agency will continue to face recurring backlogs and risks associated with ineligible firms participating in the program. SBA relies on firms’ attestations of continued eligibility and generally does not request supporting documentation during recertification. SBA officials told us that SBA currently only requires that firms submit a notarized recertification form stating that their eligibility information is accurate. However, internal control standards for the federal government call for ongoing monitoring of program activity and indicate that federal agencies should have control activities in place, such as verification, to ensure compliance with program requirements. According to SBA officials, they do not believe they need to request supporting documentation from recertifying firms because all firms currently in the program have undergone a full document review, either when the firm initially applied or through the full document review SBA completed of its legacy portfolio review in fiscal year 2012, as previously discussed. While SBA officials noted that they have the authority to ask for supporting documentation when recertifying a firm, they have not done so. The SOP notes that agency staff may request and consider additional information, but it does not specify what circumstances warrant a request for supporting documentation. As previously noted, SBA officials said that the recertification process is affected by resource constraints. The result, however, is that SBA lacks reasonable assurance that only qualified firms are allowed to continue in the HUBZone program and receive preferential contracting treatment. Moreover, while SBA’s review of its legacy portfolio represented a comprehensive effort, it was a one-time review and took place between fiscal years 2010 and 2012. 
The characteristics of firms and the status of HUBZone areas—the bases for program eligibility—can often change and need to be monitored. For example, the size of a firm and the residency location of its employees can change in 3 years. In addition, monitoring processes can take resource constraints into account. In March 2009, we found 10 of 19 firms we examined to be egregiously out of compliance with HUBZone program requirements and recommended that SBA consider incorporating a risk-based mechanism for conducting unannounced site visits as part of the screening and monitoring process. Such a risk-based approach could be applied to SBA’s recertification process to review and verify information from firms that appear to pose the most risk to the program. Since we reported on the HUBZone program in 2008, SBA has implemented a number of actions both to better ensure that only eligible firms participate in the HUBZone program and address weaknesses with internal controls that we and the OIG identified. The frequency of changes to HUBZone designations presents challenges for both firms and SBA. While SBA uses a number of mechanisms to communicate with firms about changes to HUBZone designations, these methods are insufficient as they are at a general level and not all firms may receive them. The communications are not specific to firms or HUBZone areas (for example, broadcast e-mails). Additionally, not all certified firms may be on the broadcast e-mail subscription list. As a result, not all firms may be aware of changes that would affect their continuing program eligibility. SBA has strengthened its internal controls for initial HUBZone certifications but missed opportunities to address weaknesses in controls related to the recertification of firms. The current recertification process has a number of issues that continue to limit its effectiveness. 
First, SBA again has a significant backlog in processing recertifications and has not implemented a sustainable process. Second, SBA requires firms to wait until being notified before beginning recertification, but routinely notifies firms too late for the firms to meet the deadline established in HUBZone regulations. Finally, the current recertification process requires no supporting documentation—in effect, firms self-certify. SBA officials noted that resource constraints prevent a full documentation review of all recertifications and that staff can request supporting documentation (but SBA does not have guidance on when staff should request or verify documentation). While the backlog and inability of firms to start the process are concerns, the greater issue is the lack of ongoing eligibility review and verification during recertification. To improve SBA’s administration and oversight of the HUBZone program and reduce the risk that firms that no longer meet program eligibility criteria receive HUBZone contracts, the Administrator of SBA should take the following two actions: Establish a mechanism to better ensure that firms are notified of changes to HUBZone designations that may affect their participation in the program, such as ensuring that all certified firms and newly certified firms are signed up for the broadcast e-mail system or including more specific information in certification letters about how location in a redesignated area can affect their participation in the program. Conduct an assessment of the recertification process and implement additional controls, such as developing criteria and guidance on using a risk-based approach to requesting and verifying firm information, allowing firms to initiate the recertification process, and ensuring that sufficient staff will be dedicated to the effort so that a significant backlog in recertifications does not recur. We sent a draft of this report to SBA for its review and comment. 
In response, SBA provided written comments, which are reproduced in appendix VI. SBA agreed with our recommendations and outlined steps it has taken or plans to take to address them. SBA stated that it has been analyzing existing resources to implement our recommendation on notifying firms of changes that could affect their participation in the program. SBA also said that by July 2015, it plans to implement an enhanced mechanism to better inform all certified and newly certified firms of HUBZone designation changes or, at least, explain how being in a redesignated area could affect program participation. In response to our recommendation on assessing the recertification process and implementing additional controls, SBA stated that it will assess the current process. In addition, SBA stated that by September 30, 2015, it plans to identify improvements to reduce the risk of fraud, waste, and abuse. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. This report examines the Historically Underutilized Business Zone (HUBZone) program of the Small Business Administration (SBA). More specifically, the report (1) describes HUBZone designations and how SBA communicates with interested parties about the program, and (2) examines SBA’s certification and recertification processes for firms, including the extent to which SBA has implemented procedures to address recommendations previously made to improve these processes. In addition, we present information about selected characteristics of HUBZones, effects of hypothetical changes to criteria on HUBZone designations, HUBZone application processing, and selected characteristics of HUBZone firms in appendixes II, III, IV, and V, respectively. 
To address both objectives, including how SBA notifies and communicates with interested parties about the HUBZone program, we reviewed previous GAO and SBA Office of Inspector General (OIG) reports; applicable statutes and regulations; and SBA documents, including Standard Operating Procedures, internal policy guidance for staff, process work guidelines, checklist of required documents, criteria used to select firms to undergo site visits, outreach and communication materials, goaling report, Congressional Budget and Performance report, and a new initiative that SBA created intended to boost HUBZones in the federal marketplace. We interviewed SBA staff and representatives from the HUBZone Contractors National Council—the HUBZone trade association. We also interviewed representatives from local economic development agencies and from HUBZone firms. We selected a purposive sample of 16 HUBZone firms. To ensure a range of HUBZone firms, our sample included 8 firms that were currently certified, 4 firms that had been denied entry into the program, and 4 firms that were once certified but had been decertified. Of the 16 firms, we interviewed 8, the findings from which cannot be generalized to the overall population of firms that have applied to the HUBZone program. To identify the firms, we relied on a multistage sampling approach: To select firms that were certified, we first selected four states, one from each of the four regions of the United States—West, Midwest, Northeast, and South. We used 2013 unemployment rates from the Bureau of Labor Statistics to select two states with higher unemployment rates and two states with unemployment rates lower than the U.S. average rate. To ensure a mix of rural and urban states, we used 2010 Census data and calculated the percentage of each state that was rural and picked some states that had higher concentrations of rural areas. 
We also picked some states that had a large number of HUBZone certified firms, according to the Dynamic Small Business Search (DSBS) database, to ensure we had a sufficient number of firms from which to select firms to interview. We selected one county from each of the four states based on the number of certified firms located in each county that had been awarded federal contracts and firms that had not been awarded contracts in fiscal years 2010 through 2013 based on Federal Procurement Data System-Next Generation (FPDS-NG) data. We also considered the number of contracts awarded to the firms and the dollar amount of the contracts in fiscal year 2013 based on FPDS-NG data. In addition, we selected the county from each state that had a mix of certified firms with and without contracts and that had a high number of contracts awarded to firms in the county in fiscal year 2013, as compared with other counties in the state. For one state, we selected the county with the second-highest number of contracts because it represented a redesignated nonmetropolitan area. Within the selected counties, we selected four certified firms that had been certified in 2009 or later and had been awarded the most contracts among firms in the county from fiscal years 2010 through 2013. Based on the oldest certification date, we also selected four firms that had not received any contracts. Finally, we used applicant data from SBA’s HUBZone Certification Tracking System (HCTS) to select four firms that were denied entry into the program and four firms that were decertified. We used the same four U.S. regions and randomly selected two firms from each region, one for each of the two categories. To examine how SBA designates HUBZones, selected characteristics of the areas, and how potential changes to designation criteria would affect HUBZones and firms, we reviewed applicable statutes and regulations, such as the HUBZone Act of 1997. 
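The final random-selection step described above, drawing one denied and one decertified firm from each region, can be sketched as a simple stratified draw. The applicant records below are hypothetical stand-ins for HCTS data; the region labels come from the report.

```python
import random

# Hypothetical HCTS applicant records; firm names and field labels are assumptions.
applicants = [
    {"firm": f"Firm {i}", "region": region, "outcome": outcome}
    for i, (region, outcome) in enumerate(
        (r, o)
        for r in ("West", "Midwest", "Northeast", "South")
        for o in ("denied", "decertified")
        for _ in range(5)  # five candidate firms per stratum
    )
]

random.seed(0)  # fixed seed so the draw is repeatable

selected = []
for region in ("West", "Midwest", "Northeast", "South"):
    for outcome in ("denied", "decertified"):
        stratum = [a for a in applicants
                   if a["region"] == region and a["outcome"] == outcome]
        selected.append(random.choice(stratum))  # one firm per stratum

# Eight firms in total: one denied and one decertified firm per region.
print(len(selected))
```

Stratifying before sampling guarantees that every region-by-category cell is represented, which a single random draw from the pooled list would not.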
We also reviewed prior GAO, Congressional Research Service, and SBA OIG reports on the HUBZone program. We accessed the list of qualified and redesignated areas from SBA’s HUBZone website and plotted them using MapInfo. We also downloaded the list of certified firms as of June 16, 2014, and plotted those firms into their corresponding HUBZones. According to SBA officials, firms may have more than one profile and address in DSBS and there is no way within the system to identify which address represents a principal office. Consequently, the numbers we report are based on the addresses in the DSBS profiles. We were unable to plot 573 firms in a HUBZone. For the firms we could plot, we matched these certified firms with federal procurement data from FPDS-NG to identify the number and amount of contracts that the firms received from fiscal year 2010 to fiscal year 2013. We used these data sources to identify characteristics of the different HUBZone types, such as the number of firms in the different HUBZone types as well as the contract dollars those firms received. To identify economic characteristics of the different HUBZone types, we reviewed selected economic indicators, such as the poverty and unemployment rates, and the median housing income and housing value for the different HUBZone types. We used data from the Census Bureau’s American Community Survey (ACS) and unemployment data from the Bureau of Labor Statistics (BLS). After reviewing related documentation or interviewing knowledgeable agency officials, we deemed the DSBS, FPDS-NG, Census Bureau, BLS, and SBA data sufficiently reliable for the purposes of describing the characteristics of HUBZones, participating firms, and their contracting. To determine the impact of hypothetical changes to HUBZone criteria, we reviewed county-level unemployment data from BLS. We applied different criteria to the data to determine the impact that the changes would have on the number of eligible counties, by state. 
For example, we used a 5- and 10-year average unemployment rate instead of the 1-year rate that SBA currently uses to determine the eligibility for nonmetropolitan counties. To determine the impact of the statutory 20 percent cap imposed on census tracts as part of the Low-Income Housing Tax Credit (LIHTC) program, we analyzed 2014 data from the Department of Housing and Urban Development (HUD) and applied the algorithm that HUD uses to determine if a census tract qualifies, and then stopped applying the criteria prior to the imposition of the 20 percent cap. We compared the number of areas that would qualify under both scenarios. To examine SBA’s certification and recertification processes for firms and the extent to which SBA addressed previous recommendations, we reviewed SBA’s certification and recertification processes, including its policies and procedures for certifying and monitoring firms. To test whether SBA had implemented procedures to help ensure that only eligible firms participate in the program, we reviewed a purposive sample of 39 application case files. To select these files, we used data from HCTS to identify the universe of applications SBA received from October 1, 2012, to September 30, 2013. We grouped these files into four outcome categories—approved, denied, withdrawn, and decertified—and randomly selected 10 from each category. We reviewed these case files and collected information using a data collection instrument (DCI) to gather information such as whether SBA staff prepared an analysis summary; analyzed information, including supporting documentation, to determine if the applicants met the program eligibility requirements; sent follow-up communications to request additional documentation; and performed a second review before making a final determination on the application. We also determined the date on which the application was received and the date of final determination. 
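The effect of substituting a multiyear average for the 1-year rate can be illustrated with a small sketch. The county rates and the qualifying threshold below are purely illustrative assumptions, not the statutory eligibility test or actual BLS data.

```python
# Hypothetical annual unemployment rates (percent), oldest year first.
rates = {
    "County A": [9.1, 8.7, 8.2, 7.9, 7.5, 7.0, 6.6, 6.3, 6.1, 5.8],
    "County B": [5.2, 5.4, 5.9, 6.8, 9.5, 9.1, 8.4, 7.6, 7.0, 6.5],
}
THRESHOLD = 7.0  # illustrative qualifying rate, not the statutory criterion

def qualifies(series, horizon):
    """Average the most recent `horizon` years and compare to the threshold."""
    window = series[-horizon:]
    return sum(window) / len(window) >= THRESHOLD

for county, series in rates.items():
    print(county, {h: qualifies(series, h) for h in (1, 5, 10)})
```

In this illustration, County A, whose rate has been falling, fails the 1- and 5-year tests but passes on a 10-year average, while County B's earlier spike keeps it eligible under both multiyear horizons. Shifts of this kind, tallied by state, are what the analysis of hypothetical criteria changes measured.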
The findings from this limited review of 39 case files cannot be generalized to the overall population of applications received in fiscal year 2013. We developed the DCI after reviewing SBA’s regulations, its description of the HUBZone current certification process, and the HUBZone policy guidance, document request checklist, and process work guidelines. Two GAO team members independently entered information from one sample case file using the DCI and compared the results. After the two team members agreed on the final fields to be used in the DCI, they entered information from the case files into the DCI. A third staff member verified the accuracy of the entries. Using HCTS data, we analyzed data about firms for which applications were received from fiscal years 2008 through 2013, by year of receipt, to determine (1) the number of applications approved, denied, or withdrawn; (2) the number of firms that had been recertified; (3) the number that had been decertified; (4) the length of time it took SBA to approve applications; and (5) the reasons firms were decertified. We also analyzed HCTS data to examine the characteristics of HUBZone applicant firms, including the industry codes and number of employees. A copy of the HCTS data was originally provided to GAO in Oracle format and was the most current version as of July 2014. These data were then read in using SAS software to allow for analysis. Before conducting the analysis, a GAO data analyst reviewed the data for inconsistencies and completeness. As a part of this work, inconsistencies related to the effective dates of different processes were discovered. We determined that the data were sufficiently reliable for the analysis we report by reviewing related documentation, interviewing knowledgeable officials, and electronic testing of the data, but there were internal inconsistencies that limited our reporting. 
Specifically, in some cases, there were conflicting outcomes or dates that appeared to be out of sequence or illogical. In particular, the dates associated with events in the certification review processes were found to be unreliable. To clarify these inconsistencies, GAO met with SBA and learned that they were related to manual data entry processes that had been required since fiscal year 2006 because of deficiencies discovered in the software related to the recertification module in HCTS. We concluded that while the actual outcomes were reliable enough to report, the data that relied on the effective dates of these outcomes were not sufficiently reliable to report. To identify the eligibility-related reasons for decertification, we analyzed the reason field contained in HCTS for applications received from 2010 to 2013 that were later decertified. Because a large percentage of these applications had “other” for their recorded reason, we conducted a review of the comments field to code for the possible other reasons. To conduct that review, the GAO data analyst did a string search of the text using keywords such as “35 percent” and “principal office,” identified one additional category, and assigned additional cases to existing reason categories. An analyst reviewed a subset of the coding to ensure accuracy. To determine whether SBA implemented procedures to help ensure that only eligible firms continue to participate in the program, we assessed SBA’s recertification process. We reviewed SBA’s regulations on recertification and information from HCTS relating to firms’ certification anniversary dates and SBA’s e-mail notification for firms to recertify. We assessed the reliability of the data on these processes by interviewing officials knowledgeable about the data and performing electronic data testing to detect errors in completeness and reasonableness. 
We determined that they were sufficiently reliable for the purpose of identifying the amount of time between a firm’s prior certification and SBA’s notice about recertification. We compared SBA’s certification and recertification processes with federal internal control standards for collecting documentation and verifying information. We conducted this performance audit from April 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. There are four different types of qualified HUBZone areas as defined by federal statute. Qualified census tracts. The qualified census tracts are the same as those determined by HUD for the LIHTC program. Under the current criteria, a qualified census tract is any census tract that is designated by HUD and in which, for the most recent year for which census data are available on household income in the tract, (1) at least 50 percent of households have income below 60 percent of the median gross income of the MSA (in metropolitan census tracts) or of all nonmetropolitan areas of the state (in nonmetropolitan census tracts) or (2) the poverty rate is at least 25 percent. The LIHTC program limits the number of qualified census tracts in any MSA or PMSA so that together they do not contain more than 20 percent of the total population of the MSA or PMSA. As such, it is possible for a tract to meet one or both of the criteria but not be designated as a qualified census tract. Qualified nonmetropolitan county. A qualified nonmetropolitan county is any county that was not located in an MSA at the time of the most recent census. 
To qualify, (1) the median household income for the county must be less than 80 percent of the nonmetropolitan state median household income, based on the most recent data available from the Bureau of the Census; or (2) the county’s unemployment rate must be not less than 140 percent of the average unemployment rate for the United States or for the state in which the county is located, whichever is less, based on the most recent data available from the Secretary of Labor. SBA uses a 5-year median income, as determined through the ACS. The most recent update reflected data from 2008-2012. The unemployment rate is derived from data released annually by BLS and is typically sent to SBA during May, June, or July. Additionally, nonmetropolitan counties can qualify if they include a difficult development area, as designated by HUD, within Alaska, Hawaii, or any territory or possession of the United States outside of the 48 contiguous states. Qualified Indian land. Lands within the boundaries of Indian reservations may qualify as a HUBZone area. The Bureau of Indian Affairs provides data delineating these areas. Qualified base closure area. HUBZone eligibility is extended for 5 years to lands within the external boundaries of a military installation closed through a privatization process under the authority of various base closure laws. The military base’s HUBZone eligibility commences on the effective date of the law (Dec. 8, 2004) if the military base already was closed at that time, or on the date of formal closure if the military base was still operational at that time. According to SBA officials, no HUBZone firms are located in these areas because it is very difficult for firms to meet the 35 percent residency requirement. 
Additionally, the SBA officials noted that, according to statute, the areas are only eligible for 5 years from the date of base closure, and it may take several years before the area is able to receive business leases, leaving applicant firms with only 1 to 2 years of eligibility. According to our analysis, about 2,600 of the approximately 5,200 certified firms were in a qualified or redesignated census tract, as of June 2014 (see table 2). Additionally, about 12 percent of the firms were located simultaneously in at least two types of HUBZones. Most commonly, firms were located in both a qualified census tract and qualified nonmetropolitan county (311 firms) or a qualified census tract and Indian land (107 firms). However, about 88 percent of HUBZones—including all the base realignment and closure areas—did not contain any certified firms (that we could geographically code). In fiscal year 2013, certified firms received, on average, about $789,000 through HUBZone contracts. Firms in qualified census tracts received about $1 million in contracts, with those in redesignated areas each receiving about $300,000 more. Firms in all redesignated areas were obligated almost $800 million in federal contracts in fiscal year 2013. Overall, 211 firms made up the top 10 percent in terms of the size of contracts received, while almost 4,300 made up the bottom 10 percent. As shown in figure 4, there are certified firms in all 50 states. In general, counties in the West had more certified firms, on average, than counties elsewhere in the United States. As shown in table 3, California, New York, and Texas had the largest number of qualified areas. California, Oklahoma, Texas, and Virginia all had more than 200 certified firms, with California having the most (477). Our analysis of the economic conditions of qualified and redesignated areas as of 2012 found that redesignated areas had, on average, economic conditions between those of qualified and nonqualified areas. 
For example, as shown in figure 5, qualified census tracts had poverty and unemployment rates of 32 percent and 14 percent, respectively, while nonqualified census tracts had poverty and unemployment rates of 11 percent and 8 percent, respectively. In contrast, the poverty rate in redesignated census tracts was 24 percent while the unemployment rate was 12 percent. As shown in figure 6, a similar pattern exists for qualified, redesignated, and nonqualified nonmetropolitan counties. As shown in table 4, our analysis of various economic indicators for qualified, redesignated, and non-HUBZone areas found that qualified areas had, on average, higher poverty and unemployment rates, and lower median household income and housing values, than either redesignated or non-HUBZone areas. Redesignated areas had, on average, economic indicators between those of qualified and nonqualified areas. As shown in figure 7, about 55 percent of qualified census tracts had poverty rates of 30 percent or more. In contrast, almost 76 percent of redesignated census tracts had poverty rates between 10 percent and 29.9 percent. Similarly, almost 72 percent of qualified census tracts had unemployment rates greater than 10 percent, whereas about 40 percent of redesignated areas and 70 percent of nonqualified areas had unemployment rates of less than 10 percent. Similarly, our analysis of various nonmetropolitan counties found that redesignated areas had, on average, economic indicators between those of qualified and nonqualified areas (see table 5). However, redesignated nonmetropolitan counties had a lower average median housing value than either qualified or nonqualified counties. As shown in figure 8, about 16 percent of qualified nonmetropolitan counties had poverty rates of at least 30 percent. In contrast, less than 1 percent of redesignated or nonqualified nonmetropolitan counties had a poverty rate of at least 30 percent. 
Similarly, 22 percent of qualified areas had an unemployment rate of at least 15 percent, while about 7 percent of redesignated counties and 1 percent of nonqualified counties had a similar level. We applied hypothetical changes to selected statutory criteria for designating HUBZones to illustrate how such changes could affect the number of eligible areas. As we reported in 2008, establishing new HUBZone areas could provide economic benefits to these new areas, but also could result in diffusion—decreased targeting of areas of greatest economic distress—by lessening the competitive advantage upon which small businesses may rely to thrive in economically distressed communities. As discussed earlier in this report, one of the ways in which a nonmetropolitan county can qualify as a HUBZone is based on its unemployment rate. More specifically, the unemployment rate for the nonmetropolitan county must not be less than 140 percent of the average unemployment rate for the United States or for the state in which the county is located, whichever is less, based on the most recent data available from the Department of Labor. Adjustments could make this definition more uniform across all of the states. Under the current definition, two counties in different states with the same unemployment rate would not necessarily both qualify as HUBZones, depending on the unemployment rate of the state in which they are located. In general, every county in a state with an unemployment rate less than the U.S. average would qualify as a HUBZone if its unemployment rate was at least 140 percent of the state’s average (even if it was less than the U.S. average). In contrast, counties in states with unemployment rates higher than the U.S. average must have an unemployment rate at least equal to 140 percent of the U.S. average to qualify as a HUBZone. We analyzed the impact on the number of nonmetropolitan counties that could qualify under a variety of adjustments to the current definition. 
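The statutory unemployment test, including the "whichever is less" comparison that produces the state-to-state differences described above, can be sketched as follows. The function and all data values are illustrative assumptions, not SBA's actual implementation.

```python
# Illustrative sketch (not SBA's actual code) of the statutory tests for a
# qualified nonmetropolitan county. All data values are hypothetical.

def county_qualifies(county_median_income, state_nonmetro_median_income,
                     county_unemp, state_unemp, us_unemp):
    # Income test: county median household income below 80 percent of the
    # state's nonmetropolitan median household income.
    income_test = county_median_income < 0.80 * state_nonmetro_median_income
    # Unemployment test: county rate at least 140 percent of the U.S. or
    # state average unemployment rate, whichever is LESS.
    threshold = 1.40 * min(us_unemp, state_unemp)
    unemployment_test = county_unemp >= threshold
    return income_test or unemployment_test

# Two counties with identical 6.0 percent unemployment, in states with
# different average rates (U.S. average assumed to be 5.3 percent):
in_low_unemp_state = county_qualifies(55_000, 60_000, 6.0, 4.0, 5.3)   # True
in_high_unemp_state = county_qualifies(55_000, 60_000, 6.0, 7.0, 5.3)  # False
```

The two calls at the end illustrate the asymmetry noted above: the same county unemployment rate can clear the 140 percent threshold in a low-unemployment state but not in a high-unemployment one.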
Current definition. Under the current definition, 424 nonmetropolitan counties would qualify as a HUBZone. Lowest state unemployment rate. If 140 percent of the lowest state unemployment rate (2.9 percent) were applied to all counties and states, almost 1,700 counties would qualify. Under this scenario, for example, Texas would go from 16 qualified nonmetropolitan counties to almost 150. Highest state unemployment rate. If 140 percent of the highest state unemployment rate (9.8 percent) were applied to all counties and states, there would be only 54 qualified nonmetropolitan counties nationwide. Thirty-four states would not have any qualified nonmetropolitan counties, while Kentucky and Mississippi would have the most, with 9 each. Applying U.S. average unemployment rate. If 140 percent of the U.S. average unemployment rate (7.4 percent) were applied uniformly to all counties and states, 301 nonmetropolitan counties would qualify. Average of all state unemployment rates. If 140 percent of the average of all state unemployment rates (6.8 percent) were used, about 450 counties would qualify. Under this scenario, states with relatively high unemployment rates would have additional nonmetropolitan counties (such as Georgia, which would have 17 additional qualifying counties), while states with relatively low unemployment rates would have fewer qualifying counties (such as Virginia, which would have 20 fewer). In addition to comparing all nonmetropolitan counties against the same unemployment rate, the qualifying criteria could be changed from the current standard (data from 1 year) to use a 5- or 10-year average unemployment rate to determine if a nonmetropolitan county qualified for the program. Such a change could minimize the wide variations that can occur by using the 1-year rates. For example, the average unemployment rates that equaled 140 percent of the U.S. unemployment rate ranged from 6.4 percent in 2006 and 2007 to 13.4 percent in 2010. 
Similarly, the difference between the states with the highest and lowest unemployment rates ranged from 6 percentage points in 2004 to about 14 percentage points in 2010. However, using a 5- or 10-year average unemployment rate would result in fewer counties qualifying as a HUBZone. Using a 5-year unemployment rate. If a 5-year average unemployment rate were used, 345 nonmetropolitan counties would qualify. With the change from 1- to 5-year rates, additional counties in 6 states would become eligible, while the number of eligible counties in 21 states would decrease. The net number of affected counties would range from 6 added to 15 eliminated. Using a 10-year unemployment rate. If a 10-year average unemployment rate were used, 377 nonmetropolitan counties would qualify. Under this adjustment, 10 states would have more counties eligible and 18 would have fewer. See figures 9 and 10 for a summary of the impact of each potential change on the number of qualified nonmetropolitan counties. As shown in table 6, the impact of the hypothetical changes on the number of qualified nonmetropolitan counties in the individual states varies. Currently, HUBZone-qualified census tracts are those census tracts designated by HUD. Statutory provisions governing the LIHTC program limit the number of qualified census tracts in any MSA or PMSA so that together they contain no more than 20 percent of the total population of the MSA or PMSA. Consequently, some census tracts that may qualify for the HUBZone program based on the area’s median household income or poverty rate might not be designated as qualified census tracts because of the population cap, and are therefore not included in the HUBZone program. Our analysis of data HUD used for its 2014 designations found that about 2,400 more census tracts would qualify as HUBZone areas if the 20 percent cap were not in place, an increase of 15 percent from the current number of qualified tracts. 
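The mechanics of that cap analysis can be sketched as follows. This is an illustrative sketch, not HUD's actual algorithm; in particular, the order in which qualifying tracts are retained under the cap is a simplifying assumption, and the tract data are hypothetical.

```python
# Illustrative sketch of the two statutory criteria for a qualified census
# tract and the effect of the LIHTC 20 percent population cap. The ordering
# rule used under the cap (highest poverty first) is an assumption.

def meets_criteria(tract):
    # (1) at least 50 percent of households below 60 percent of area median
    # gross income, or (2) a poverty rate of at least 25 percent.
    return (tract["pct_low_income_households"] >= 50.0
            or tract["poverty_rate"] >= 25.0)

def designate(tracts, msa_population, cap=0.20):
    # Keep qualifying tracts, highest poverty first (assumed ordering),
    # until their combined population would exceed the cap.
    designated, running_pop = [], 0
    for t in sorted((t for t in tracts if meets_criteria(t)),
                    key=lambda t: t["poverty_rate"], reverse=True):
        if running_pop + t["population"] > cap * msa_population:
            break
        designated.append(t)
        running_pop += t["population"]
    return designated

tracts = [
    {"pct_low_income_households": 60, "poverty_rate": 32, "population": 4000},
    {"pct_low_income_households": 30, "poverty_rate": 27, "population": 4000},
    {"pct_low_income_households": 10, "poverty_rate": 8,  "population": 4000},
]
capped = designate(tracts, msa_population=20_000)             # 1 tract kept
uncapped = designate(tracts, msa_population=20_000, cap=1.0)  # 2 tracts kept
```

Comparing the two calls isolates the cap's effect: the second tract meets the poverty criterion but is shut out once designated tracts reach 20 percent of the MSA's population, which is the comparison the analysis above made.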
Eight states or territories would have at least 20 percent more qualified areas if all eligible tracts were included. According to our analysis of data for applications received by SBA during fiscal years 2008-2013, the number of applications submitted by firms declined significantly from 2009 through 2013 (from a high of about 4,500 in 2009 to under 1,500 in 2013). Since 2009, the application approval rate also declined. For example, SBA approved about 32 percent of the approximately 4,000 applications submitted in 2010 (shortly after the new process was implemented), while it approved almost 53 percent of the approximately 3,000 applications submitted in 2008 under the previous process (see table 7 and fig. 11). In general, a higher percentage of applications have been withdrawn since SBA implemented the revised process—more than 50 percent of applications for every year except 2013. According to SBA officials, firms can withdraw an application, or SBA can close it if it believes that the firm will not meet program requirements. According to SBA officials, SBA’s goal is to process an application within 90 days. Our analysis of SBA data found that it took SBA more than 90 days to process most of the applications approved since fiscal year 2008. As shown in table 8, of the applications SBA received in fiscal years 2008-2013, about 27 percent were processed in 90 or fewer days while 73 percent took more than 90 days to process. SBA reported that in fiscal years 2013 and 2014 it took an average of 103 and 123 days, respectively, to process any application regardless of outcome. According to the officials, SBA has completed a review of its certification process and plans to implement changes to improve and more accurately reflect its processing time. For example, according to SBA officials, SBA has been considering requiring that the program analyst, instead of administrative staff, request supporting documentation from the applicant firms. 
Furthermore, the official explained that SBA has been considering changing when it starts calculating the processing time. Currently, SBA begins to calculate the processing time when a high-ranking official from the firm electronically validates the application. However, SBA cannot begin its review until it receives supporting documents from the firm, which can take weeks. Consequently, SBA has been considering changing the start date to when it receives the supporting documents. According to our analysis of SBA data for approved applications submitted during fiscal years 2008-2013, HUBZone firms vary in size and in the types of services and products provided. Number of employees. The number of employees at HUBZone firms ranged from 1 to 496, with a median of 4 employees for all years except 2009 and 2010, in which the median was 5. Industries in which firms operated. Table 9 lists the top 10 industries based on the number and percentage of approved HUBZone firm applications. In addition to the contact above, Harry Medina (Assistant Director), Daniel Newman (Analyst-in-Charge), Pamela Davidson, Cynthia Grant, Julia Kennon, Yola Lewis, John McGrail, John Mingus, Marc Molino, Caroline Neidhold, Gloria Proa, and Barbara Roesmann made key contributions to this report.
Small firms participating in SBA's HUBZone program received about $4 billion in federal contracts in fiscal year 2013. The program's purpose is to stimulate economic development in economically distressed areas. A certified HUBZone firm is eligible for federal contracting benefits, including limited competition awards such as sole-source and set-aside contracts. GAO previously reported on weaknesses in SBA's internal controls and problems with ensuring that only eligible firms participate in the program. GAO was asked to examine the steps SBA has taken to address these issues. This report (1) describes HUBZone designations and how SBA communicates with interested parties about the program, and (2) examines SBA's certification and recertification processes for firms. To address these objectives, GAO analyzed statutory provisions, SBA documents, and federal procurement data. GAO also interviewed SBA and representatives from applicant firms (certified, decertified, and denied) and local economic development agencies located in four HUBZones selected for geographic diversity. The Small Business Administration (SBA) designates economically distressed areas as Historically Underutilized Business Zones (HUBZone), based on demographic data such as unemployment and poverty rates, but lacks an effective way to communicate program changes to small businesses. The designations apply to areas such as nonmetropolitan counties and census tracts and are subject to periodic changes as economic conditions change. Small businesses in HUBZones can apply for certification to participate in the program. HUBZones that lose their qualifying status due to changes in economic conditions become “redesignated” and undergo a 3-year transition period. In 2015, 3,417 redesignated areas will lose their HUBZone status. There are 578 firms in those areas (see table below). 
SBA relies on website updates and broadcast e-mails to inform firms about program changes; consequently, not all affected firms may learn about the changes before they are decertified. SBA has initiated efforts to improve notification of program changes, but its communications may not reach all affected firms and do not specify when the status of areas might change or which firms are located in those areas. As a result, some firms in the program lack timely awareness of information that could affect their eligibility. SBA has addressed weaknesses in its certification process that GAO previously identified, but lacks key controls for its recertification process. For instance, to receive certification, SBA now requires all firms to provide documentation to show they meet the eligibility requirements. SBA also conducts site visits at selected firms based on, for example, the amount of federal contracts they received. However, SBA does not require firms seeking recertification to submit any information to verify their continued eligibility or provide guidance on when staff should request or verify documentation for recertification. Instead, it relies on firms attesting that they continue to meet the program's eligibility requirements. By not routinely requiring and reviewing key supporting documentation from recertification applicants, SBA is missing an additional opportunity to reduce the risk that ineligible firms obtain HUBZone contracts. SBA should (1) establish a mechanism to better ensure firms are notified of changes that could affect their participation in the program, and (2) assess the recertification process and implement additional controls, such as criteria and guidance for a risk-based approach to requesting and verifying information during recertification. SBA agreed with both recommendations.
The authorizing legislation for the federal crop insurance program states that the purpose of the program is to promote the national welfare by improving the economic stability of agriculture. According to RMA’s mission statement, the agency provides risk-management tools, such as crop insurance, to strengthen the economic stability of agricultural producers and rural communities. Specifically, RMA’s fiscal years 2011 to 2015 strategic plan states that the agency’s goal for the federal crop insurance program is that it will provide a broad-based financial safety net for producers. The fiscal years 2011 to 2015 strategic plan includes the agency’s strategic goals and core values in support of its mission. These goals are, among other things, to continue to expand participation, ensure actuarially sound products, safeguard the integrity of the program, and to do so as responsible stewards of taxpayer dollars and with transparency. Through the federal crop insurance program, farmers insure against losses on more than 100 crops. These crops include the five major crops (corn, soybeans, wheat, cotton, and grain sorghum), as well as nursery crops and certain fruits and vegetables. According to an RMA document, the amount of federal crop insurance purchased based on planted acres is relatively high in comparison with the past for the five major crops. In 2012, corn acreage was 84 percent insured, soybean acreage was 84 percent insured, wheat acreage was 83 percent insured, cotton acreage was 94 percent insured, and grain sorghum acreage was 74 percent insured. As shown in table 1, the federal government’s crop insurance costs generally increased for fiscal years 2003 through 2013. A widespread drought and crop losses in crop year 2012 contributed to the spike in government costs to $14.1 billion in fiscal year 2012. In crop year 2013, weather conditions were more favorable, so government costs were lower than in fiscal year 2012. 
According to an April 2014 CBO estimate, for fiscal years 2014 through 2023, program costs are expected to average $8.9 billion annually. The 2014 farm bill included a provision that affects the dollar value that a farmer can insure when the farmer’s county has experienced substantial crop losses in previous years. RMA uses the actual production history (APH)—4 to 10 years of historical crop yields—to establish a farmer’s insurance guarantee. Existing law before the 2014 farm bill allowed a farmer to replace a low actual yield in the APH with a yield equal to 60 percent of the historical county crop yield. The 2014 farm bill enhanced this provision by allowing farmers to exclude without replacement any recorded or appraised yield from the APH calculation if the average crop yield in the county for any particular year is less than 50 percent of the 10-year county average. According to a USDA document, this provision will provide relief to farmers affected by severe weather, including drought, by allowing them to have a higher approved crop yield. In general, RMA will set increased premium rates for farmers who choose to use this option, meaning the subsidy provided by the federal government will increase. CBO estimated that this provision change will cost $357 million over the 10 years from fiscal year 2014 through fiscal year 2023. The government’s crop insurance costs are substantially higher in areas with higher crop production risks than in other areas. From 2005 through 2013, government costs per dollar of crop value in areas with higher crop production risks were over two and a half times the costs in other areas. However, RMA does not monitor and report on the government’s crop insurance costs in these higher risk areas. According to an RMA official, RMA’s county target premium rates are the best available measure of crop production risks. In the 20 percent (510) of U.S. 
counties with the highest average county target premium rates, these rates ranged from 20 percent to 83 percent, with a median rate of 25 percent. In comparison with other types of property and casualty insurance, 25 percent is a relatively high premium rate. For example, at 25 percent, the annual homeowner’s insurance premium on a house valued at $400,000 would be $100,000. The remaining 80 percent (2,044) of U.S. counties had lower average county target premium rates. Those rates ranged from 0.6 percent to nearly 20 percent, with a median rate of 9 percent. Figure 1 shows counties organized in groups of 20 percent based on average county target premium rates, with the darker areas representing counties with higher average county target premium rates. The color-shaded counties represent all 2,554 counties that had county target premium rates for at least one of the five major crops. Figure 2 shows the riskiest 20 percent of counties (510) in terms of average county target premium rates. These 510 higher risk counties are color-shaded on the basis of their 2013 premium dollars to show which counties purchased the most crop insurance. The Great Plains, which has areas with relatively high drought risk, had a large portion of the higher risk counties’ premium dollars. Figure 3 compares the estimated government crop insurance costs per dollar of expected crop value for the five major crops in the 510 higher risk counties with the costs in the 2,044 other U.S. counties from 2005 through 2013. Government costs vary from year to year depending on weather-caused crop losses, crop prices, and farmers’ decisions about how much insurance coverage to purchase. To control for variations in crop prices and farmers’ purchase decisions, and to normalize the costs for higher risk counties and lower risk counties while still reflecting weather-caused crop losses, we expressed the estimated government costs in relation to expected crop value. 
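That normalization, using the reported 9-year averages for higher and lower risk counties (14 and 5 cents of government cost per dollar of expected crop value), can be sketched as follows; the two-farm comparison mirrors the report's own $1 million example.

```python
# Sketch of expressing government crop insurance costs per dollar of
# expected crop value, the normalization used in the analysis above.

def cost_per_dollar(gov_cost, expected_crop_value):
    return gov_cost / expected_crop_value

expected_value = 1_000_000  # two farms with equal expected crop value

higher_risk_cost = 14 * expected_value / 100  # 14 cents/dollar -> $140,000
lower_risk_cost = 5 * expected_value / 100    # 5 cents/dollar  -> $50,000

ratio = higher_risk_cost / lower_risk_cost    # 2.8, i.e., over 2.5 times
```

Holding expected crop value fixed makes the comparison independent of crop prices and coverage decisions, so the 2.8x ratio reflects the difference in production risk between the two groups of counties.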
As shown in figure 3, the costs in higher risk counties were substantially greater. Over the 9-year time frame, government costs averaged 14 cents per dollar of expected crop value in the higher risk counties and 5 cents per dollar in the other counties. For example, if two farms each had an expected crop value of $1 million, the higher risk farm would have had an average annual government cost of $140,000, and the lower risk farm would have had an average annual government cost of $50,000. In 2013, the higher risk counties had a government cost of 17 cents per $1 of expected crop value, 3 cents higher than the average during the time frame, and the other counties had a government cost of 5 cents per $1 of expected crop value, the same as the time frame average. RMA implemented changes to premium rates in 2014, decreasing some rates and increasing others, but our analysis of RMA data shows that, for some crops, RMA’s higher risk premium rates may not cover expected losses. RMA made changes to premium rates from 2013 to 2014, but its practice of phasing in changes to premium rates over time could have implications for actuarial soundness. Further, many premium rates in areas with higher production risks were lower than they should have been to cover expected losses, and RMA’s increases to these premium rates were not as high as they could have been under the law to fully cover expected losses. We found that RMA adjusted higher risk county base premium rates and county target premium rates from 2013 to 2014 for the five major crops. The revisions in premium rates were in response to a 2010 study of its methodology and its periodic reviews of crop loss history. The changes included increases and decreases of higher risk county base premium rates and county target premium rates for each of the five major crops. 
On average, RMA’s various changes to premium rates from 2013 to 2014 resulted in decreases to county base premium rates and county target premium rates for corn and soybeans and increases for grain sorghum. These changes also represented an increase in the percentage of county base premium rates that were aligned with county target premium rates for corn, soybeans, cotton, and grain sorghum, but not for wheat. RMA has indicated in agency documents that it phases in new rates, especially those that require an increase, to keep premiums stable and provide farmers with predictable rates. For example, in a 2013 document, the agency reported that it planned to slowly phase in changes to county base premium rates to mitigate the impact of a 2012 drought. However, phasing in changes to premium rates can have implications for improving actuarial soundness. For example, USDA’s Office of Inspector General (OIG) reported in 2005 that, from crop years 2000 through 2003, when cotton crop losses were high relative to premiums, premium rates for cotton were decreased, unchanged, or increased only moderately and, in these same 4 years, premiums were not sufficient to cover losses. An RMA official told us that RMA uses judgment when changing county base premium rates, factoring in the agency’s goal of maintaining stability for farmers in its decisions. When county base premium rates are lower than county target premium rates, RMA is required by statute to limit annual increases in premium rates to 20 percent of what the farmer paid for the same coverage in the previous year. However, RMA uses discretion in deciding whether to raise rates by the full 20 percent or by a lesser amount (as indicated by its practice of phasing in rate changes). 
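The interaction between this 20 percent statutory limit and the gap between base and target rates can be sketched as follows, using the nonirrigated cotton example from our analysis (a 26 percent base rate and a 32 percent target rate). The functions are illustrative, not RMA's rating methodology.

```python
# Sketch of the gap between a county base premium rate (what RMA charges)
# and the county target premium rate (what it should charge), and of the
# statutory limit of a 20 percent annual increase.

def pct_difference(base_rate, target_rate):
    # Percentage difference, measured relative to the base rate.
    return (target_rate - base_rate) / base_rate * 100

def alignable_in_one_year(base_rate, target_rate, cap_pct=20.0):
    # True if raising the base rate by at most 20 percent reaches the target.
    return pct_difference(base_rate, target_rate) <= cap_pct

# Nonirrigated cotton example: base 26 percent, target 32 percent.
gap = pct_difference(26, 32)              # about 23 percent
one_year = alignable_in_one_year(26, 32)  # False: the gap exceeds 20 percent
```

This is why a 6-percentage-point shortfall corresponds to a 23 percent difference, and why such a rate could not be fully aligned with its target in a single year even if RMA used the entire statutory allowance.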
Based on our analysis, for the higher risk premium rates, half of county base premium rates for corn, cotton, and grain sorghum, and nearly half of county base premium rates for wheat, are lower than the county target premium rates. In contrast, for the lower risk premium rates, most of the county base premium rates for corn, cotton, grain sorghum, and wheat meet or exceed county target premium rates. Figure 7 shows the percentage of county base premium rates that meet, exceed, or are lower than county target premium rates in 2014. We calculated percentage differences between county base premium rates and county target premium rates. This provided a measure of the gap between the premium rate RMA charges a farmer (the county base premium rate) and the premium rate RMA should charge a farmer (the county target premium rate). For example, for nonirrigated cotton, one county had a county base premium rate of 26 percent and a county target premium rate of 32 percent. Thus, the county base premium rate was 6 percentage points less than the county target premium rate; however, the percentage difference was 23 percent. For most higher risk premium rates where the county base premium rates were lower than the county target premium rates, the higher risk county base premium rates were within 20 percent of the county target premium rates in 2014, meaning RMA could fully align the rates in a single year. As shown in table 2, across the major crops, a larger percentage of county base premium rates are lower than county target premium rates by 20 percent or more for higher risk premium rates as compared to the lower risk premium rates. Based on our analysis, from 2013 to 2014, RMA changed some county base premium rates by the full 20 percent allowed by the law. However, we also found that from 2013 to 2014, RMA did not raise county base premium rates as high as the law allows for many of the higher risk premium rates. 
For example, as shown in table 3, RMA made a lesser adjustment in about half of the county base premium rates for corn and cotton, and nearly half for grain sorghum that required a change of 20 percent or more to either meet or move closer to the county target premium rate. Table 3 shows the percentage of premium rates where a change of at least 20 percent was necessary to move the county base premium rate closer to the county target premium rate and where RMA did not use the full 20 percent authorized in statute. An RMA official told us that the agency strives for actuarial soundness not only nationwide but also at the county and crop level. Without sufficient increases to premium rates, where such increases are necessary, RMA’s premium rates may not cover expected losses and may not be as high as they could be under the law, which may have implications for the actuarial soundness of the program. Among the higher risk premium rates, if the county base premium rates and the county target premium rates were identical, the federal government’s total program costs in these areas would be lower because more premium dollars would be collected. For example, in analyzing data on premium dollars for 2013, our analysis showed that had the county base premium rates been aligned with the county target premium rates in higher risk counties, the federal government could have potentially collected tens of millions of dollars in additional premiums. However, the federal government’s total program costs would not be reduced by the same amount as the additional premiums. The amount of the premium that the federal government provides on behalf of farmers (premium subsidy), about 62 percent, on average, would increase, but the portion of the premium that farmers pay would also increase. Thus, the farmer-paid portion of the additional premiums would reduce the government’s costs. 
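The offsetting effect of the premium subsidy can be illustrated with simple arithmetic (a hedged sketch using the report’s roughly 62 percent average subsidy share; the dollar figures below are hypothetical):

```python
SUBSIDY_SHARE = 0.62  # average share of premium the government pays on farmers' behalf

def government_savings(additional_premiums):
    """Net reduction in government cost from additional premiums collected.

    Only the farmer-paid share of a premium increase flows back to the
    government; the subsidized share is a cost the government itself bears.
    Illustrative only, using the report's ~62 percent average subsidy rate.
    """
    farmer_share = 1 - SUBSIDY_SHARE
    return additional_premiums * farmer_share

# A hypothetical $10 million in additional premiums would yield roughly
# $3.8 million in net government savings under this average subsidy share.
savings = government_savings(10_000_000)
```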
Also, when county base premium rates are lower than county target premium rates, farmers’ production decisions may not be based on the true cost of their risk of loss due to weather-related events, such as drought, and the federal government does not have information about the full amount of premium dollars it should collect from farmers. Ensuring that the federal government has information about the full amount of premium dollars it should collect from farmers would be an activity consistent with RMA’s core values. Federal crop insurance plays an important role in protecting farmers from losses from natural disasters and price declines, and the federal crop insurance program has become one of the most important programs in the farm safety net. RMA has overall responsibility for administering the program, including controlling costs. With increasing budgetary pressures, it is critical that federal resources are targeted as effectively as possible. One of USDA’s strategic objectives is to maximize the return on taxpayer investment in the department through enhanced stewardship activities of resources and focused program evaluations. Such evaluations could include an analysis of the government’s crop insurance costs in higher risk areas, where, as our analysis found, government costs are substantially higher than in other areas. However, RMA does not monitor and report on the government’s crop insurance costs in higher risk areas to identify potential cost savings, which would be consistent with USDA’s strategic objective. Without additional information from RMA on the government’s crop insurance costs in higher risk areas, Congress may not have all the information it needs to make future assessments of the crop insurance program’s design and costs. In implementing premium rates, RMA seeks to balance its goals for participation and ensuring stability for farmers with maintaining an actuarially sound program. 
RMA updates its premium rates periodically, but there are continuing gaps between county base premium rates and county target premium rates. RMA has the ability to make changes to more quickly achieve greater actuarial soundness at the county and crop level but is not always doing so for areas with higher production risks. Without sufficient increases to premium rates, where applicable, RMA may not be taking all the actions available to achieve greater actuarial soundness. Additionally, moving to ensure that more county base premium rates meet county target premium rates will provide more information about the full costs to the federal government for insuring farmers in higher risk areas, consistent with the core value in RMA’s fiscal years 2011 to 2015 strategic plan, and could also save federal funds. To better inform Congress in the future about crop insurance program costs, reduce present costs, and ensure greater actuarial soundness, we recommend that the Administrator of the U.S. Department of Agriculture’s Risk Management Agency take the following two actions: Monitor and report on crop insurance costs in areas that have higher crop production risks. As appropriate, increase its adjustments of premium rates in areas with higher crop production risks by as much as the full 20 percent annually that is allowed by law. We provided USDA with a draft of this report for review and comment. We received written comments from the RMA Administrator. These comments are summarized below and reproduced in appendix III. In these comments, RMA disagreed with our first recommendation and agreed with our second recommendation. RMA stated that, consistent with the second recommendation, it will continue to revise premium rates in an appropriate, prudent, and actuarially sound manner, taking proper account of current rates and premium rate targets consistent with generally accepted actuarial practices. 
In its written comments, RMA disagreed with our first recommendation to monitor and report on crop insurance costs in areas that have higher crop production risks and said it currently provides crop insurance data that have all the information necessary to determine crop insurance costs in all areas. RMA’s website provides some information—such as county-level data on premiums, premium subsidies, loss claim payments, and loss ratios—that enables others to do some analysis of crop insurance costs. However, that information is not complete for the purpose of analyzing the government’s costs and is not organized in a way that facilitates an understanding of the government’s costs in higher risk areas. For example, regarding data on loss ratios (i.e., loss claim payments divided by premiums) that are on RMA’s website, a recent article by an agricultural economist from the University of California notes that this loss ratio information for crop insurance (1) is not informative about and misrepresents the actuarial exposure borne by the program and (2) by excluding administrative expenses, understates the extent of public expenditure on the program. We agree with this assessment. Furthermore, placing data on a website does not constitute the monitoring of crop insurance costs. We continue to believe RMA can and should do more to monitor and report on crop insurance costs in higher risk areas, where we found government costs to be substantially higher than in other areas. As we said in this report, without additional information from RMA on the government’s crop insurance costs in higher risk areas, Congress may not have all the information it needs to make future assessments of the crop insurance program’s design and costs. In addition, RMA commented on our analysis of the government’s cost of the crop insurance program in higher risk areas. RMA said it has developed a definition of high-risk land, mapped out these areas, and applied significant premium surcharges. 
RMA said our definition of what we deem to be “higher risk areas” is much broader. RMA defines “high-risk land” as acreage with identifiable physical limitations, such as floodplains and high sand content soils. Our identification of higher risk areas (i.e., the 20 percent of counties that had the highest weighted average county target premium rates) enabled us to broadly assess crop insurance costs, and we believe this approach, which we discussed with RMA officials, was consistent with our purpose. In its comment letter, RMA said our use of crop insurance costs (or benefits to farmers) per dollar of expected crop value “appears to exaggerate the difference in program costs in higher-risk areas versus other areas, or at least masks some important details.” To present another perspective, RMA compared corn premium subsidies per acre for two states that had a large number of higher risk counties with two states that had no higher risk counties and stated that the premium subsidies per acre were similar amounts. We believe our use of a dollar-based measure (i.e., premium subsidies per dollar of expected crop value) is more appropriate than a physical measure (i.e., acres) for comparisons between costs and farmer benefits in higher risk areas and other areas. This is consistent with the methodology of a 2013 article by agricultural economists from The Ohio State University and the University of Illinois that compared net farm insurance payments using a dollar-based measure. Moreover, the use of a dollar-based measure is consistent with property insurance methods, which are based on the value of the property being insured. RMA stated that coverage levels were another offsetting effect, noting that growers in higher risk areas tend to choose lower coverage levels than in other areas, because higher premium rates make higher coverage less affordable. RMA appears to be suggesting that our analysis overlooks this difference, which is not true. 
As explained in our report, one of the reasons that we expressed estimated government costs in relation to expected crop value was to control for variations in farmers’ purchase decisions. Farmers’ decisions in selecting coverage levels vary between higher risk areas and other areas. According to our analysis, in 2013, farmers in higher risk counties chose to insure, on average, 67 percent of their expected crop value, while farmers in the other counties chose to insure, on average, 76 percent of their expected crop value. Thus, we used expected crop value—which is not affected by coverage levels—rather than insured crop value so that our analysis would not be distorted by differences in coverage levels. RMA agreed with our second recommendation that, as appropriate, RMA increase its adjustments of premium rates in areas with higher crop production risks by as much as the full 20 percent annually that is allowed by law, saying it mirrors how premium rate adjustments are currently administered. However, RMA stated it disagreed with our assessment of the extent to which premium rates need to be adjusted to the full amount allowed by statute and that adjusting premium rates fully to changes in premium rate targets would undercut the basic purpose of insurance—to provide financial stability. We continue to believe RMA’s adjustment of premium rates should be consistent with insurance principles and the statutory directive to set premium rates that improve actuarial soundness. Furthermore, in its discussion of premium rates in “higher-risk” areas, RMA states that the report makes certain assumptions about premium rate targets in high risk areas that are not completely accurate and do not necessarily result in improved actuarial soundness. RMA further states that following each random variation to its fullest can subject growers to a roller-coaster ride of ups and downs in their premiums. 
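The reason expected crop value controls for coverage-level differences can be shown in a brief sketch (illustrative values; the $100,000 expected value is hypothetical, while the 67 and 76 percent average coverage choices are the figures reported above):

```python
def insured_value(expected_crop_value, coverage_level):
    """Insured crop value is the expected crop value scaled by the farmer's
    chosen coverage level. Expected crop value itself is unaffected by that
    choice, which is why the report uses it as the denominator."""
    return expected_crop_value * coverage_level

# Average coverage choices reported for 2013, applied to a hypothetical
# $100,000 of expected crop value:
higher_risk_insured = insured_value(100_000, 0.67)  # $67,000 insured
other_insured = insured_value(100_000, 0.76)        # $76,000 insured
```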
RMA presented a simulation of yields and losses, showing that adjusting premium rates by less than needed to meet the premium rate target leads to a smaller variation in rates. RMA’s simulation suggested that the agency’s current method of adjusting premium rates will yield an average premium rate that is actuarially sound at the national level. We agree that RMA’s average premium rate, nationwide, may allow the program to be considered actuarially sound. However, our analysis focused on RMA’s practices in charging premium rates in areas with higher production risks that may be lower or higher than the actuarially sound premium rate. In addition, RMA’s simulation did not account for systematic variation in risk at the county and crop level. As we concluded, RMA could more quickly achieve actuarial soundness at the county and crop level. Moreover, as we stated, charging premium rates that are less than the actuarially sound premium rates could also have implications for total costs to the federal government in areas with higher production risks. Thus, we continue to believe that increasing the adjustments of premium rates in areas with higher crop production risk by as much as the full 20 percent annually that is allowed by law is prudent and in keeping with sound fiscal practices. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Agriculture; the Administrator of USDA’s Risk Management Agency; the Director, Office of Management and Budget; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or morriss@gao.gov. 
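The smoothing effect that RMA describes can be seen in a toy simulation (entirely our own illustration, not RMA’s model): partially adjusting the charged rate toward a noisy annual target produces smaller year-to-year rate movement than fully adjusting to each target.

```python
import random

def adjust(base, target, fraction):
    """Move the base rate a given fraction of the way toward the target."""
    return base + fraction * (target - base)

def average_annual_movement(fraction, years=30, seed=42):
    """Toy simulation: the annual target rate fluctuates randomly around a
    true rate of 0.10; track average year-to-year movement in the charged
    base rate. A rough sketch of the smoothing argument, not RMA's model."""
    rng = random.Random(seed)
    base, changes = 0.10, []
    for _ in range(years):
        target = 0.10 + rng.uniform(-0.03, 0.03)  # noisy annual target
        new_base = adjust(base, target, fraction)
        changes.append(abs(new_base - base))
        base = new_base
    return sum(changes) / len(changes)

# Partial adjustment (e.g., 30 percent of the gap each year) yields smaller
# average year-to-year movement than fully chasing each noisy target.
```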
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to determine, for areas with higher crop production risks, (1) the government’s cost of the crop insurance program and (2) the extent to which RMA’s premium rates, as implemented, cover expected losses. To address these objectives, we reviewed relevant provisions of the Food, Conservation, and Energy Act of 2008 (2008 farm bill) and the Agricultural Act of 2014 (2014 farm bill); other statutes; and U.S. Department of Agriculture (USDA) regulations; and we analyzed crop insurance program data from RMA. We interviewed USDA officials, including officials from RMA, and reviewed documents they provided, such as descriptions of the agency’s methodology for calculating premium rates. We selected 2013 because it was the most recent year for which complete RMA data on program costs were available at the time we performed this analysis. We recognize that an area or location may be high risk for one crop or crop type or practice type but not for a different crop. However, by using 2013 premium dollars to weight the average of the different target rates used in a given county, we maintain that such a calculation allows a reasonable average approximation of a location’s production risk. The weighted average was computed across the crop, practice, and crop-type combinations RMA used for premium calculations in 2013. Additionally, the use of a weighted-average county target premium rate allowed us to calculate a single measure for each county by which we could examine government costs in specific geographic areas. We then ranked the counties from the highest to lowest weighted average target rate. 
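The weighted-average county target premium rate described above can be sketched as follows (the county values are hypothetical; the weighting by 2013 premium dollars follows the report’s description):

```python
def weighted_avg_target_rate(combos):
    """Weighted-average county target premium rate, weighting each crop,
    practice, and crop-type combination's target rate by its 2013 premium
    dollars. Input is a list of (premium_dollars, target_rate) pairs."""
    total_premium = sum(premium for premium, _ in combos)
    return sum(premium * rate for premium, rate in combos) / total_premium

# A hypothetical county with two combinations, as (premium dollars, target rate):
county = [(600_000, 0.12), (400_000, 0.18)]
avg = weighted_avg_target_rate(county)  # 0.144
```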
We defined those counties ranked in the top 20 percent as “higher risk counties” and the remaining 80 percent of counties as “lower risk counties.” To calculate total government costs in these higher risk counties, we analyzed RMA data for 2005 through 2013. We used the expected crop value to compare the costs in areas with higher production risks to the costs in other areas. Expected crop value is equal to the expected crop production multiplied by the expected (or elected) crop price. However, we did not have information on expected crop prices, so we calculated expected crop value by dividing the liability dollars by the coverage rate. Finally, to address the first objective we interviewed RMA officials, reviewed USDA’s and other studies that examined the costs of the crop insurance program and the role of premium subsidies, and consulted documents from other stakeholders, including farm industry groups. To address the second objective, we analyzed RMA data on production-based (or yield) premium rates for the five major crops (corn, cotton, grain sorghum, soybeans, and wheat) for crop years 2013 and 2014. Production-based premium rates are RMA’s premium rates for production-based policies and are used to determine the premium rates for revenue policies. RMA provided us with data, for each crop, practice, and crop-type combination, on the county base premium rate and the county target premium rate. We used the same 2013 data on county target premium rates to identify higher risk counties in our first objective. Using the 2013 premium rate data, we ranked the county target premium rates from highest to lowest and identified the highest 20 percent of county target premium rates for each crop, practice, and crop-type combination. For a given crop, a single county may have multiple county target premium rates, depending on the number of combinations of practices (e.g., irrigated and nonirrigated) and crop types (e.g., winter wheat and spring wheat) insured in the county. 
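The expected-crop-value calculation described above is a one-line computation (the dollar figures are illustrative; the relationship that liability equals expected crop value multiplied by the coverage rate is from the report’s methodology):

```python
def expected_crop_value(liability_dollars, coverage_rate):
    """Back out expected crop value from insured liability: liability equals
    expected crop value times the coverage rate, so expected crop value
    equals liability divided by the coverage rate."""
    return liability_dollars / coverage_rate

# e.g., $75,000 of liability at a 75 percent coverage rate
value = expected_crop_value(75_000, 0.75)  # $100,000 of expected crop value
```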
For example, for wheat, a county may have county target premium rates for two practice and crop-type combinations—nonirrigated winter wheat and nonirrigated spring wheat—in which one or both of the premium rates may fall in the highest 20 percent in 2014. In total, there were 40 combinations or rankings of crops, practices, and crop types. Thus, if a county target premium rate fell into the top 20 percent in its ranking, that county target premium rate was placed in the “higher risk premium rate” category. The remaining 80 percent were placed in the “lower risk premium rate” category. We compared each county base premium rate with the applicable county target premium rate identified above. In each instance, we compared the county base premium rate with the county target premium rate for a single crop, practice, and crop-type combination and calculated the percentage difference. We calculated the percentage difference by subtracting the county base premium rate from the county target premium rate and dividing the result by the county base premium rate. If the percentage difference between the county base premium rate and the county target premium rate was zero, we considered the county base premium rate as having met the county target premium rate. If the percentage difference was greater than zero, we placed the county base premium rate in the “lower than” category, and if the percentage difference was less than zero, we placed the county base rate in the “higher than” category. Table 4 provides details on the crop-type and practice combinations included in this review. 
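The comparison categories described above can be expressed compactly (the function name is ours; the sign convention follows the report’s percentage-difference formula):

```python
def categorize(base_rate, target_rate):
    """Classify a county base premium rate against its target, using the
    report's percentage difference: (target - base) / base. A positive
    difference means the base rate is lower than the target, a negative
    difference means it is higher, and zero means the target is met."""
    diff = (target_rate - base_rate) / base_rate
    if diff == 0:
        return "met"
    return "lower than" if diff > 0 else "higher than"
```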
Finally, we interviewed RMA officials in headquarters and two field offices regarding the agency’s method for setting and implementing changes to county premium rates. We judgmentally selected the field offices based on the offices having had experience implementing premium rates in areas with higher production risks. We also reviewed studies that examined the agency’s methodology for assigning premium rates and reviewed relevant audits by USDA’s Office of Inspector General. For the various data used in our analyses, as discussed, we generally reviewed related documentation, interviewed knowledgeable officials, and reviewed related internal controls information to evaluate the reliability of these data. In each case, we concluded that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from December 2013 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. Figure 8 shows the 255 counties that were the riskiest 10 percent of counties in terms of average county target premium rates. These counties are shaded on the basis of their 2013 premium dollars to show which risky counties had the most crop insurance. In addition to the individual named above, Thomas M. Cook, Assistant Director; Kevin S. Bray; Mark Braza; Gary Brown; Michael Kendix; Tahra Nichols; Susan Offutt; Ruth Solomon; and Frank Todisco made key contributions to this report. In addition, Cheryl Arvidson and Kiki Theodoropoulos made important contributions to this report. Climate Change: Better Management of Exposure to Potential Future Losses Is Needed for Federal Flood and Crop Insurance. GAO-15-28. 
Washington, D.C.: October 29, 2014. Crop Insurance: Considerations in Reducing Federal Premium Subsidies. GAO-14-700. Washington, D.C.: August 8, 2014. Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014. Fiscal Exposures: Improving Cost Recognition in the Federal Budget. GAO-14-28. Washington, D.C.: October 29, 2013. 2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. Washington, D.C.: April 9, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013. Crop Insurance: Savings Would Result from Program Changes and Greater Use of Data Mining. GAO-12-256. Washington, D.C.: March 13, 2012. Crop Insurance: Opportunities Exist to Reduce the Costs of Administering the Program. GAO-09-445. Washington, D.C.: April 29, 2009. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-944T. Washington, D.C.: June 7, 2007. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-819T. Washington, D.C.: May 3, 2007. Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-760T. Washington, D.C.: April 19, 2007. Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-285. Washington, D.C.: March 16, 2007. Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006.
The federally subsidized crop insurance program, which helps farmers manage the risk inherent in farming, has become one of the most important programs in the farm safety net. Since 2000, the government's costs for the crop insurance program have increased substantially. The program's cost has come under scrutiny as the nation's budgetary pressures have been increasing. GAO was asked to identify the costs to the federal government for insuring crops in areas with higher production risks. This report examines, for these areas, (1) the government's cost of the crop insurance program and (2) the extent to which RMA's premium rates, as implemented, cover expected losses. GAO analyzed RMA crop insurance program data from 1994 through 2013 (the most recent year with complete program data) and premium rate data for 2013 and 2014; reviewed relevant studies, RMA documents, and documents from stakeholders including farm industry groups; and interviewed RMA officials. The federal government's crop insurance costs are substantially higher in areas with higher crop production risks (e.g., drought risk) than in other areas. In the higher risk areas, government costs per dollar of crop value for 2005 through 2013 were over two and a half times the costs in other areas. The figure below shows the costs during this period. However, the U.S. Department of Agriculture's (USDA) Risk Management Agency (RMA)—the agency that administers the crop insurance program—does not monitor and report on the government's crop insurance costs in the higher risk areas. RMA implemented changes to premium rates in 2014, decreasing some rates and increasing others, but GAO's analysis of RMA data shows that, for some crops, RMA's higher risk premium rates may not cover expected losses. RMA made changes to premium rates from 2013 to 2014, but its plans to phase in changes to premium rates over time could have implications for improving actuarial soundness. 
USDA is required by statute to limit annual increases in premium rates to 20 percent of what the farmer paid for the same coverage in the previous year. However, GAO found that, for higher risk premium rates that required an increase of at least 20 percent to cover expected losses, RMA did not raise these premium rates as high as the law allows to make the rates more actuarially sound. Without sufficient increases to premium rates, where applicable, RMA may not fully cover expected losses and make the rates more actuarially sound. Furthermore, in analyzing data on premium dollars for 2013, GAO found that had RMA's higher risk premium rates been more actuarially sound, the federal government could have potentially collected tens of millions of dollars in additional premiums. GAO recommends that RMA (1) monitor and report on crop insurance costs in areas that have higher crop production risks and (2), as appropriate, increase its adjustments of premium rates in these areas by as much as the full 20 percent annually that is allowed by law. RMA disagreed with GAO's first recommendation and agreed with the second. GAO continues to believe that RMA can and should do more to monitor and report on crop insurance costs in higher risk areas, where government costs were found to be substantially higher.
The South Florida ecosystem extends from the Chain of Lakes south of Orlando to the reefs southwest of the Florida Keys. This vast region, which is home to more than 6 million Americans, a huge tourism industry, and a large agricultural economy, also encompasses one of the world’s unique environmental resources—the Everglades. Before human intervention, freshwater moved south from Lake Okeechobee to Florida Bay in a broad, slow-moving sheet. The quantity and timing of the water’s flow depended on rainfall patterns and on slow releases of stored water. Even during dry seasons, water stored throughout the vast areas of the Everglades supplied water to wetlands and coastal bays and estuaries. For centuries, the Everglades provided habitat for many species of wading birds and other native wildlife, including the American alligator, which depended on the water flow patterns that existed before human intervention. Following major droughts from the early 1930s through the mid-1940s and drenching hurricanes in 1947, the Congress authorized the Central and Southern Florida Project in 1948. The project, an extensive system of over 1,700 miles of canals and levees and 16 major pump stations, prevents flooding and saltwater intrusion into the state’s aquifer while providing drainage and water for the residents of South Florida. However, as shown in figure 1, the engineering changes from the Central and Southern Florida Project, coupled with agricultural and industrial activities and urbanization, have reduced the Everglades to about half its original size and have had a detrimental effect on wildlife habitats and water quality. The loss of habitat has caused sharp declines in native plant and animal populations, placing many native species at risk. 
To address the ecosystem’s deterioration, the South Florida Ecosystem Restoration Task Force was established by a federal interagency agreement to promote and facilitate the development of consistent policies, strategies, and plans for addressing the environmental concerns of the South Florida ecosystem. The Task Force consisted of assistant secretaries from the Departments of Agriculture, the Army, Commerce, and the Interior; an assistant attorney general from the Department of Justice; and an assistant administrator from the Environmental Protection Agency. The Water Resources Development Act of 1996 formalized the Task Force; expanded its membership to include state, local, and tribal representatives; and charged it with coordinating and facilitating the efforts to restore the ecosystem. To accomplish the ecosystem’s restoration, the Task Force established the following three goals: Get the Water Right. Restoring more hydrologic functions to the ecosystem while providing adequate water supplies and flood control will involve enlarging the ecosystem’s freshwater supply and improving how water is delivered to natural areas. The goal is to deliver the right amount of water, of the right quality, to the right places, at the right times. Restore, Preserve, and Protect Natural Habitats and Species. Restoring lost and altered habitats and recovering the endangered or threatened species native to the ecosystem will involve acquiring lands and reconnecting natural habitats that have become disconnected through growth and development. Foster Compatibility of the Built and Natural Systems. Achieving the long-term sustainability of the ecosystem will not be possible if decisions about the built environment are not consistent with the ecosystem’s health. Land use decisions must be compatible with the ecosystem’s restoration while supporting the needs for water supply, flood control, and recreation. 
The goal will also require developing public understanding and support of ecosystem restoration issues. The Task Force has published several documents and developed strategies and plans to address specific restoration issues since its establishment in 1993. However, at the time of our review in 1999, it had not developed an overall strategic plan to guide the restoration effort and accomplish its goals. We recommended that the Task Force develop a strategic plan that would clearly lay out how the restoration would occur and contain quantifiable goals and performance measures that could be used to track the restoration’s progress. On July 31, 2000, the Task Force issued its strategic plan entitled Coordinating Success: Strategy for Restoration of the South Florida Ecosystem. The July 2000 plan submitted by the Task Force fully addresses two of the four recommended elements. The plan identifies the resources needed and identifies the agencies accountable for accomplishing specific actions. In addition, because the Task Force included discussions of several important aspects of outlining how the restoration will occur, we believe the plan partially addresses the third element. (The section after this one discusses why the plan does not fully address the third element.) In identifying the resources needed to achieve the restoration and assigning accountability for accomplishing specific actions, the plan states that it will cost an estimated $14.8 billion to restore the South Florida ecosystem and describes major programs and plans that will contribute to the restoration. The plan also includes an appendix that provides additional information on the cost estimate, the categories of costs, and how the estimate was developed. The appendix contains information on the estimated cost of each goal and shows how the costs will be shared between the federal and state governments. 
Information on the ongoing and future programs and activities that are associated with the goals, the agencies accountable for implementing those programs and activities, the total cost of each program and activity, and the amount appropriated for or allocated to those programs through fiscal year 2000 is also presented in the appendix. The plan also provides information on the over 260 projects that the Task Force believes will contribute to achieving the ecosystem’s restoration. The plan contains a project summary table that clearly identifies which of the goals and subgoals each project is associated with and provides information on each project’s total costs, the lead agencies accountable for implementing each project, the start and end date of each project, and the amount that has been appropriated for each project to date. For example, the plan’s table shows that the Modified Water Deliveries Project is associated with subgoal 1.A.3, Removing Barriers to Sheetflow, which is one of the subgoals of Goal 1—Get the Water Right. The table identifies the National Park Service as the accountable agency for the project, which is intended to reestablish natural hydrologic conditions in Everglades National Park, and shows that the project started in 1990 and is expected to be completed in 2003. The project summary table also shows that the total cost of the project is $135,363,000 and that $62,037,000 has already been appropriated. In addition, the plan includes detailed data sheets that provide a description of each project and detailed budget information. By showing where the projects fit into the overall restoration effort, the plan provides information that, if utilized by the participating agencies, will be very valuable in assisting the agencies involved in the restoration in establishing priorities and justifying and obtaining the authorization and funding necessary to implement the planned projects. 
The Task Force could also use this information to develop interim outcome performance targets—a key element not yet included in this plan—that could provide the Task Force with a greater ability to gauge the progress being made in restoring the ecosystem. The Task Force’s plan also includes discussions of several important aspects of how the restoration will occur—the third element. The Task Force added subgoals and specific objectives for accomplishing two of the restoration’s three strategic goals. For example, the plan divides the restoration’s strategic goal of delivering the right amount of water, of the right quality, to the right places, at the right times (Get the Water Right) into two subgoals. Under the first subgoal—Get the Hydrology Right (water quantity, timing, and distribution)—the plan describes three objectives designed to recapture and store water that is currently discharged to the Atlantic Ocean or Gulf of Mexico and redirect it to match, as closely as possible, natural hydrological patterns. The plan also describes two objectives under the second subgoal—Get the Water Quality Right—that are aimed at reducing the level of phosphorus entering the Everglades and other protected areas and ensuring that impaired water bodies in the ecosystem will meet federal, state, and tribal water quality standards. Similar details have been provided for the restoration’s second goal—Restore, Preserve, and Protect Natural Habitats and Species. The Task Force also included a list of end results (outcomes) that are representative of what it expects to eventually achieve by carrying out the activities described in the plan. In addition, the Task Force included a description of desired future conditions when describing each of its goals to further explain what each goal means. 
The Task Force also included a discussion of the other factors, such as obtaining adequate and reliable funding and the willingness of landowners to sell or lease their lands, that could affect its ability to achieve the restoration’s goals. In describing these aspects in its plan, the Task Force has begun to develop a blueprint, or framework, for restoring the ecosystem that it can use to guide the restoration as well as to communicate the size, scope, and importance of the effort to the Congress, other decisionmakers, and the public. Although the Task Force has included discussions of several important aspects of the third element of outlining how the restoration will occur, additional work is needed before the plan will provide a clear picture of how the restoration will occur. The current plan also does not link the strategic goals of the restoration to outcome-oriented interim goals, an element that is essential to tracking and measuring the Task Force’s progress in restoring the ecosystem. The Task Force’s strategic plan does not contain several key attributes that are necessary to clearly outline how the restoration will occur. The plan does not discuss or describe the approaches and strategies that will be used to achieve one of its long-term strategic goals—the compatibility of the built and natural systems. The Task Force recognizes that unless decisions made about the built environment are consistent with the ecosystem’s health, the long-term sustainability of the ecosystem cannot be achieved, and the billions of dollars spent to restore the ecosystem could be wasted. Given the significance of the link between the built and natural environments, it is important that the Task Force define and integrate this aspect of the restoration into its plan. 
With a clear picture, or blueprint, of how the entire restoration will occur, the Task Force and participating agencies will be better able to establish appropriate priorities and milestones for accomplishing the entire effort and will improve their ability to accomplish the restoration in a timely and efficient manner. Having a clear outline of the restoration could also help the Task Force to ensure that the participating agencies do not duplicate or counter each other’s efforts. In addition, because of the inevitable turnover in the Task Force’s representation that will occur over the time it will take to restore the ecosystem, having a clear outline of how the restoration will occur could make the transition of new and replacement members easier by helping them to more quickly understand what is needed to successfully complete the restoration effort. The plan also does not describe the relationship between the end results, or outcomes, that the Task Force has indicated it expects to achieve and its long-term strategic goals. We recognize that the Task Force will continue to refine its plan because not all of the data needed to restore the South Florida ecosystem are available now and uncertainties exist about how the ecosystem will respond to the projects undertaken by agencies participating in the restoration. However, showing how the strategic goals, objectives, and projects contained in the plan will achieve or contribute to achieving the end results expected by the Task Force will provide the Congress and other participants with a better understanding and appreciation of the Task Force’s direction and what its participants are accomplishing with the funding being provided. Such information could help the participating agencies justify their requests for funding and help address one of the challenges that the Task Force discusses in its plan—obtaining adequate and reliable funding from the federal and state governments. 
In addition, the plan submitted by the Task Force in July 2000 does not consistently include a quantifiable or numerical starting point (baseline) or target when describing the end results and future conditions that the Task Force expects to achieve. For example, the Task Force includes the following—“the spatial extent of wetlands and other natural systems will be sufficient to support the historic functions of the greater Everglades ecosystem”—as a “desired future condition” under goal 2—Restore, Preserve, and Protect Natural Habitat and Species. But the plan does not discuss how many acres of wetlands and other natural systems now exist (baseline) or the number of acres that will be needed to support the historic functions of the greater Everglades ecosystem (target). Without the inclusion of baselines and targets that will allow the Task Force to accurately measure the results or outcomes that it is achieving, the Congress, the Task Force, and other stakeholders will not be able to accurately compare the expected progress in restoring the ecosystem with the actual progress made. Furthermore, the Task Force’s plan is missing the fourth element that we recommended—linking the strategic goals of the restoration to outcome-oriented interim goals. Many of the end results and future conditions expected by the Task Force may take up to 50 years to realize. For example, one of the end results that the Task Force expects to achieve is improving the status of 14 federally listed threatened or endangered species by 2020, with no decline in the status of additional species listed by the state. However, the plan does not discuss any plans for assessing the Task Force’s progress in achieving this result during the 20-year period. Setting interim time frames and performance measures will provide focus and a sense of direction and help the Task Force gauge its progress in achieving the end results or outcomes that it expects. 
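Why baselines and targets matter for measurement can be shown with a one-line calculation: without both numbers, “progress” cannot be expressed as a share of the gap to be closed. The figures below are entirely hypothetical, since the plan supplies neither a baseline nor a target acreage for wetlands.

```python
def percent_of_target(baseline: float, target: float, current: float) -> float:
    """Share of the baseline-to-target gap closed so far, in percent."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (current - baseline) / (target - baseline) * 100

# Hypothetical wetland acreage: 2.0M acres today (baseline), 3.0M acres
# needed (target), and 2.4M acres after some restoration work (current).
print(percent_of_target(2_000_000, 3_000_000, 2_400_000))  # 40.0
```

With interim targets attached to dates, the same calculation lets a reviewer compare the share of the gap actually closed against the share expected by that date.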
In addition, establishing interim benchmarks for performance would enable the Task Force to identify problems early and work with the accountable agencies to make needed adjustments if progress is not satisfactory, thus minimizing the impact on the restoration effort. Conversely, establishing and measuring interim benchmarks could show that the Task Force had underestimated the pace of progress and that the expected end results, or outcomes, could be achieved more quickly. Such information would provide the basis for adjusting the restoration’s time lines, revising the Task Force’s priorities, or increasing the Task Force’s expected outcomes. The Task Force has efforts under way to develop the additional information that we believe needs to be added to the plan. For example, the Task Force has established a subcommittee to complete the development of the restoration’s third strategic goal—Foster Compatibility of the Built and Natural Systems. The subcommittee is already working with advisors who include many state and local technical experts to refine and complete the development of this goal. The Task Force also has other efforts under way, such as developing a land acquisition plan for the restoration effort and working with the Multi-species/Ecosystem Recovery Implementation Team to develop a strategy to implement a plan for protecting and recovering threatened and endangered species located in South Florida. According to Interior’s Director of Everglades Restoration, information developed from these efforts is expected to be included in the July 2002 update of the plan. The initial strategic plan developed by the Task Force is a good start. However, because the plan does not contain all the elements that we recommended, it does not fulfill the requirement placed on the Secretary of the Interior, as the Task Force Chair, by the House and Senate Committees on Appropriations. 
We recognize that the plan is a “work in progress” and that the Task Force will continue to refine and improve its strategic plan as it learns more about the ecosystem and how the ecosystem is responding to the Task Force’s efforts. Revising the plan when it is updated in 2002 to include all the elements would fulfill the Committees’ requirement and provide the Task Force with a basis for better assessing the progress of the restoration and determining what refinements are needed. It will also help smooth the transitions that will occur as the restoration progresses and Task Force members are replaced because new and replacement members could more quickly gain an understanding of what is needed to restore the ecosystem. We provided the Department of the Interior with a draft of our report for review and comment. The Department shares our view that the Task Force has made substantial progress in developing a strategic plan and believes that the plan is a solid foundation that the Task Force can build on. The Department also agreed that the plan submitted in July 2000 does not yet include all the recommended elements and that further refinements and revisions are necessary before the plan will fulfill the requirement placed upon the Secretary of the Interior, as Chair of the Task Force, by the House and Senate Committees on Appropriations. The Department acknowledged that additional work needs to be done to complete the restoration’s third strategic goal—Foster Compatibility of the Built and Natural Systems—and pointed out that a subcommittee established by the Task Force is presently working with advisors who include state and local government technical experts to develop subgoals and measurable objectives for this goal. The Department also agreed that the plan can be improved by refining and expanding the interim time frames and performance measures. 
The Department indicated that the Task Force expects to revise the plan to include additional information and refinements when it is updated in July 2002. The Department’s comments are presented in their entirety in appendix I. The Department also provided technical comments, which we incorporated as appropriate. To determine if the strategic plan developed by the South Florida Ecosystem Restoration Task Force included all the elements that we recommended in our April 1999 report, we obtained and reviewed the strategic plan submitted to the Congress on July 31, 2000. We compared the plan’s elements and attributes with the elements that we recommended to determine the plan’s completeness. In addition, because we used the criteria in Agencies’ Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (GAO/GGD-10.1.16, May 1997) and OMB’s Circular A-11 in our 1999 review to develop (1) our finding that existing Task Force documents did not contain all of the elements of a strategic plan and (2) our recommendation to the Task Force to develop a strategic plan, we also used these documents to assess whether the Task Force’s plan contained all the necessary elements and would be sufficient to guide the restoration effort. We also reviewed other Task Force documents, such as the South Florida Ecosystem Restoration Program’s Fiscal Year 2001 Cross-Cut Budget, which provides detailed budget information for the federal and state agencies involved in the restoration, and the 1999 biennial report entitled Maintaining the Momentum, which summarizes the progress that the Task Force made in the preceding 2 years to restore the South Florida ecosystem. We also met with and discussed the development of the strategic plan with the Executive Director of the South Florida Ecosystem Restoration Task Force and representatives from the Task Force and the Department of the Interior who were involved in developing the strategic plan. 
In addition, we met with scientists and representatives of agencies involved in the restoration who attended the Greater Everglades Ecosystem Restoration Science Conference held in December 2000. The conference’s objectives were to define specific restoration goals, determine the best approaches to meet these goals, and provide benchmarks for measuring the success of restoration efforts. We also met with the Director of the Chesapeake Bay Program Office and discussed the efforts, experiences, and lessons learned by that program in developing and using environmental indicators and outcome measures to determine the success of efforts to restore the Chesapeake Bay. We conducted our review from October 2000 through February 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable Gale A. Norton, Secretary of the Interior; Michael Davis, Director of Everglades Restoration, Department of the Interior; and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report were Chet Janik and Sherry McDonald.
The South Florida Ecosystem Restoration Initiative is a complex, long-term effort to restore the South Florida ecosystem--including the Everglades--that involves federal, state, local, and tribal entities, as well as public and private interests. In response to growing signs of the ecosystem's deterioration, federal agencies established the South Florida Ecosystem Restoration Task Force in 1993 to coordinate ongoing federal activities. The Task Force is charged with coordinating and facilitating the overall restoration effort. The Task Force's strategic plan is a good start. However, because the plan does not contain all the elements that GAO recommended in a previous report, it does not fulfill the requirement placed on the Secretary of the Interior, as the Task Force Chair, by the House and Senate Committees on Appropriations. GAO recognizes that the plan is a "work in progress" and that the Task Force will continue to refine and improve its strategic plan as it learns more about the ecosystem and how the ecosystem is responding to the Task Force's efforts. Revising the plan when it is updated in 2002 to include all the elements would fulfill the Committees' requirement and provide the Task Force with a basis for determining what refinements are needed. It will also help smooth the transitions that will occur as the restoration progresses and the Task Force members are replaced because new and replacement members could more quickly gain an understanding of what is needed to restore the ecosystem.
Studies published by the Institute of Medicine and others have indicated that fragmented, disorganized, and inaccessible clinical information adversely affects the quality of health care and compromises patient safety. In addition, long-standing problems with medical errors and inefficiencies increase costs for health care delivery in the United States. With health care spending in 2004 reaching almost $1.9 trillion, or 16 percent of the gross domestic product, concerns about the costs of health care continue. As we reported last year, many policy makers, industry experts, and medical practitioners contend that the U.S. health care system is in a crisis. Health IT provides a promising solution to help improve patient safety and reduce inefficiencies. The expanded use of health IT has great potential to improve the quality of care, bolster the preparedness of our public health infrastructure, and save money on administrative costs. As we reported in 2003, technologies such as electronic health records and bar coding of certain human drug and biological product labels have been shown to save money and reduce medical errors. For example, a 1,951-bed teaching hospital reported that it realized about $8.6 million in annual savings by replacing outpatient paper medical charts with electronic medical records. This hospital also reported saving more than $2.8 million annually by replacing its manual process for managing medical records with an electronic process to provide access to laboratory results and reports. Health care organizations also reported that IT contributed other benefits, such as shorter hospital stays, faster communication of test results, improved management of chronic diseases, and improved accuracy in capturing charges associated with diagnostic and procedure codes. However, according to HHS, only a small number of U.S. 
health care providers have fully adopted health IT due to significant financial, technical, cultural, and legal barriers such as a lack of access to capital, a lack of data standards, and resistance from health care providers. According to the Institute of Medicine, the federal government has a central role in shaping nearly all aspects of the health care industry as a regulator, purchaser, health care provider, and sponsor of research, education, and training. Seven major federal health care programs, such as Medicare and Medicaid, provide health care services to approximately 115 million Americans. According to HHS, federal agencies fund more than a third of the nation’s total health care costs. Table 1 summarizes these programs, the number of citizens who receive health care services from the federal government, and the cost of these services. Given the level of the federal government’s participation in providing health care, the government has been urged to take a leadership role in driving change to improve the quality and effectiveness of medical care in the United States, including an expanded adoption of IT. In April 2004, President Bush called for the widespread adoption of interoperable electronic health records within 10 years and issued an executive order that established the position of the National Coordinator for Health Information Technology within HHS. The National Coordinator’s responsibilities include the development and implementation of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private sectors. The first National Coordinator was appointed in May 2004, and two months later HHS released The Decade of Health Information Technology: Delivering Consumer-centric and Information-rich Health Care—Framework for Strategic Action, the first step toward the development of a national strategy. 
The framework described goals for achieving nationwide interoperability of health IT and actions to be taken by both the public and private sectors to implement a strategy. Just last week, President Bush issued an executive order calling for federal health care programs and their providers, plans, and insurers to use IT interoperability standards recognized by HHS. In the summer of 2004, we testified on the benefits that effective implementation of IT can bring to the health care industry and the need for HHS to provide continued leadership, clear direction, and mechanisms to monitor progress in order to bring about measurable improvements. Last year, we reported that HHS, through the Office of the National Coordinator for Health IT, had taken a number of actions toward accelerating the use of IT to transform the health care industry. To further accelerate the adoption of interoperable health information systems, we recommended that HHS establish detailed plans and milestones for meeting the goals of its framework for strategic action and take steps to ensure that those plans are followed and milestones are met. The department agreed with our recommendation. We also reported in June 2005 that challenges associated with major public health IT initiatives still need to be overcome to strengthen the IT that supports the public health infrastructure. Federal agencies face many challenges in their efforts to improve the public health infrastructure, including (1) the integration of current initiatives into a national health IT strategy and federal architecture to reduce the risk of duplicative efforts, (2) development and adoption of consistent standards to encourage interoperability, (3) coordination of initiatives with state and local agencies to improve the public health infrastructure, and (4) overcoming federal IT management weaknesses to improve progress on IT initiatives. 
To address these challenges, we recommended that HHS align federal public health initiatives with the national health IT strategy and federal health architecture, coordinate with state and local public health agencies, and continue federal actions to encourage the development and adoption of data standards. Last September, we testified about the importance of defining and implementing data and communication standards to speed the adoption of interoperable IT in the health care industry. Hurricane Katrina highlighted the need for interoperable electronic health records as thousands of people were separated from their health care providers and their paper medical records were lost. As we have noted, standards are critical to enabling this interoperability. Although federal leadership has been established to accelerate the use of IT in health care, we testified that several actions were still needed to position HHS to further define and implement relevant standards. Otherwise, the health care industry will continue to be plagued with incompatible systems that are incapable of exchanging medical information that is critical to delivering care and responding to public health emergencies. In March 2006, we testified before this subcommittee on HHS’s continued efforts to move forward with its mission to guide the nationwide implementation of interoperable health IT in the public and private health care sectors. We identified several steps taken by the department, such as the establishment of the organizational structure and management team for the Office of the National Coordinator for Health IT under the Office of the Secretary and the formation of a public-private advisory body—the American Health Information Community—to advise HHS on achieving interoperability for health information exchange. 
The community, which is co-chaired by the Secretary of HHS and the former National Coordinator for Health IT, identified four breakthrough areas—consumer empowerment, chronic care, biosurveillance, and electronic health records—and formed workgroups intended to make recommendations for actions in these areas that will produce tangible results within a one-year period. Subsequently, in May 2006 the workgroups presented 28 recommendations to the American Health Information Community that address standards, privacy and security, and data-sharing issues. We also reported in March 2006 that HHS—through the Office of the National Coordinator for Health IT—awarded $42 million in contracts that address a range of issues important for developing a robust health IT infrastructure, such as increasing the number of health care providers adopting electronic health records, developing definitions of health information standards, defining architectures for a national network, and developing and implementing privacy and security policies. HHS intends to use the results of the contracts and recommendations from the American Health Information Community proceedings to define the future direction of a national strategy. In March, the National Coordinator told us that he intended to release a strategic plan with detailed plans and milestones later this year. The contracts are described in table 2. 
HHS and its Office of the National Coordinator for Health IT have made progress through the work of the American Health Information Community and several contracts in five major areas: (1) advancing the use of electronic health records, (2) establishing standards to facilitate the exchange of patient data, (3) defining requirements for the development of prototypes of the Nationwide Health Information Network, (4) incorporating privacy and security policy, practices, and standards into the national strategy, and (5) integrating public health into nationwide health information exchange. These activities and others are being used by the Office of the National Coordinator for Health IT to continue its efforts to complete a national strategy to guide the nationwide implementation of interoperable health IT. Since the release of its initial framework in 2004, the office has taken additional steps to define a complete national strategy, building on its earlier work. However, while HHS has made progress in these areas, it still lacks detailed plans, milestones, and performance measures for meeting the President’s goals. HHS has made progress toward advancing the adoption of electronic health records by defining initial certification criteria for ambulatory electronic health records. The Certification Commission for Health IT, which was awarded the Compliance Certification Process for Health IT contract, finalized functionality, security, and reliability certification criteria for ambulatory electronic health records in May 2006 and described interoperability criteria for future certification requirements. The commission subsequently certified 22 vendors’ electronic health records products in July. Its next phase is to define and recommend certification criteria for inpatient electronic health records. The commission plans to publish these criteria for public comment during the last quarter of 2006, with certification beginning in the second quarter of 2007. 
Additionally, the Nationwide Health Information Network contracts have thus far resulted in the identification of draft functional requirements for incorporating lab results and patient information, such as medical history and insurance information, into electronic health records. The requirements were presented to the Secretary of HHS in June 2006, and an initial set of requirements for the Nationwide Health Information Network is expected to be issued in September 2006. In our March 2006 testimony, we described the Gulf Coast Electronic Digital Health Recovery contract, which was awarded by HHS to promote the use of electronic health records to rebuild medical records for patients in the Gulf Coast region affected by hurricanes last year. The outcomes of the contract are expected to coordinate planning for the recovery of digital health information in cases of emergencies or disasters and to develop a prototype of health information sharing and electronic health records support. The contract established a task force of local and national experts to help area providers turn to electronic medical records as they rebuild medical records for their patients. HHS awarded its Standards Harmonization Process for Health IT contract to ANSI. The contract is supported by ANSI’s Health IT Standards Panel, a collaborative partnership between the public and private sectors. This effort integrates standards previously identified by the Consolidated Health Informatics initiative and other federal initiatives. To date, the panel has selected 90 interoperability standards for areas such as electronic health records and public health detection and reporting. The selected standards specifically address components of the breakthrough areas defined by the American Health Information Community and were produced by accepted standards organizations. 
The Nationwide Health Information Network functional requirements also incorporate standards defined through the work of the Standards Harmonization Process for Health IT contract. The selected standards are currently being reviewed for acceptance by the Secretary. HHS has also involved the Department of Commerce’s National Institute of Standards and Technology (NIST) in HHS’s work to implement health IT standards through its standards harmonization contract. HHS’s standards harmonization contractor is required to maximize the use of existing processes and collaborate with NIST where appropriate, including consideration of outputs from the standards harmonization process as Federal Information Processing Standards relevant to federal agencies. NIST’s issuance of Federal Information Processing Standards for health IT is to be aligned with recommendations from public and private sector coordination efforts through the American Health Information Community, as accepted by the Secretary of HHS. The Federal Information Processing Standards are to be consistent with the standards adopted by the harmonization contract to enable the alignment of federal and private sector standards and widespread interoperability among health IT systems, particularly electronic health records systems. HHS’s Nationwide Health Information Network contracts are intended to provide architectures and prototypes of national networks based on the breakthrough areas defined by the American Health Information Community. HHS awarded contracts for developing these architectures and prototypes to four contractors. The contractors are to deliver final operating plans and prototypes of a national network that demonstrates health information exchange across multiple markets in November 2006. In late June 2006, HHS held its first Nationwide Health Information Network forum. More than 1,000 functional requirements for a Nationwide Health Information Network were presented for discussion and public input. 
The requirements addressed general Nationwide Health Information Network infrastructure needs and the breakthrough areas defined by the American Health Information Community. The requirements are being reviewed by the National Committee on Vital and Health Statistics, which is expected to release its approved requirements by September 2006. HHS, through its contracts and recommendations from the American Health Information Community and the National Committee on Vital and Health Statistics, has initiated several actions to address privacy and security issues associated with the nationwide exchange of health information. In May 2006, 22 states subcontracted under HHS’s privacy and security contract to perform assessments of the impact of organization-level business policies and state laws on security and privacy practices and the degree to which they pose challenges to interoperable health information exchange. In August 2006, 11 more states and Puerto Rico were added to the scope of the contract. The outcomes of the contract are to provide a nationwide synthesis of information to inform privacy and security policy making at federal, state, and local levels. In addition, the standards selected through the standards harmonization contract include those that are applicable to the consumer empowerment breakthrough area, specifically privacy and confidentiality. These initial standards are intended to give consumers the ability to establish and manage permissions and access rights, along with informed consent for authorized and secure exchange, viewing, and querying of their medical information between designated caregivers and other health professionals. Additionally, the proposed functional requirements for the Nationwide Health Information Network include security requirements that are needed for ensuring the privacy and confidentiality of health information. 
In May 2006, several of the American Health Information Community workgroups recommended the formation of an additional workgroup composed of privacy, security, clinical, and technology experts from each of the other American Health Information Community workgroups. The Confidentiality, Privacy, and Security Workgroup was formed in July to frame the privacy and security policy issues relevant to all breakthrough areas and solicit broad public input to identify viable options or processes to address these issues. The recommendations developed by this workgroup are intended to establish an initial policy framework and address issues including methods of patient identification, methods of authentication, mechanisms to ensure data integrity, methods for controlling access to personal health information, policies for breaches of personal health information confidentiality, guidelines and processes to determine appropriate secondary uses of data, and a scope of work for a long-term independent advisory body on privacy and security policies. The workgroup convened last month. In June 2006, the National Committee on Vital and Health Statistics presented to the Secretary of HHS a report recommending actions regarding privacy and confidentiality in the Nationwide Health Information Network. The recommendations cover topics that are, according to the committee, central to challenges for protecting health information privacy in a national health information exchange environment. Specifically, they address (1) the role of individuals in making decisions about the use of their personal health information, (2) policies for controlling disclosures across a national health information network, (3) regulatory issues such as jurisdiction and enforcement, (4) use of information by non-health care entities, and (5) establishing and maintaining the public trust that is needed to ensure the success of a national health information network. 
The recommendations are being evaluated by the American Health Information Community workgroups, the Certification Commission for Health IT, the Health Information Technology Standards Panel, and other HHS partners. The committee intends to continue to update and refine its recommendations as the architecture and requirements of the network advance. To help promote the integration of public health data into a nationwide health information exchange, the American Health Information Community’s biosurveillance workgroup made recommendations in May 2006 intended to enable the simultaneous flow of clinical care data to and among local, state, and federal biosurveillance programs. The workgroup recommended that HHS develop sample data-use agreements and implementation guidance to facilitate the sharing of data from health care providers to public health agencies. The workgroup also recommended that HHS, in collaboration with privacy experts, state and local governmental public health agencies, and clinical care partners, develop materials by September 30, 2006, to educate the public about the information that is used for biosurveillance, including the benefits to the public’s health, improved national security, and the protection of patient confidentiality. Information exchange standards for sharing clinical health information (e.g., emergency department visit data and lab results) with public health agencies are included in the 90 standards recently recommended as a result of HHS’s standards harmonization contract. The standards are intended to enable the transmission of essential ambulatory care and emergency department visit, utilization, and lab result data from electronic health care delivery and public health systems in a standardized and anonymized format to authorized public health agencies within less than one day. 
In addition to advancing the use of electronic health records, the Gulf Coast contract is intended to help support public health emergency response by fostering the availability of field-level electronic health records to clinicians responding to disasters. As called for by the President’s executive order in April 2004, the national coordinator’s office is continuing its efforts to complete a national strategy for health IT. Since we testified in March 2006, the office has worked to evolve the initial framework and, with guidance from the American Health Information Community, has revised and refined the goals and strategies identified in the initial framework. The new draft framework—The Office of the National Coordinator: Goals, Objectives, and Strategies—provides high-level strategies for meeting the President’s goal for the adoption of interoperable health IT and is to be used to develop internal performance measures for the office’s activities. The framework identifies objectives for accomplishing each of four goals, along with 32 high-level strategies for meeting the objectives. The Office of the National Coordinator has identified and prioritized the 32 strategies for accomplishing the framework’s goals and has initiated 10 of them, which are supported by the contracts that HHS awarded in fall 2005. Table 3 illustrates the framework’s goals, objectives, and strategies and identifies the 10 strategies that have been initiated. The Office of the National Coordinator has prioritized the remaining 22 strategies defined in its framework. Six strategies are under active consideration, and the remaining 16 require future discussion. According to officials with the office, the strategies were prioritized based on guidance and direction from the American Health Information Community. 
The Office of the National Coordinator expects the framework to continue to evolve through collaboration among the Office of the National Coordinator and its partners, such as other federal agencies and the American Health Information Community, and as additional activities are completed through the contracts. While HHS has taken additional steps toward completing a national strategy and has initiated specific activities defined by its strategic framework, it still lacks the detailed plans, milestones, and performance measures needed to ensure that its goals are met. Although the National Coordinator acknowledged the need for more detailed plans for the office’s various initiatives and told us in March that HHS intended to release a strategic plan with detailed plans and milestones later this year, current officials with the office could not tell us when detailed plans and milestones would be defined. Given the complexity of the tasks at hand and the many activities to be completed, a national strategy that defines detailed plans, milestones, and performance measures is essential. Without it, HHS risks not meeting the President’s goal for health IT. In summary, Mr. Chairman, our work shows that HHS is continuing its efforts to help transform the use of IT in the health care industry. However, much work remains. While HHS, through the Office of the National Coordinator for Health IT and the American Health Information Community, has initiated specific actions for supporting the goals of a national strategy, detailed plans and milestones for completing the various initiatives and performance measures for tracking progress have not been developed. Until these plans, milestones, and performance measures are completed, it remains unclear specifically how the President’s goal will be met and what the interim expectations are for achieving widespread adoption of interoperable electronic health records by 2014. Mr. Chairman, this concludes my statement. 
I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. If you should have any questions about this statement, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Other individuals who made key contributions to this statement are Amanda C. Gill, Nancy E. Glover, M. Saad Khan, and Teresa F. Tucker. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As GAO and others have reported, the use of information technology (IT) has enormous potential to improve the quality of health care and is critical to improving the performance of the U.S. health care system. Given its role in providing health care in the U.S., the federal government has been urged to take a leadership role in driving change to improve the quality and effectiveness of health care, including the adoption of IT. In April 2004, President Bush called for widespread adoption of interoperable electronic health records within 10 years and issued an executive order that established the position of the National Coordinator for Health Information Technology. A National Coordinator within the Department of Health and Human Services (HHS) was appointed in May 2004 and released a framework for strategic action two months later. In May 2005, GAO recommended that HHS establish detailed plans and milestones for each phase of the framework and take steps to ensure that its plans are followed and milestones are met. GAO was asked to identify progress made by HHS toward the development and implementation of a national health IT strategy. To do this, GAO reviewed prior reports and agency documents on the current status of relevant HHS activities. In late 2005, to help define the future direction of a national strategy, HHS awarded several health IT contracts and formed the American Health Information Community, a federal advisory committee made up of health care stakeholders from both the public and private sectors. Through the work of these contracts and the community, HHS and its Office of the National Coordinator for Health IT have made progress in five major areas associated with the President's goal of nationwide implementation of health IT. These activities and others are being used by the Office of the National Coordinator for Health IT to continue its efforts to complete a national strategy to guide the nationwide implementation of interoperable health IT. 
Since the release of its initial framework in 2004, the office has defined objectives and high-level strategies for accomplishing its goals. Although HHS agreed with GAO's prior recommendations and has made progress in these areas, it still lacks detailed plans, milestones, and performance measures for meeting the President's goals.
CDC currently serves as the national focal point for developing and applying disease prevention and control, environmental health, and health promotion and education activities designed to improve the health of Americans. CDC is also responsible for leading national efforts to detect, respond to, and prevent illnesses and injuries that result from natural causes or the release of biological, chemical, or radiological agents. To achieve its mission and goals, CDC relies on an array of partners, including public health associations and state and local public health agencies. CDC collaborates with these partners on initiatives such as monitoring the public’s health, investigating disease outbreaks, and implementing prevention strategies. CDC also uses its staff located in foreign countries to aid in international efforts, such as guarding against global diseases. In April 2005, CDC completed a reorganization known as the Futures Initiative, which was designed to realign its resources to better meet the challenges of 21st century health threats. Before the reorganization, CDC consisted of an Office of the Director, 10 national centers, and the National Institute for Occupational Safety and Health. The reorganization created 4 coordinating centers and 2 coordinating offices that report to the Office of the Director. (See fig. 1.) The 4 coordinating centers facilitate and integrate the work of the 11 discipline-specific national centers and 1 national office. (See app. II for a description of the work of the 4 coordinating centers and 2 coordinating offices.) The national centers are primarily responsible for operating CDC’s public health programs and generally include, among other things, a director’s office, programmatic divisions, and branches. 
As part of the Futures Initiative, CDC also created new agency goals—its Health Protection Goals, which are (1) Healthy People in Every Stage of Life, (2) Healthy People in Healthy Places, (3) People Prepared for Emerging Health Threats, and (4) Healthy People in a Healthy World. CDC uses the Health Protection Goals to develop Goal Action Plans, which aid in the agency’s strategic planning of the direction of its work. Goal Action Plans are associated with specific objectives, strategies, and actions, as well as performance goals, which are measured quarterly by the Organizational Excellence Assessment process. Coordinating centers and coordinating offices are charged with implementing these goals and the related Goal Action Plans in their areas of expertise, while also providing intraagency support and resources for cross-cutting issues and specific health threats. For example, the Coordinating Center for Infectious Diseases leads the Goal Action Plan associated with addressing emerging infections under the third Health Protection Goal—People Prepared for Emerging Health Threats. At the same time, this coordinating center supports the Coordinating Center for Environmental Health and Injury Prevention in its lead role for the Goal Action Plan on adolescent health, which links to the first Health Protection Goal—Healthy People in Every Stage of Life. Although each coordinating center and coordinating office conducts some of its own human capital activities, such as recruiting staff and conducting succession planning, two entities are responsible for CDC’s human capital activities agencywide—HHS’s AHRC and CDC’s Office of Workforce and Career Development (OWCD). AHRC is responsible for CDC’s administrative personnel activities, and OWCD is responsible for human-capital-related planning for the agency. Before 2004, CDC’s Human Resources Management Office was responsible for the administrative and the planning activities at CDC. 
The office, which was one of 40 human resource offices in HHS, reported directly to CDC management. In 2004, CDC’s Human Resources Management Office was consolidated into AHRC. At that time, CDC began reimbursing HHS for the services provided by AHRC. AHRC reports directly to HHS management and manages all of CDC’s administrative services relating to personnel, including processing pay and benefits, posting vacancy announcements, conducting initial screenings of candidates, and hiring new employees. OWCD is part of CDC’s Office of the Director. OWCD assists coordinating centers and coordinating offices with human-capital-related efforts, such as workforce analysis or succession planning. In addition to human-capital-related planning, specific activities of this office include developing the CDC Plan and developing and implementing a human resources leadership and career management program for all occupations within CDC. Additionally, OWCD manages the agency’s fellowship programs and is responsible for CDC University, which provides training and development opportunities to CDC staff. OPM is responsible for providing guidance to agencies on federal human capital policies and procedures and for an initiative associated with strategic human capital management within the President’s Management Agenda. As part of this responsibility, OPM developed HCAAF in conjunction with GAO and the Office of Management and Budget. HCAAF is intended to assist federal agencies with their human capital planning process, including developing strategies that support each agency’s mission and goals. HCAAF outlines an ongoing process of human capital planning for five elements: (1) strategic alignment, (2) leadership and knowledge management, (3) results-oriented performance culture, (4) talent management, and (5) accountability. 
The first element involves planning and goal-setting activities that are essential to promoting strategic alignment, which includes linking strategies with an agency’s mission and goals and integrating these strategies into an agency’s strategic plan, performance plan, and budget. The next three HCAAF elements are used in implementing an agency’s strategies. Specifically, leadership and knowledge management ensures continuity in leadership and maintaining organizational knowledge; results-oriented performance culture promotes a diverse, high-performing workforce; and talent management addresses gaps in needed skills. The fifth element—accountability—focuses on the importance of evaluating the results of strategies to assess their effectiveness and to determine whether adjustments are needed. In our past work on human capital issues, we identified five principles for strategic human capital planning that agencies should incorporate as they develop plans and strategies for how they will meet their current and future human capital needs. Associated with each principle are some key points for agency officials to consider when applying these principles to their planning efforts. (See table 1.) The CDC Plan includes strategies that could help the agency address five of the six key human capital challenges we identified that it faces in its efforts to sustain a skilled workforce. These six key challenges are (1) changing workforce demographics, highlighted by the potential loss of essential personnel due to retirement; (2) the limited supply of skilled public health professionals; (3) CDC’s acknowledged need to increase the diversity of its workforce; (4) changing workforce needs resulting from the agency’s expanding scope of work and responsibilities; (5) logistical difficulties involved in acquiring and retaining a skilled workforce; and (6) difficulties presented by managing a workforce with a large and growing number of contractors. 
While the CDC Plan includes strategies designed to address the first five challenges, it does not include strategies that address the challenge of managing a workforce with a large and growing number of contractors. The first challenge, changing workforce demographics, is highlighted by the potential loss of essential personnel due to retirement. As of the end of fiscal year 2007, about 27 percent of CDC’s overall workforce was eligible for retirement within the next 5 years. For CDC’s three most-populated occupations—general health scientist, public health analyst, and medical officer—the percentages of employees eligible for retirement in the next 5 years were 20 percent, 22 percent, and 34 percent, respectively. Collectively, these three occupations account for 34 percent of CDC’s workforce eligible to retire within the next 5 years. The potential loss of so many essential personnel creates a challenge for CDC because it could result in a shortfall of staff with the experience and skills needed to fulfill CDC’s mission and goals. For example, one of the most-populated occupations is medical officer, which is a difficult position to fill due to a shortage of physicians with specific training in public health. The limited supply of skilled public health professionals is the second challenge we identified. According to reports issued by CDC, the Institute of Medicine, and the American Public Health Association, federal, state, and local agencies are experiencing workforce shortages, some of which are severe, in many of the public health professions vital to CDC. For example, epidemiologists play an important role in responding to emerging infectious diseases. However, states have reported needing more epidemiologists than are currently available in the workforce. 
In addition to shortages of public health physicians and epidemiologists noted in these reports, other shortages occur in the positions of public health informatics specialists, laboratory scientists, and environmental health specialists. The third challenge that CDC faces is its acknowledged need to increase the diversity of its workforce. Results from OPM’s Federal Human Capital Surveys showed a decrease from 2004 to 2006 in the percentage of staff who agreed that CDC management worked well with employees of different backgrounds and was committed to creating a diverse workforce. CDC officials acknowledged that the agency’s workforce was not as diverse as it could be and told us the agency needs to improve its recruitment of Hispanics and persons with disabilities. However, in its plan, CDC noted that establishing a diverse workforce is a challenge for several reasons. For example, technical skills and education levels vary across racial, ethnic, and socioeconomic groups, which in turn can have an impact on the pool of qualified job applicants from which to hire. CDC’s fourth challenge is the changing workforce needs resulting from the agency’s expanding scope of work and responsibilities. For example, the globalization of health threats has increased CDC’s responsibility to prepare for and respond to infectious disease outbreaks. In 2003 the rapid spread of severe acute respiratory syndrome (SARS) in Asia showed that disease outbreaks pose an immediate threat beyond the borders of the country where they originate. For this reason, CDC needs a workforce that is capable of working with global partners, such as other countries’ ministries of health, to expand surveillance systems used to detect and respond quickly to outbreaks. In addition, throughout the SARS outbreak, CDC was the foremost participant in the multinational response effort, with CDC officials constituting about two-thirds of the public health experts deployed to affected areas. 
CDC’s significant role in the SARS response highlights the agency’s expanding need for a workforce that is capable of rapidly responding to international public health emergencies. A fifth challenge that CDC faces is logistical difficulties involved in acquiring and retaining a skilled workforce, including problems with the hiring process and difficulties associated with retaining employees for international positions. For example, from fiscal year 2003 to fiscal year 2007, CDC did not meet its 2007 goal of hiring new employees in an average of 58 days or fewer; instead, during this time, it averaged between 73 and 92 days to hire each new employee. HHS and CDC officials told us that logistical difficulties were exacerbated when hiring responsibilities were centralized from CDC to HHS and the human resources staff was reduced from 178 to 105 people. Logistical difficulties have also hindered CDC’s efforts to sustain international positions. For example, HHS officials told us the process to approve and hire staff for overseas positions can take 9 months to 1 year. Officials added that part of this process—the amount of time it takes to get an individual approved, including obtaining clearance through the Department of State—can be particularly problematic because an individual may lose interest and accept another employment offer. In addition, retaining staff can be difficult because international programs have few opportunities for promotion. The sixth challenge we identified is the difficulties presented by managing a workforce with a large and growing number of contractors. From fiscal year 2000 through fiscal year 2006, the estimated number of contractors working at CDC increased 139 percent, while CDC’s federal staff increased by 3.5 percent. CDC officials told us that using contractors is beneficial, particularly because they can be brought on board quickly to fill an immediate need for specific skills. 
For example, as of August 2007 over 75 percent of the workforce in the National Center for Public Health Informatics consisted of contractors because public health informatics is a relatively new field and the skills needed are constantly changing. (See app. III for more information on CDC’s workforce and the number of contractors within each organizational unit.) While there are benefits to using contractors, there are also concerns. For example, CDC officials told us that because contractors are not CDC employees, the agency does not control certain aspects of their employment, such as diversity or training, and does not technically supervise their work. For instance, if a CDC manager determines that the work provided by a contractor is unsatisfactory, the manager has to communicate his or her concerns to the contractor’s firm instead of directly addressing the contractor. Moreover, CDC does not fund training to assist contractors in improving their work. CDC officials also told us that data collection on how contractors are used within the agency is primarily decentralized and not systematically monitored at an agencywide level. CDC has begun collecting more data on contractors across the agency because of increased security needs. Understanding how contractors are used across the agency is important to ensure their appropriate use and oversight. For example, federal regulations call for enhanced oversight of contracts for services that could potentially influence the authority, accountability, and responsibilities of government officials. Issues may arise when contractors have been involved in activities relating to policy development, reorganization and planning, technical advice or assistance, developing or providing information regarding regulations, or preparing budgets. Because CDC lacks information on how its contractors are used across the agency, it may not be able to ensure adequate oversight of contractors. 
CDC developed strategies to address the human capital challenges described by the agency in the CDC Plan, which correspond to five of the six challenges we identified. (See table 2. For a full list of strategies in the CDC Plan, see app. IV.) CDC officials told us they used the human capital challenges identified in the plan to develop related strategies that directly addressed specific areas of concern. On the basis of our analysis of the CDC Plan and additional documentation, we found that the CDC Plan contains strategies that could help address the first five challenges we identified. According to CDC officials, the CDC Plan does not include strategies to address the challenge of managing a workforce with a large and growing number of contractors—our sixth challenge—because CDC wanted to follow HHS guidance, under which contractors are not considered part of the HHS workforce, and to maintain consistency with the department in the treatment of contractors. However, without considering the challenge of managing a workforce with a large and growing number of contractors and without developing related strategies, the CDC Plan excludes any efforts to address more than one-third of the total workforce. As a result, it may not be as useful as it could be in assisting the agency with improvements in human capital management. For example, CDC cannot fully assess the human capital available across the agency and how it is assisting the agency in meeting its expanding scope of work and responsibilities without understanding how contractors are used across the agency and what gaps in skills and competencies they are filling. Because CDC does not monitor the use of contractors agencywide, the agency’s ability to determine the appropriate balance of government-performed and contractor-performed services is hindered. 
CDC’s lack of information to oversee contractors agencywide is also a problem because, as our reviews of other agencies have shown, adequate oversight of contractors is critical to ensure that they are producing outcomes to achieve the agencies’ respective missions and goals and the agencies are not risking having mission-related decisions influenced by contractor judgment. The CDC Plan partially meets criteria for strategic alignment. CDC relied on HCAAF guidance, which includes strategic alignment as an element, to develop its framework for the CDC Plan. The CDC Plan partially meets the criteria for strategic alignment by explicitly linking the plan’s strategies to the agency’s mission and goals. However, the CDC Plan does not integrate these strategies with the agency’s Goal Action Plans—the documents that serve as CDC’s strategic plan—or with its performance plan or budget. CDC officials told us they intended to update the CDC Plan annually and to integrate the plan with these documents as the plan is updated. CDC officials relied on HCAAF as guidance when developing the framework for the CDC Plan. According to CDC, HCAAF was the best model framework to follow because of its simplicity, transparency, and alignment with the President’s Management Agenda. The CDC Human Capital Management framework, which serves as the foundation for the CDC Plan, uses the HCAAF model. Specifically, CDC’s framework includes the same elements—strategic alignment, leadership and knowledge management, performance management for results, talent management, and accountability. (See fig. 2.) The HCAAF criteria for strategic alignment are consistent with the definition we have used in our past work. In examining the CDC Plan, we determined that the plan partially meets the criteria for strategic alignment. In developing its plan, CDC linked the strategies in the plan to its mission and goals as well as to those of HHS. 
The plan states that its purpose is to ensure that CDC’s human capital efforts are aligned to most effectively support the agency’s accomplishment of its mission and goals. Further, the plan integrates CDC’s Health Protection Goals and the Organizational Excellence Assessment, which CDC uses to measure its progress toward meeting the Health Protection Goals. Specifically, CDC linked the strategies in its plan to the Organizational Excellence Assessment. To ensure linkage with HHS’s mission and goals, the CDC Plan refers to HHS’s strategic plan for fiscal years 2007 through 2012, which delineates how the department will achieve its mission “to enhance the health and well-being of Americans by providing for effective health and human services and by fostering sound, sustained advances in the sciences underlying medicine, public health, and social services” and outlines HHS’s four strategic goals. CDC also linked the CDC Plan to HHS’s strategic plan. For example, the CDC Plan describes how CDC has adopted a program described in HHS’s strategic plan—the Performance Management Appraisal Program—to connect employee expectations to the agency’s mission and to link employee performance ratings with measurable outcomes. The CDC Plan only partially meets the criteria for strategic alignment as defined by GAO and OPM because the strategies in the CDC Plan are not integrated with the documents that serve as the agency’s strategic plan, its performance plan, or its budget. CDC officials told us that while the agency did not have a strategic plan, the agency’s Goal Action Plans served in this capacity. Goal Action Plans are organized according to the four Health Protection Goals and are designed to link, leverage, and coordinate CDC’s activities across the agency to increase effectiveness and impact. (See app. V for a summary of CDC’s Health Protection Goals.) 
While the strategies in the CDC Plan are not currently integrated with the Goal Action Plans, officials told us they intended to integrate the strategies with the Goal Action Plans and have taken initial steps to do this in their January 2008 revision of the CDC Plan. Additionally, the strategies have not been integrated with the agency’s performance plan or budget, which limits the plan’s usefulness in supporting day-to-day activities aimed at long-term human capital goals. However, officials told us they also intended to integrate the strategies in the CDC Plan with the agency performance plan and the budget as the plan is updated. CDC incorporated aspects of our five principles for strategic human capital planning into the CDC Plan and has outlined further actions it intends to take. (See table 1 for the principles.) The agency incorporated part of the first principle by having top management and a stakeholder comment on a draft of the plan, and it intends to involve nonsupervisory employees in future implementation. For the second principle, CDC conducted a preliminary workforce analysis, but it has not completed the analysis of gaps in skills and competencies. However, CDC intends to conduct additional analyses and plans to use them in subsequent plan updates. CDC incorporated an aspect of the third principle by developing strategies to acquire, retain, and develop a skilled workforce, but it is unclear to what degree these strategies will address the agency’s gaps in skills and competencies because they were developed before the gap analyses were completed. CDC has also taken steps to incorporate the fourth principle, which stresses building the capabilities needed to support the strategies. With regard to the fifth principle, while CDC previously collected limited information with which to monitor and assess its human capital efforts, the CDC Plan outlines steps to monitor and evaluate its strategies. 
In developing the CDC Plan, the agency incorporated aspects of our first principle, which is to involve top management, managers, other employees, and stakeholders in developing, communicating, and implementing the human capital plan, but it did not formally involve nonsupervisory employees. CDC involved top management, managers, and AHRC as a stakeholder in the development of the CDC Plan through the agency's leadership groups, specifically the Executive Leadership Board, the Management Council, and the Center Leadership Council. AHRC participated as part of the Management Council. A CDC official involved in creating the plan briefed members of the board and councils on the outline of the plan while it was being developed, and members subsequently reviewed and provided recommendations on drafts of the plan. Additionally, OWCD officials worked with selected members of the board and the Management Council in developing some of the strategies. CDC officials told us they did not formally involve nonsupervisory employees in the development of the plan; instead, managers in the diversity office informally shared the CDC Plan with nonsupervisory employees during its development. In our prior work on the principles, we found that involving such employees on strategic workforce planning teams can identify new ways to streamline processes and improve human capital strategies. Nonsupervisory employee involvement in the development of the human capital plan can also garner support for proposed changes and help an agency develop clear and transparent procedures to implement strategies. CDC officials told us that they intended to communicate the CDC Plan within the agency and to involve nonsupervisory employees in implementing and updating it. The CDC Plan has been approved by the Director of CDC and, after final clearance, will be communicated via CDC's intranet site and an intranet article, or through an e-mail message to all agency employees. 
CDC officials said that, in addition to top management, other agency managers, and stakeholders, they intended to involve nonsupervisory employees in implementing the plan and updating it in the future. For example, OWCD has conducted several focus groups with employees regarding the results of the 2006 Federal Human Capital Survey, and CDC officials indicated that the findings from these focus groups would be considered when strategies are revised in future updates of the CDC Plan. CDC has begun to incorporate the second principle—determining the skills and competencies needed to achieve the mission and goals, including identifying gaps in these skills and competencies—by conducting a preliminary workforce analysis and is working to complete analyses to identify gaps in skills and competencies. In this preliminary analysis, CDC compiled useful information regarding its workforce, including the number of individuals in each occupation, the size and diversity of its workforce, agencywide retirement eligibility, and the number of mission-critical occupations in each coordinating center and coordinating office. However, CDC has not completed competency gap analyses for its employees to determine whether employees have the skills needed to perform effectively and to identify any gaps between their current skill levels and skills needed in the future. In 2006, CDC conducted a competency gap analysis for one of its mission-critical occupations, and it has begun competency gap analyses for its other mission-critical occupations. In addition, CDC plans to conduct additional workforce analyses, which it anticipates completing in fiscal year 2008, as part of the workforce planning process outlined in the CDC Plan. Prior to the CDC Plan, each coordinating center and coordinating office conducted its own workforce planning activities, resulting in wide variability across the agency. 
CDC has implemented standardized procedures for its workforce planning process, by developing a consistent methodology and approach for workforce analyses to be used throughout the agency. As part of the agencywide methodology, the coordinating centers and coordinating offices have been asked to provide information about how federal employees and contractors are used to meet their needs. Additionally, OWCD has developed a standardized template for the coordinating centers and coordinating offices to use to collect data on employees’ skills and competencies. As of January 2008, OWCD was in the process of using the template to collect information, which could then be aggregated to an agencywide level and used in the annual update of the CDC Plan. The CDC Plan includes strategies to improve its current efforts to acquire, retain, and develop its skilled staff, and their implementation could address some past weaknesses in CDC’s efforts. The CDC Plan thus incorporates an aspect of our third principle, which is to develop strategies to acquire, retain, and develop a skilled workforce and to address skill and competency gaps. However, the plan’s strategies may have limitations in how well they address skill and competency gaps, because they were developed before the agency finished its gap analyses. Developing new strategies to acquire and retain staff is important because CDC’s efforts conducted prior to the publication of the CDC Plan had several weaknesses with regard to recruitment and retention. For example, because recruitment efforts were decentralized throughout the agency, CDC and AHRC officials conducted recruitment efforts on an ad hoc basis, and coordinating centers and offices offered recruitment and relocation incentives as part of their recruitment efforts with little coordination. Regarding retention, we found that CDC had programs and incentives designed to promote retention, but lacked information on their effectiveness. 
For example, CDC offers retention incentives to key individuals to induce them to remain with the agency. However, CDC does not collect and analyze data on how successful these programs and incentives have been in retaining skilled employees. As part of CDC’s efforts to improve recruiting and retention efforts, one of the strategies in the CDC Plan calls for developing, implementing, and evaluating a collaborative strategic recruitment effort, for the purpose of establishing initiatives, resources, operational strategies, and practices to ensure agency access to quality candidates and to aid in meeting CDC’s recruitment objectives. In January 2008, CDC established a strategic recruitment team comprised of representatives from various entities, including each coordinating center and coordinating office, AHRC, and OWCD. As part of its work, this team intends to develop a database for targeted recruitment, which is scheduled for completion in February 2009. This strategy could help address weaknesses in CDC’s current ad hoc approach. Another strategy involves expanding the use of “career ladders” within the agency, including identifying target positions to be used on a career ladder and assessing the career ladder program for potential areas of improvement. CDC anticipates completing this strategy by the end of 2008. Our review of CDC’s current efforts to develop skilled staff found that the agency based its current employee development efforts on a training needs assessment and had additional strategies that could improve employee development in the CDC Plan. According to CDC officials, CDC University, the unit responsible for training at CDC, worked with partners throughout the agency to develop and implement agencywide strategies for training and to identify the skills needed by CDC’s workforce in the future. 
CDC University also conducted annual competency-based needs assessments that allow employees and supervisors to review the competencies for each occupation and determine whether sufficient training exists or additional training is needed. Several strategies in the CDC Plan could build on these current efforts. For example, CDC plans to implement a transition from its current training system to HHS’s Learning Management System. According to CDC, this transition will improve the career development of its employees, in part by allowing CDC to target its learning plans to specific groups of employees and to track competency gaps by employee, CDC entity, occupational group, and specific competency. CDC has begun this transition and expects it to be completed by September 2008. Although CDC has developed strategies that may improve some of its current efforts, it is unclear how well these strategies will address current gaps in skills and competencies. In our prior work on the principles, we found that it is important for agencies to consider how their strategies can be aligned to eliminate gaps and improve the contribution of critical skills and competencies. However, developing strategies to eliminate gaps assumes that an agency has identified the gaps in skills and competencies before its strategies are developed, and while CDC has begun gap analyses, these analyses were not completed when the CDC Plan was developed. As a result, the strategies in the plan could not be tailored to address specific gaps in skills and competencies. However, CDC recognized this need, is completing the gap analyses, and has outlined, as part of one strategy, the development of additional steps to close identified gaps in skills and competencies. It is also working to improve its ability to identify training needs to address skill and competency gaps. Consistent with the fourth principle, CDC has taken steps to build the capabilities needed to support its strategies. 
Developing and effectively utilizing agencies’ resources, human capital flexibilities, and personnel are essential to the successful implementation of strategies. CDC is making efforts to establish these capabilities. For example, OWCD has hired a strategic recruiter to oversee the development and implementation of its recruitment function strategy, as described in the CDC Plan. CDC officials are also planning to streamline the agency’s administrative processes, with a focus on hiring. In response to AHRC’s current efforts to achieve its goal of hiring new employees within an average of 58 workdays, CDC and AHRC have developed a system to track the hiring process and have created a committee to evaluate the current hiring process. The system has generated reports that would allow managers to see how long the hiring process takes. In addition, CDC officials are implementing some of the recommendations made by the hiring committee. One problem the committee identified was the use of individualized position descriptions for vacancies. Historically, managers requested individualized descriptions for most positions. New position descriptions needed to be formally reviewed, adding time and complexity to the hiring process. As of February 2008, AHRC and CDC have standardized position descriptions for 20 occupations, which could help the agency reduce the time it spends filling positions. In addition, CDC is working to create transparency and accountability and to improve the utilization of its human capital flexibilities. For example, a responsible individual has been identified for each of the strategies described in the plan. According to CDC officials, the agency intends to incorporate this responsibility into these individuals’ performance reviews. Also, agency supervisors and managers are to receive training on their roles and responsibilities in employee development, which includes using human capital flexibilities. 
Detailed information on these flexibilities is available via the Web to all employees. However, CDC officials told us they were limited in how they implemented some of these flexibilities because policies and practices of this type are developed at the department level by the HHS Office of Human Resources. As these policies are delegated to the agency, CDC management may in turn develop implementing policies and practices for the agency that support the department's policies. Prior to the CDC Plan, CDC had limited information with which to assess its human capital efforts. However, consistent with the fifth principle, the agency has incorporated efforts to monitor and evaluate its human capital strategies into its plan. In our prior work on the principles, we found that high-performing agencies understood the fundamental importance of measuring both the outcomes of their human capital strategies and how these outcomes have helped them accomplish their mission and goals. CDC officials told us that prior to completing the CDC Plan they relied on multiple mechanisms to evaluate the effectiveness of the agency's strategies to acquire, retain, and develop staff. However, these mechanisms were not always effective. For example, while retention was measured in part by evaluating exit survey data, only 20 percent of departing employees completed the exit survey. The CDC Plan includes strategies to address the issue of limited data for monitoring and evaluation. For example, one strategy related to retention evaluates the factors affecting turnover and is designed to develop plans for improvement. This strategy outlines specific milestones and time frames for addressing this issue, such as conducting a literature review of factors affecting employee turnover, which was completed in November 2007. In addition, CDC officials planned to develop strategies in January 2008 to increase the exit survey response rate. 
Improving the response rate could make the data collected more valuable. The CDC Plan also has a milestone to develop recommendations for improving the collection and analysis of employee data associated with turnover by September 2009. Further, the plan includes a strategy to develop an outreach plan with materials and activities targeted to specific groups of potential employees. As part of this strategy, CDC has a milestone to evaluate outcomes of these outreach materials, including attainment of goals and objectives and return on investment for its efforts, by September 2008. CDC officials told us the CDC Plan principally focuses on using data from existing measures to develop strategies for improvement. They noted that while some monitoring and evaluation approaches might be refined, the emphasis in the plan is on how to use the data currently being collected more effectively. In the CDC Plan, CDC identified challenges it faced in meeting its human capital needs and considered those challenges in developing its human capital strategies. However, the strategies in the CDC Plan do not address the sixth challenge we identified—the difficulties presented by managing a workforce with an increasing number of contractors, who make up more than one-third of the agency's workers. Without addressing this challenge as part of its strategies, the CDC Plan may not be as useful as it could be in providing the agency with a strategic view of its governmental and contractor workforce. Thus, the plan will be less helpful in guiding the agency as it works to improve the management of its entire human capital so that it can effectively and efficiently meet its expanding scope of work and responsibilities and thereby achieve its mission and goals. The strategies in the CDC Plan are linked with the agency's mission and goals; however, they are not integrated into a strategic plan, performance plan, or budget. 
CDC officials told us they intended to integrate the strategies in the CDC Plan with the documents that serve as the agency's strategic plan, the performance plan, and the budget as the plan is updated. Completing this effort is important because without it the CDC Plan may not be as effective as it could be in helping the agency meet its human capital needs or in assessing and understanding the extent to which CDC's workforce contributes to achieving its mission and goals. Additionally, the plan may be limited in its usefulness in supporting day-to-day activities aimed at long-term human capital goals. The CDC Plan represents progress in the agency's human capital planning efforts because it includes strategies, due dates, and the individuals responsible for implementing them. However, because the plan is new and has not been fully implemented, it is too soon to determine the degree to which it will improve CDC's human capital management. As the agency moves forward with the CDC Plan, it is important that the planned strategies are fully implemented and the agency continues to incorporate HCAAF and our principles for strategic human capital planning into subsequent plan updates, in order to strengthen its human capital efforts. To improve CDC's ability to use its human capital planning efforts to meet its current and future needs for a skilled workforce, we recommend that the Director of CDC incorporate strategies that address the challenge of managing a workforce with a large and growing number of contractors into future updates of the CDC Plan. HHS provided written comments on a draft of this report, which are included in appendix VI, and a technical comment, which we partially incorporated. In its comments, HHS concurred with our conclusion that the strategic alignment component of the September 2007 edition of the CDC Plan could be improved by better connecting the plan with the agency's Goal Action Plans. 
HHS stated that it addressed the issue of aligning the plan with the Goal Action Plans in its January 2008 revision of the CDC Plan, but the documentation it provided to us did not show how strategies from the CDC Plan would be integrated with the Goal Action Plans. Further, strategic alignment includes integrating the CDC Plan with the agency’s performance plan and budget, a step that CDC has yet to complete. HHS also stated that our recommendation—to incorporate strategies in the CDC Plan that address the challenge of managing a workforce with a large and growing number of contractors—was somewhat unexpected. HHS noted that CDC officials reviewed human capital plans of other agencies and several GAO and OPM human capital reports and did not address the use of contractors in detail when developing the CDC Plan in order to be consistent with these sources. Further, it stated that the agency does not control contractors’ hiring, diversity, compensation, training, and other key human capital factors and noted that our draft report did not recognize the legal, regulatory, and policy prohibitions in treating contractors as if they were federal employees. We believe that HHS misinterpreted our findings and recommendation related to the challenge of managing a workforce with a large and growing number of contractors. At CDC, contractors represent more than one-third of the agency’s workforce and thus are clearly a critical part of the agency’s human capital. Our December 2003 report on key principles for effective strategic human capital planning noted that it involves developing long-term strategies for acquiring, developing, and retaining an organization’s total workforce, which includes full- and part-time federal staff and contractors. In our current report, we clearly state that CDC does not control certain aspects of contractor employment such as diversity or training and technically does not supervise the work of contractors. 
Nevertheless, as we have explained in this report, strategic human capital planning includes identifying the skills and competencies needed and developing strategies to address those needs. CDC could not provide us with specific information on how contractors were being used agencywide to complement federal staff. Without this information, the CDC Plan cannot present the nature of the current balance of government-performed and contractor-performed work at the agency, a complete picture of the skills and competencies needed agencywide, or strategies to address those needs. It is unclear to us how the entire workforce of both federal and contractor staff could be managed strategically without such information. Such information would facilitate making informed decisions, such as whether CDC needs to increase training for federal staff or contract for those skills. Similarly, without information on how contractors are used throughout the agency, it remains unclear to us how top-level management can be assured that contractors are being used appropriately and that sufficient oversight is provided for contractor staff engaged in activities that could potentially influence the authority, accountability, and responsibilities of government officials. Consequently, we concluded that CDC should incorporate strategies related to the use of contractors into the CDC Plan. HHS also commented that our report indicated that CDC does not have a comprehensive repository of human capital information on its contracting staff and thus does not ensure adequate contractor oversight. HHS said that it disagreed with our assessment. However, we did not make such an assessment. We did not suggest or recommend that CDC develop a comprehensive repository of human capital information on contractor staff. In addition, we did not review whether such a repository would be needed for effective contractor oversight, because such work was outside the scope of this engagement. 
Our concern is that CDC does not have a strategic human capital plan that encompasses strategies for the use of its contractors as complements to its federal employees so that the agency can most effectively manage these blended resources to achieve its mission and goals. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time we will send copies to the Secretary of HHS, the Director of CDC, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or BascettaC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. To determine whether the Centers for Disease Control and Prevention (CDC) 2007 Strategic Human Capital Management Plan (CDC Plan) was designed to address the challenges CDC faces in sustaining a skilled workforce, we analyzed interviews we conducted with multiple entities from CDC, the Department of Health and Human Services (HHS), the Office of Personnel Management (OPM), and three policy research and professional associations. Specifically, the interviews included officials from CDC’s four coordinating centers and two coordinating offices, Office of Workforce and Career Development (OWCD), Executive Leadership Board, Management Council, Center Leadership Council, Division Directors Council, and Office of Diversity. We also interviewed officials from the U.S. Public Health Service Commissioned Corps who work at CDC and officials from HHS’s Office of Global Health Affairs and Atlanta Human Resources Center (AHRC). 
Further, we interviewed policy research and professional association officials who work with CDC, including officials from the National Academy of Public Administration, the Association of State and Territorial Health Officials, and the National Association of County and City Health Officials. We corroborated testimonial evidence from our interviews with analysis of relevant documents, workforce statistics, and retirement eligibility data computed by CDC. We assessed the reliability of CDC's data by confirming that the data included the elements we requested and were consistent with CDC-provided documentation and information collected from interviews, including interviews with officials responsible for maintaining these databases. As a result, we determined that the data generated from CDC's system were sufficiently reliable for the purposes of this report. In addition, we reviewed reports on the public health workforce written by the Institute of Medicine, the American Public Health Association, and CDC, as well as our prior work on the use and management of contractors in the federal government. Some interviewees noted the difficulties of managing CDC's responsibilities given its funding; however, we did not assess the adequacy of CDC's budget. Based on our analysis of these interviews, reports, and data, we identified the challenges that CDC faces in sustaining a skilled workforce. In order to determine whether the CDC Plan was designed to address the challenges we identified, we reviewed CDC's plan. We also interviewed OWCD officials about how they used the challenges CDC identified in the plan to develop related strategies. We then compared the CDC challenges to the challenges we identified and determined how the strategies in the CDC Plan corresponded to the challenges we identified. To determine the extent to which the CDC Plan is strategically aligned, we interviewed CDC officials from OWCD and the Office of Strategy and Innovation. 
We also reviewed and analyzed the CDC Plan, OPM’s Human Capital Assessment and Accountability Framework (HCAAF), and our prior work on human capital planning to understand the guidance used to develop the plan. Additionally, to determine how the strategies in the plan were linked to the agency’s mission and goals, we analyzed the CDC Plan, the CDC Health Protection Goals, and HHS’s 2007-2012 Strategic Plan. To determine how the plan was integrated into other agency documents, we reviewed CDC’s budget documents for fiscal years 2006 through 2008, annual performance plans and reports for fiscal years 2005 and 2006, and CDC Goal Action Plans and related documents, which serve as the agency’s strategic plan. We also interviewed officials from HHS’s Office of the Assistant Secretary for Administration and Management about the criteria and guidance that office provides to CDC on human capital planning efforts. To determine the extent to which the CDC Plan incorporated the five principles for effective strategic human capital planning, we reviewed our previous work on the five principles and examined the CDC Plan. Specifically, for the first principle—involving top management, managers, other employees, and stakeholders in developing, communicating, and implementing the plan—we interviewed officials from CDC’s Executive Leadership Board, Management Council, Center Leadership Council, and the Division Directors Council to determine management and employee involvement in the development of the plan. We interviewed officials from AHRC, a stakeholder in CDC’s human capital planning, to determine its involvement in the development of the plan. We also interviewed officials with OWCD, the entity responsible for the plan, to determine how management, stakeholders, and employees would be involved in communicating and implementing the plan. 
For the second principle—determining the skills and competencies needed to achieve the agency's mission and goals—we analyzed documents on CDC's (1) workforce analysis, (2) training needs assessments, and (3) competency gap assessments. We also interviewed officials in OWCD and discussed plans for additional workforce analyses. For the third principle—developing strategies to acquire, retain, and develop a skilled workforce and to address gaps in skills and competencies—we examined the CDC Plan and how the strategies in the plan related to CDC's workforce analysis. We also interviewed CDC officials and reviewed pertinent documents. To determine how the strategies in the plan compared to CDC's efforts prior to the plan, we interviewed officials and analyzed prior human capital documents. We interviewed officials from HHS's AHRC and CDC's four coordinating centers and two coordinating offices, the National Institute for Occupational Safety and Health, CDC University, and OWCD to discuss human capital efforts prior to the CDC Plan and how they related to the efforts in the CDC Plan. We also analyzed documents from these entities, including AHRC's July 2006 Workforce Plan, CDC's Talent Management Plan, CDC's 2006 Strategic Human Capital Plan, and human capital documents from the coordinating centers and coordinating offices. For the fourth principle—building capabilities needed to support the strategies—we examined CDC's new programs and processes that support human capital planning. Additionally, we interviewed officials from both OWCD and AHRC. For the fifth principle—monitoring and evaluating the contribution that strategies have made toward achieving the agency's mission and goals—we reviewed the CDC Plan and related documents and interviewed OWCD officials in order to determine CDC's current monitoring efforts and how CDC planned to monitor and evaluate the agency's progress toward its human capital goals. 
We conducted our work from March 2007 to May 2008 in accordance with generally accepted government auditing standards. The missions of CDC's coordinating centers and coordinating offices include the following:
- To plan, direct, and coordinate national and global public health research, programs, and laboratory sciences that improve health and eliminate illness, disability, and/or death caused by injuries or environmental exposures.
- To assure that CDC provides high-quality information and programs in the most effective ways to help people, families, and communities protect their health and safety.
- To plan, direct, and coordinate a national program for the prevention of prematurity, mortality, morbidity, and disability due to chronic diseases, genomics, disabilities, birth defects, reproductive outcomes, and adverse consequences of hereditary conditions.
- To protect health and enhance the potential for full, satisfying, and productive living across the lifespan of all people in all communities related to infectious diseases.
- To provide leadership, coordination, and support for CDC's global health activities in collaboration with CDC's global health partners. The office's mission is to increase life expectancy and years of quality life, especially among those at highest risk for premature death, particularly vulnerable children and women, and to increase global preparedness to prevent and control naturally occurring and man-made threats to health.
- To protect health and enhance the potential for full, satisfying, and productive living across the lifespan of all people in all communities related to community preparedness and response.
National Center for Preparedness, Detection, and Control of Infectious Diseases; National Institute for Occupational Safety and Health. 
(a) Promote healthy pregnancy and birth outcomes toddlers that have a strong start for healthy and safe lives (Infants and Toddlers, ages 0-3 years) (b) Promote social and physical environments that support the health, safety, and development of infants and toddlers (c) Promote optimal development among infants and toddlers (d) Increase early identification, tracking, and follow up of infants and toddlers with special health care and developmental needs (e) Prevent infectious diseases and their consequences among infants and toddlers (f) Prevent injury and violence and their consequences among infants and toddlers (g) Promote access to and receipt of quality, comprehensive pediatric health services, including dental services, by infants and toddlers 2. Grow Safe and Strong—increase the number of children who grow up healthy, safe, and ready to learn (Children, ages 4-11 years) Achieve Health Independence—increase the number of adolescents who are prepared to be healthy, safe, independent, and productive members of society (Adolescents, ages 12-19 years) (a) Promote social and physical environments that are accessible; that support health, safety, and development; and that promote healthy behaviors among adolescents (b) Promote access to and receipt of recommended quality, effective, evidence-based preventive and health care services, including dental and mental health care, among adolescents (c) Promote social, emotional, and mental well-being for adolescents (d) Prevent injury, violence, and suicide and their consequences among adolescents (e) Prevent Human Immunodeficiency Virus, sexually transmitted diseases, and unintended pregnancies and their consequences among adolescents (f) Promote healthy activity and nutrition behaviors and prevent overweight and its consequences among adolescents (g) Prevent substance use and its consequences, including tobacco, alcohol, and other substance use, among adolescents 4. 
Live a Healthy, Productive, and Satisfying Life—increase the number of adults who are healthy and able to participate fully in life activities and enter their later years with optimum health (Adults, ages 20-49 years)
(a) Promote social and physical environments that are accessible; that support health, safety, and quality of life; and that promote healthy behaviors among adults
(b) Promote access to and receipt of recommended quality, effective, evidence-based preventive and health care services, including dental and mental health care, among adults
(c) Promote social, emotional, and mental well-being for adults
(d) Promote reproductive and sexual health among adults
(e) Prevent chronic diseases and their consequences among adults
(f) Prevent infectious diseases and their consequences among adults
(g) Prevent injury, violence, and suicide and their consequences among adults
(h) Improve behaviors among adults that promote health and well-being
5. Live Better Longer—increase the number of older adults who live longer, high-quality, productive, and independent lives (Older Adults and Seniors, ages 50 and over)
(a) Promote social and physical environments that are accessible; that support health, safety, and quality of life; and that promote healthy behaviors among older adults
(b) Promote access to and receipt of recommended quality, effective, evidence-based preventive and health care services, including dental and mental health care, among older adults
(c) Promote independence, optimal physical, emotional, mental, sexual health, and social functioning among older adults
(d) Prevent chronic diseases and their consequences among older adults
(e) Prevent infectious diseases and their consequences among older adults
(f) Prevent injury, violence, and suicide and their consequences among older adults
(g) Improve behaviors among older adults that promote health and well-being
Healthy People in Healthy Places—The places where people live, work, learn, and play will protect
and promote their health and safety, especially those at greater risk of health disparities
1. Healthy Communities—increase the number of communities that protect and promote health and safety and prevent illness and injury
(a) Promote safe and high-quality air, water, food, and waste disposal, and safety from toxic, infectious, and other hazards, in communities
(b) Support the design and development of built environments that promote physical and mental health by encouraging healthy behaviors, quality of life, and social connectedness
(c) Support a robust, sustainable capacity to provide access to and ensure receipt of essential public health, health promotion, health education, and medical services
(d) Understand and reduce the negative health consequences of climate change
(e) Prevent injuries and violence and their consequences in communities
(f) Improve the social determinants of health among communities with excess burden and risk
2. Healthy Homes—protect and promote health through safe and healthy home environments
(a) Promote homes that are healthy, safe, and accessible
(b) Promote adoption of behaviors that keep people healthy and safe in their homes
(c) Promote the availability of healthy, safe, and accessible homes
3. Healthy Schools—increase the number of schools that protect and promote the health, safety, and development of all students, and protect and promote the health and safety of all staff (e.g., healthy food vending, physical activity programs)
Improve the timeliness and accuracy of communications regarding threats to the public’s health
Decrease the time to identify causes, risk factors, and appropriate interventions for those affected by threats to the public’s health
Decrease the time needed to provide countermeasures and health guidance to those affected by threats to the public’s health
Decrease the time needed to restore health services and environmental safety to pre-event levels.
In addition to the contact named above, Sheila K.
Avruch, Assistant Director; Danielle Bernstein; George Bogart; La Sherri Bush; Gay Hee Lee; and Roseanne Price made key contributions to this report.
The Centers for Disease Control and Prevention (CDC)--an agency in the Department of Health and Human Services (HHS)--has experienced an expanding workload due to emerging health threats, such as bioterrorism. Strategic planning helps agencies like CDC sustain a workforce with the necessary education, skills, and competencies--human capital--to fulfill their missions. In September 2007, CDC released its Strategic Human Capital Management Plan (CDC Plan). GAO was asked to review CDC's human capital planning. GAO determined (1) whether the CDC Plan was designed to address the human capital challenges CDC faces; (2) the extent to which the CDC Plan is strategically aligned with agency goals, plans, and budget; and (3) the extent to which CDC incorporated GAO's principles for strategic human capital planning. To do so, GAO interviewed officials and analyzed data and documents. GAO identified six key challenges CDC faces in its efforts to sustain a skilled workforce to fulfill its mission and goals, and the CDC Plan includes strategies that could help the agency address five of them. These challenges are (1) changing workforce demographics, highlighted by the potential loss of essential personnel due to retirement; (2) the limited supply of skilled public health professionals; (3) CDC's acknowledged need to increase the diversity of its workforce; (4) changing workforce needs resulting from the agency's expanding scope of work and responsibilities; (5) logistical difficulties involved in acquiring and retaining a skilled workforce; and (6) difficulties presented by managing a workforce with a large and growing number of contractors. While the CDC Plan includes strategies designed to address the first five challenges, it does not address the challenge involving contractors, which represent more than one-third of its workforce. 
Thus, the CDC Plan may not be as useful as it could be to provide a strategic view of its contractor workforce and to assist the agency with managing all of its human capital. The CDC Plan only partially meets the criteria for strategic alignment: the strategies in it are linked with the agency's mission and goals, but they are not integrated with the documents that serve as the strategic plan, performance plan, or budget. According to CDC officials, the agency will update the CDC Plan annually and will integrate it with these documents as it is updated. CDC incorporated aspects of all of GAO's principles of strategic human capital planning into the CDC Plan and has outlined intended actions that could further incorporate the principles in subsequent updates. CDC partially incorporated the first principle--to involve managers, other employees, and stakeholders in developing, communicating, and implementing the human capital plan--by formally involving management and stakeholders in plan development. CDC intends to involve other employees in implementation and future updates. CDC partially incorporated the second principle--to determine the skills and competencies needed to achieve agency mission and goals, including identifying skill and competency gaps--by conducting a preliminary workforce analysis. The agency had not completed its analyses of skill and competency gaps for the occupations it deemed most critical when the plan was developed, but has now completed an analysis for one critical occupation and is conducting others. The plan partially follows the third principle--to develop strategies to acquire, retain, and develop a skilled workforce and to address gaps. CDC developed strategies for its plan and intends to target gaps once they are identified. CDC has incorporated the fourth principle--to build capabilities to support the strategies--through such activities as ongoing efforts to streamline hiring. 
The fifth principle is to monitor and evaluate the contribution that strategies have made toward achieving mission and goals. The agency indicated in the CDC Plan that it intends to monitor and evaluate its strategies as part of its implementation activities. Further incorporation of GAO's principles into plan updates could help the agency strengthen its human capital efforts.
DFAS Columbus uses MOCAS to make contract payments for the Army, Navy, Air Force, and other DOD organizations. In fiscal year 2002, DFAS Columbus reported that it made about $87 billion of contract payments. DOD, including DFAS Columbus, uses a line of accounting to accumulate appropriation, budget, and management information for contract payments. Figure 1 shows a line of accounting on the Air Force contract that we reviewed. A line of accounting provides various information, such as (1) department code (for example, those for the military services) and (2) fiscal year and appropriation account financing the contract. For all contracts, the contracting office assigns an ACRN to each line containing unique accounting information in accordance with the requirements contained in the Defense Federal Acquisition Regulation Supplement (DFARS). Obligations are established at the ACRN level to ensure that funds are available to cover disbursements. DFAS Columbus allocates payment amounts to ACRNs to match contractor payments to the corresponding obligations. DOD payment and accounting processes are complex, generally involving separate functions carried out by separate offices in different locations using different procurement, accounting, and payment systems. The processes are not always integrated and require data to be entered and sometimes reentered manually. Figure 2 shows the payment process information flow for the Air Force contract that we reviewed. As illustrated above, the payment process information flow for the Air Force contract began when DOD funding activities requested that the Air Force contracting office procure engineering and technical services as well as spare parts. The Air Force contracting office awarded the contract and modified it to procure additional items. The Air Force contracting office forwarded the contract and modifications to several organizations, such as the communications contractor and DFAS Columbus paying office. 
Upon receipt of the contract and modifications, the communications contractor performed work for the DOD activities and submitted invoices to DFAS Columbus for payment. For goods procured under the contract, the Defense Contract Management Agency, which is located at the contractor’s site, accepted the goods on behalf of the DOD activities and provided receiving report information to DFAS Columbus. The communications contractor then forwarded the goods to the DOD activities. For services provided by the contractor, the contractor submitted vouchers for services directly to DFAS Columbus for payment. The vouchers were subject to later audit by the Defense Contract Audit Agency. Before making payments to the contractor, DFAS Columbus matched the documents—through automated and manual processes—provided by the Air Force contracting office, the communications contractor, and Defense Contract Management Agency to ensure that (1) items ordered were received and (2) funds were obligated and available to make the payments. Finally, DFAS Columbus paid the contractor, recorded the payment data in DFAS Columbus records, and forwarded these data to the DFAS accounting stations responsible for recording the data in the various DOD organizations’ accounting systems. When errors occurred in allocating payments to the correct ACRNs, the DFAS Columbus contract reconciliation branch made adjustments to correct the payment allocations in DFAS Columbus and the applicable DFAS accounting station records. In order to identify some of the problems DFAS Columbus has experienced in properly allocating payments to the ACRNs on contracts, we selected an Army and an Air Force contract for a detailed review. These contracts support two programs—the Army Tactical Missile System and the Army Data Link System. A description of each of these programs is presented below. 
We reviewed an Army contract (contract number DAAH01-98-C-0093) with Lockheed Martin Vought Systems Corporation concerning the Army Tactical Missile System. This missile system is one of a family of complementary weapons initially developed by the Army and Air Force for engaging enemy forces deep behind the front battle lines. The missile system was designed to attack those forces that are in a position to have an immediate or directly supporting impact on the close-in battle, but are beyond the range of cannon and rocket artillery systems. It is intended to delay, disrupt, neutralize, or destroy targets, such as second echelon maneuver units, missile sites, and forward command posts. The Army Tactical Missile System consists of a surface-to-surface ballistic missile that can be launched from and controlled by the Army’s Multiple Launch Rocket System. The missile system was initially fielded with an “antipersonnel/antimaterial warhead” for attacking stationary targets. Since the weapon system was first fielded, the missile system has been modified to increase its range, improve its guidance systems, and reduce collateral damage. This missile system was used in the recent war in Iraq. Figure 3 is a photograph of the missile system. We reviewed an Air Force contract (contract number F09604-00-C-0090) with L-3 Communications to maintain the Army portion of the Army Data Link System. The Army and Air Force developed the Army Data Link System to transfer near-real-time targeting information collected by aircraft, satellites, and ground stations and to provide this information to aircraft and tactical commanders on the ground in-theater. The system consists of three major components—the Army Interoperable Data Link, the Direct Air to Satellite Relay, and the Reach Back Relay. The Army Interoperable Data Link provides two-way secure direct communications between aircraft and aircraft-to-ground stations.
The Direct Air to Satellite Relay communicates data gathered by aircraft through a secure satellite link to an in-theater ground processing facility. The Reach Back Relay communicates data gathered through a secure satellite link to ground processing facilities in the continental United States. The Data Link System was also used in the recent war in Iraq. Figure 4 shows how the communications system transfers data. For fiscal year 2002, our analysis of DFAS Columbus data showed that about $1 of every $4 in contract payment transactions in MOCAS was for adjustments to previously recorded payments—$49 billion of adjustments out of $198 billion in disbursement, collection, and adjustment transactions. This is an improvement over fiscal year 1999 when DFAS Columbus data showed that about $1 of every $3 in contract payment transactions (transactions for disbursements, collections, and adjustments) in MOCAS was for adjustments to previously recorded payments—$51 billion of adjustments out of $157 billion in transactions. While DOD has been working on resolving these problems for years, it has yet to correct them. To research the payment allocation problems and make adjustments to correct the disbursing and accounting records, DFAS Columbus reported that it incurred costs of about $34 million in fiscal year 2002, primarily for hundreds of DOD and contractor staff. This represented about 35 percent of the $97.4 million that DFAS Columbus spent on contract pay service operations. Our review showed that the specific contracting offices that contributed to payment allocation problems resulting in adjustments did not pay for all of the work DFAS Columbus performed to make the adjustments. This occurred because DFAS Columbus currently bills DOD activities (for example, the Army) for contract pay services based solely on the number of lines of accounting on an invoice. 
Consequently, all DOD activities pay the same line rate, regardless of whether substantial work is needed to reconcile problem contracts and adjust the payment records. As a result, the contracting offices that contributed to payment allocation problems had insufficient incentives to reduce payment errors and associated costs. As discussed later in this report, DOD is taking action to change its billing structure for DOD activities. Our analysis of an Army and an Air Force contract showed that the contracts and related payment instructions were complex because of a combination of factors, including the (1) legal and DOD requirements to track and report on the funds used to finance the contract, (2) number of modifications made to the contract over the years that added goods and/or services, or added or changed payment instructions for these goods and/or services, and (3) different pricing provisions to pay for goods and services on the contract. While we identified these three factors as unique areas, the factors are interrelated and contributed to the contracts containing complex payment instructions and the difficulty DFAS Columbus had in properly allocating payment amounts to the correct ACRNs, ultimately contributing to a high rate of adjustments. In order to maintain administrative control over appropriated funds, DOD has established a system of controls to help ensure that funds obligated and then expended for the procurement of goods and services were used as intended and in accordance with applicable laws and regulations. A system of controls should be designed to help ensure that agencies do not obligate or expend more funds than available. However, DOD’s system contributes to the complexity of the contracts. To report on the status of its appropriated funds—including amounts obligated and expended—DOD uses a line of accounting to accumulate appropriation, budget, and management information. 
For all contracts, the contracting office assigns a two-digit ACRN to each line containing unique accounting information in accordance with the requirements contained in DFARS 204.7107 (c). DFAS Columbus allocates payments to the ACRNs to match contract payments to the corresponding obligations. For the two contracts that we reviewed, the Army contract that was valued at $565 million contained 74 separate ACRNs funded by 8 different appropriation accounts and sales to three foreign countries, and the Air Force contract that was valued at $49 million contained 89 ACRNs funded by 23 different appropriation accounts. Each ACRN was established to comply with the DFARS requirement for a separate ACRN for each unique line of accounting. The information on the line of accounting (1) is needed to track the obligations and disbursements back to the DOD activity authorizing the work and (2) provides information on the obligations and disbursement data, such as the organizations providing the funding. While DOD created all of these ACRNs to comply with its requirements, our analysis of the lines of accounting showed that DOD used ACRNs to provide the information needed to comply with legal requirements to account for obligations by appropriation account and by object class. On the Army contract that contained 74 ACRNs, 24 of these ACRNs—about one-third—were used by DOD to provide the information needed to satisfy the legal requirements. Likewise, on the Air Force contract that contained 89 ACRNs, 48 of these ACRNs—or more than half—were used by DOD to provide the information needed to satisfy the legal requirements. DOD accounts for each of these ACRNs separately—in effect treating them as separate bank accounts—even though they all fund the same contract. Each additional ACRN increases the risk of incorrectly allocating payments to the wrong ACRN. 
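Because each ACRN is tracked, in effect, as a separate bank account, every payment allocation must fit within that ACRN's remaining obligated balance. The following sketch illustrates that bookkeeping; the ACRN codes and dollar amounts are hypothetical, and this is a simplified model, not DFAS Columbus's actual MOCAS logic:

```python
# Hypothetical illustration of per-ACRN obligation tracking; the ACRN
# codes and dollar amounts are invented, not taken from the contracts
# reviewed in this report.

class AcrnAccount:
    """One line of accounting, tracked like a separate bank account."""

    def __init__(self, code, obligated):
        self.code = code
        self.obligated = obligated   # funds obligated to this ACRN
        self.disbursed = 0.0         # payments allocated so far

    def available(self):
        """Unliquidated obligation remaining on this ACRN."""
        return self.obligated - self.disbursed

    def allocate(self, amount):
        """Charge a payment amount against this ACRN's obligation."""
        if amount > self.available():
            raise ValueError(
                f"ACRN {self.code}: {amount:.2f} exceeds available "
                f"balance {self.available():.2f}")
        self.disbursed += amount

accounts = {a.code: a for a in
            [AcrnAccount("AA", 500_000.00), AcrnAccount("AB", 250_000.00)]}

accounts["AA"].allocate(120_000.00)
print(accounts["AA"].available())   # 380000.0
```

With dozens of such accounts on a single contract, as on the two contracts reviewed here, every additional ACRN is one more account a voucher examiner can charge in error.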
While accounting requirements and related ACRNs contributed to complex contracts, frequent contract modifications to procure additional goods and services are another factor that contributed to complex contracts. When DOD orders more goods and/or services than provided on the original contract, DOD modifies the contract and pays the contractor for the additional goods and/or services. Many times different appropriation accounts are used to pay for these additional goods and/or services resulting in DOD creating more ACRNs to account for the funds. Our analysis of two DOD contracts showed that they were modified many times over the years to procure additional goods and/or services, as well as to add or change payment instructions. Our review found that modifications that changed payment instructions resulted in DFAS Columbus making adjustments to correct prior payment allocations to ACRNs. In 1997, the Army contracted with Lockheed Martin Vought Systems Corporation to produce an updated version of the Army Tactical Missile System. The basic contract was for the procurement of 100 guided missiles and launching assemblies for the Army missile program. The Army program office initially obligated $14.2 million in 1997 for this effort. As of September 30, 2002, the estimated contract value increased to almost $565 million. Our analysis of this contract showed that it was modified 122 times over a 5-year period to (1) increase the number and type of missile systems ordered for the Army and three foreign countries from 100 to 833, (2) procure over 270,000 engineering service hours to support the production of the missile systems, and (3) make other changes necessary to administer the contract. The Army contracting office also issued six modifications to provide detailed payment instructions to DFAS Columbus. 
According to the Administrative Contracting Officer, the payment instructions were issued to resolve payment allocation errors made by DFAS Columbus and to ensure that the payments were applied to the correct ACRNs on the contract. Like the Army contract, the Air Force contract was also modified a number of times to procure additional goods and services and to administer the contract. In October 1999, the Air Force contracted with L-3 Communications to maintain the Army portion of the Army Data Link System. The basic contract contained a description of the engineering and technical services and spare parts necessary to maintain the communications system worldwide. The contract also stated that funding for the engineering and technical services as well as miscellaneous spare parts would be included on individual funding modifications on this contract. As of September 30, 2002, the estimated contract value was about $49 million. Our analysis of this contract showed that it was modified 82 times over a 3-year period by five different procurement contracting officers to (1) provide funding for and/or increase/decrease the requirements for engineering and technical services and miscellaneous spare parts to maintain the Army assets for the Army Data Link System and (2) make other changes necessary to administer the contract. Furthermore, 73 of the 82 modifications revised the payment instructions. Our analysis of two DOD contracts showed that contract-pricing provisions were a third factor that contributed to the complexity of these contracts. As stated previously, the Army and Air Force contracting offices issued many contract modifications to procure goods and services on behalf of the military services. These modifications included several contract line items that contained numerous goods or services with different pricing provisions. Contract pricing provisions can be placed into two broad categories—fixed price or cost reimbursable. 
For example, the Army contract contained firm-fixed-price provisions for procuring 833 missiles, and cost-plus-fixed-fee and cost-plus-award-fee provisions for procuring 270,000 engineering service hours to support the missile production. Our review found that contracts containing different pricing provisions are more complex, and thus it is more difficult to properly allocate payments to the correct ACRNs because DFAS Columbus voucher examiners must allocate payment amounts manually, resulting in a greater opportunity for error. When DFAS Columbus voucher examiners manually allocate payment amounts to contract ACRNs, the voucher examiners must ensure that the payment amounts associated with fixed price and cost reimbursable provisions are allocated to those ACRNs funding those payment provisions only. However, in some cases it is difficult for the voucher examiner to readily identify these ACRNs without performing a labor-intensive review of the contract. As a result, sometimes the voucher examiner incorrectly applies the payment amounts to ACRNs funding fixed price provisions instead of ACRNs funding cost reimbursable provisions. Our review of the Army contract found that it contained 25 separate contract line items—15 were to be paid for under fixed price provisions and 10 were to be paid for under cost reimbursable provisions. Similarly, the Air Force contract contained 66 separate contract line items—16 were to be paid for under fixed price provisions and 50 were to be paid for under cost reimbursable provisions. As stated previously, the Army and Air Force contracts that we reviewed were complex due to a number of factors, including legal and DOD requirements, contract modifications, and pricing provisions. These factors contributed to the difficulty DFAS Columbus had in properly allocating payment amounts to the correct ACRNs. 
As a result, payment amounts on these contracts were not allocated to the correct ACRNs, and DFAS Columbus made substantial adjustments to correct the payment allocations. Our evaluation of $160 million of adjustments showed that DFAS Columbus made these adjustments to reallocate payments to the correct ACRNs. Table 1 summarizes the reasons for the adjustments and provides the number and dollar amount of adjustment transactions made to reallocate payments to the correct ACRNs. From 1998 through 2001, DFAS Columbus paid 43 invoices totaling $63.5 million for the procurement of several missile systems. DFAS Columbus allocated these payment amounts to two ACRNs according to the payment instructions in effect at the time of the payment. Subsequently, the contractor submitted price reductions to the Army for certain contract items that DFAS Columbus had previously paid. In response to the price reductions, the Army issued a contract modification to account for the reductions. When the Army processed this modification, the Army contract writing system erroneously deobligated the amount for the missiles on the two ACRNs and established two new ACRNs on the contract containing the reduced amount. As a result, in January 2002, DFAS Columbus processed 92 adjustment transactions totaling about $127.2 million to move payment amounts to the new ACRNs. When we discussed this problem with Army officials, they told us that they did not know that the system error resulted in DFAS Columbus having to do additional work to make these adjustments. According to these officials, the system problem that resulted in the creation of the new ACRNs was corrected in 2001. From June 1999 through April 2001, DFAS Columbus paid 38 invoices totaling about $16 million for engineering services on the Army contract. When DFAS Columbus paid the contractor, the contract did not contain specific payment instructions on how to allocate payment amounts to ACRNs as required by DFARS 204.7107 (e)(3)(i).
According to this regulation, when a contract line item is funded by multiple ACRNs, the contracting officer shall provide adequate instructions in the contract to permit the paying office (DFAS Columbus in this case) to accurately charge the ACRNs assigned to that contract line item. Without these payment instructions, DFAS Columbus voucher examiners should follow DFAS Columbus internal procedures. These procedures require voucher examiners to prorate payment amounts across all available ACRNs under cost reimbursable provisions when the contract or contractor’s invoice does not provide specific payment instructions on which ACRNs should be charged. However, instead of charging ACRNs funding cost reimbursable provisions only (engineering services), DFAS Columbus voucher examiners manually allocated the payment amounts to ACRNs that funded both engineering services (cost reimbursable provisions) and missiles (fixed price provisions). As a result, some payment amounts for engineering services were incorrectly allocated to ACRNs funding the procurement of missiles. According to Army contracting officials, in April 2001—almost 2 years after DFAS Columbus paid the first invoice—the Army issued a modification containing detailed payment instructions once it became aware that DFAS was having difficulty in allocating payment amounts to the correct ACRNs. These instructions were different from the payment allocation procedures followed by DFAS Columbus. However, by that time, DFAS Columbus had made 38 payments to the contractor for engineering services and allocated these payments to several ACRNs. To correct payment allocation problems associated with 7 of the previous 38 payments, DFAS Columbus processed 88 adjustment transactions totaling about $4.7 million to reallocate previously recorded payments according to the new instructions. 
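The proration procedure described above can be sketched as follows. Prorating in proportion to each ACRN's unliquidated obligation is an assumption about the basis of the split (the internal procedures say only to prorate across all available cost-reimbursable ACRNs), and the ACRN codes and balances are hypothetical:

```python
# Sketch of prorating one invoice amount across the ACRNs funding
# cost-reimbursable provisions. The proportional basis is an assumption;
# the ACRN codes and obligation balances below are hypothetical.

def prorate(invoice_amount, balances):
    """Split invoice_amount across ACRNs in proportion to their
    remaining obligated balances; the last ACRN absorbs rounding."""
    total = sum(balances.values())
    shares = {}
    remaining = invoice_amount
    codes = list(balances)
    for code in codes[:-1]:
        share = round(invoice_amount * balances[code] / total, 2)
        shares[code] = share
        remaining -= share
    shares[codes[-1]] = round(remaining, 2)
    return shares

# Cost-reimbursable ACRNs only -- ACRNs funding fixed-price provisions
# are excluded from the split.
balances = {"BR": 300_000.00, "BS": 100_000.00, "BT": 100_000.00}
print(prorate(50_000.00, balances))
# {'BR': 30000.0, 'BS': 10000.0, 'BT': 10000.0}
```

Even this simple rule requires the examiner to know, for every ACRN on the contract, whether it funds fixed-price or cost-reimbursable work, which is the determination the report notes can require a labor-intensive review of the contract.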
DFAS Columbus also processed 16 transactions totaling about $2.4 million in adjustments to correct payment errors made by DFAS Columbus voucher examiners when they manually applied payment amounts to ACRNs on the Army contract. In April 2001, the Army issued a contract modification that provided specific payment instructions to ensure that funds were used prior to cancellation. DFAS Columbus officials told us that these payment instructions were complex and changed several times after the modification was first issued. For example, the following instructions were included in contract modifications to provide payment instructions for contract line item number (CLIN) 0030. Contract modification 74 dated April 2001 stated that, “Subclins under CLIN 0030 – prorate across ACRNs BR, BS, and BT.” Seven months later, in November 2001, contract modification 89 added additional payment terms for CLIN 0030 by incorporating instructions for CLIN 0031 and instructions for contract award fees under these two CLINs. The modification stated, “Subclins under CLIN 0030/0031 – prorate across ACRNs BR, BS, BT, BX, BY, and CD, unless voucher identifies award fee then prorate across ACRNs CE, CF, CG, and CH.” Seven months later, in June 2002, contract modification 109 provided more payment instructions for CLIN 0030/0031. The modification noted that, “Subclins under CLIN 0030/0031 – prorate across ACRNs BR, BS, BT, BX, CD, CN, CT, CU, CW and DA, unless voucher identifies award fee then prorate across ACRNs CE, CF, CG, CH and CV, or if voucher identifies technical publications then prorate across ACRNs BY, CR, and DA.” Our analysis of the payment instructions showed that the instructions were complex, changed several times, and were difficult to administer properly. We found that $2.4 million of adjustment transactions were the result of errors made by voucher examiners. 
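The June 2002 instructions amount to a conditional routing rule: what the voucher identifies determines which set of ACRNs receives the proration. A sketch of that rule follows, using the ACRN codes quoted above but with a hypothetical dollar amount and an assumed equal-share split (the quoted instructions do not state the proration basis):

```python
# Conditional routing per the quoted modification 109 instructions for
# CLIN 0030/0031. ACRN sets come from the text above; the equal-share
# split and the dollar amount are assumptions for illustration.

ROUTING = {
    "default":  ["BR", "BS", "BT", "BX", "CD", "CN", "CT", "CU", "CW", "DA"],
    "award fee": ["CE", "CF", "CG", "CH", "CV"],
    "technical publications": ["BY", "CR", "DA"],
}

def route_voucher(amount, voucher_type="default"):
    """Pick the ACRN set for the voucher type and split the amount
    equally across it; the last ACRN absorbs rounding."""
    acrns = ROUTING.get(voucher_type, ROUTING["default"])
    share = round(amount / len(acrns), 2)
    alloc = {code: share for code in acrns[:-1]}
    alloc[acrns[-1]] = round(amount - share * (len(acrns) - 1), 2)
    return alloc

print(route_voucher(1_000.00, "technical publications"))
# {'BY': 333.33, 'CR': 333.33, 'DA': 333.34}
```

Even in this simplified form, an examiner must first identify which modification is in effect and classify the voucher correctly before any amount can be allocated, and a misclassification routes the entire payment to the wrong set of ACRNs.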
These errors occurred because the examiners did not follow the complex, frequently changed, and nonstandard payment instructions correctly. Our work showed that to properly allocate payments on CLIN 0030, a DFAS Columbus voucher examiner must (1) identify the current modification in effect at the time of payment to ensure payments are allocated in accordance with the payment instructions, (2) determine the type of invoice to ensure the allocations are made against the correct ACRNs, for example, technical publications or award fee, (3) identify the current available balance associated with ACRNs funding the services, and (4) calculate a prorated balance to be distributed to each ACRN funding the services. For example, for one invoice totaling $350,635 on the Army contract, DFAS Columbus paid two contract line items and allocated the payment amounts to 23 ACRNs in an attempt to comply with the payment instructions on the contract that were in effect on the payment date. This condition resulted in errors in the contract records when voucher examiners allocated payment amounts to the wrong ACRNs. When we discussed this problem with DFAS Columbus officials, they confirmed our analysis that the payment instructions were complex and difficult to administer properly. The officials stated that when a contract contains payment instructions similar to the instructions presented above, DFAS Columbus voucher examiners must manually allocate the payment amounts to contract ACRNs. The officials also stated that the instructions on this contract were very complicated and could easily be misinterpreted if voucher examiners do not carefully review the payment instructions prior to allocating the payment amount to ACRNs on the contract. In March 2001, DFAS Columbus processed 1,262 transactions totaling over $26 million to adjust previously recorded payment allocations on the Air Force contract.
At the time these adjustments were made, the Air Force had already issued 42 modifications that changed the payment percentages DFAS Columbus was required to follow to make correct payment allocations. Because the payment percentages changed so frequently, DFAS Columbus in many cases did not allocate payment amounts to the correct ACRNs. The Air Force awarded a contract in October 1999 to procure engineering and technical services and spare parts to maintain the Army Data Link System. Over the next 3 years, the contract was modified numerous times to increase the requirements for engineering and technical services and spare parts, along with the necessary incremental funding amounts to support these requirements. As additional funds were added to the contract, (1) new ACRNs were added or obligation balances for existing ACRNs increased and (2) the payment percentages were modified to reflect the new obligation balances of the affected ACRNs. For example, the Air Force modified contract line item number 0006 for engineering services three times over a 2-month period to incrementally fund these services. Each time, the Air Force changed the ACRN payment percentages funding the contract line item. Our analysis of the contract showed that allocating payments on this contract was very difficult, and voucher examiners could easily misinterpret the payment instructions because of the numerous contract modifications that changed ACRN payment percentages. These instructions were complex and difficult to administer because (1) modifications frequently changed the payment instructions and (2) many ACRNs were financing numerous contract line items. The percentages changed so frequently that DFAS Columbus voucher examiners could not keep track of ACRN balances in order to allocate payment percentages properly. 
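The extra step that changing percentages add can be seen in a short sketch: before any allocation, the correct percentage table must be selected from the modification in effect on the payment date. The dates, ACRNs, and percentages below are hypothetical illustrations, not the actual contract's terms.

```python
from bisect import bisect_right
from datetime import date

# Hypothetical modification history: (effective date, ACRN payment percentages).
# Entries must be kept sorted by effective date.
MOD_HISTORY = [
    (date(2001, 4, 1), {"BR": 0.50, "BS": 0.30, "BT": 0.20}),
    (date(2001, 11, 1), {"BR": 0.40, "BS": 0.25, "BT": 0.15, "BX": 0.20}),
]

def percentages_in_effect(payment_date):
    """Return the percentage table from the latest modification whose
    effective date falls on or before the payment date."""
    dates = [d for d, _ in MOD_HISTORY]
    i = bisect_right(dates, payment_date) - 1
    if i < 0:
        raise ValueError("no payment instructions in effect on that date")
    return MOD_HISTORY[i][1]

def allocate(payment, payment_date):
    """Allocate a payment to ACRNs using the percentages in effect."""
    pcts = percentages_in_effect(payment_date)
    return {acrn: round(payment * pct, 2) for acrn, pct in pcts.items()}

alloc = allocate(1000.00, date(2001, 12, 15))  # uses the November 2001 table
```

A payment dated before November 2001 would be split three ways; the same payment a month later must be split four ways. This is exactly the kind of shift that caused recorded allocations to disagree with the modification in force.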
Also, when many ACRNs financed numerous contract line items, DFAS Columbus had difficulty identifying how much of an ACRN’s obligation amount related to each line item. Therefore, many payments were not allocated in accordance with the current modification, and adjustments were needed to correct these allocations. Figure 5 illustrates the current funding structure for 3 of the 66 contract line items on the Air Force contract. As figure 5 shows, the relationship of CLINs to ACRNs is complex because there is not a one-to-one relationship. This makes it difficult for DFAS Columbus to accurately allocate payments to ACRNs. Because the contract funding structure was complex, DFAS Columbus voucher examiners did not properly allocate payments to the correct ACRNs. As a result, DFAS Columbus sent the contractor’s invoices to its contract reconciliation branch for payment. In addition, beginning in the fall of 2001, the contractor began providing a detailed payment distribution schedule with each invoice submitted to DFAS Columbus to assist it in properly allocating payments to the correct ACRNs. We found that one invoice totaling $94,237.18 contained 31 pages of contractor costs and billing charges for 45 contract line items charging 56 different ACRNs. The amounts the contractor charged to individual ACRNs ranged from as little as $0.59 to as much as $88,107.03. DOD officials acknowledged that there have been long-standing contract payment allocation problems that have required DFAS Columbus to undertake time-consuming and costly reconciliations to correct allocation errors. DOD has initiated a major long-term effort to develop and implement an enterprise architecture, which is intended to improve its business operations, including its acquisition and disbursement activities. If implemented successfully, this initiative may help correct many of the contract payment allocation problems. 
In the interim, DOD has several initiatives under way to address the payment allocation problems caused by complex contracts with confusing payment instructions. First, DFAS plans to bill reconciliation costs to contracting offices that contribute to payment allocation problems. Second, DFAS Columbus is briefing the DOD acquisition community on methods for presenting payment instructions in contracts. Finally, a DOD working group is examining payment allocation problems and plans to develop and implement payment allocation options for presenting standard payment instructions on contracts DOD-wide to address these problems. Since 1995, DOD had been attempting to develop a new system—DPPS—to replace MOCAS, which was developed in the 1960s. DPPS was being designed to resolve DOD’s long-standing disbursement problems, streamline contract and vendor payment processes, and reduce manual interventions. However, as we previously reported, DOD terminated DPPS in December 2002, after 7 years in development at a cost of over $126 million, because of poor program performance and increasing life cycle costs. DOD officials informed us that enhancements to MOCAS are now being considered to provide some of the automated capabilities that DPPS had been attempting to achieve. The failure of DPPS to become DOD’s standard procurement payment system is indicative of DOD’s long-standing inability to efficiently and effectively modernize its financial management and business systems. For example, we recently reported that over $300 million has been invested to develop several DFAS financial management systems and that DOD has not demonstrated that this investment will substantially improve the financial management information needed for decision-making and financial reporting purposes. To help avoid this type of result, we recommended in 2001 that DOD develop and implement an enterprise architecture, an essential modernization management tool. 
As part of its current effort to transform its business operations, DOD is developing a business enterprise architecture. A key area of focus is DOD’s acquisition and disbursement activities. As we have previously reported, DOD contract management has been a high-risk area within the department since 1992. To address these problems, DOD’s business enterprise architecture development effort is intended to (1) incorporate federal accounting and financial management requirements, (2) consider leading practices in procurement and contract payments, and (3) reengineer its business processes. If implemented as planned, this initiative has the potential to address many of the contract payment allocation problems discussed in this report. However, this is a long-term effort that will take many years to implement. As pointed out earlier, the contracting offices that contributed to payment allocation problems have insufficient incentives to structure their contracts, including payment instructions, in a manner that would reduce payment errors and related reconciliation costs. This condition exists because DFAS Columbus currently bills DOD activities (for example, the Army) for contract pay services based solely on the number of lines of accounting on an invoice. DFAS Columbus officials recognized this shortcoming in establishing their billing rates and have informed us that they plan to use a separate billing rate, based on DFAS contract reconciliation costs, to bill customers for reconciling, adjusting, and correcting contract payments beginning in fiscal year 2004. According to the officials, the direct billing hour rate for reconciliation services will be $74.55 in fiscal year 2004 and will be billed to the contracting offices responsible for writing the contracts that require reconciliation. 
According to the Deputy Director of DFAS Columbus’s Commercial Pay Services, separately billing contracting offices for the reconciliation work should provide an incentive to those offices to reduce the number of payment allocation problems that result in adjustments. In our view, this billing initiative should encourage contracting offices to structure contracts, including payment instructions, in a manner that should help reduce payment errors and reconciliation costs. DFAS Columbus, in partnership with the Defense Contract Management Agency, is providing formal briefings to the DOD acquisition community on various issues related to contract administration, payment, and closeout. These briefings are designed to give contracting, procurement, and budget personnel throughout DOD better insight into the contract entitlement, payment, and accounting processes provided by DFAS Columbus. They also help to promote teamwork between DFAS Columbus and the acquisition community and provide an opportunity to enhance the communications link that is necessary for these organizations to interoperate efficiently and effectively. These briefings began in November 2001 and have been provided to numerous activities across the military services. As of March 2003, presentations had been provided to 18 major acquisition organizations, such as the Army Aviation and Missile Command, the Navy Space and Naval Warfare Systems Command, and the Air Force Space and Missile Systems Center. Among the topics included in these briefings are methods for correctly presenting payment instructions in contracts. The briefing materials emphasize that payment instructions provide a method for assigning payments to the appropriate ACRNs, based on anticipated work performance. Specifically, the materials discuss payment instruction requirements for fixed price and cost reimbursement contracts and provide recommendations for contracting officers to consider when they develop payment instructions. 
Reference materials that provide additional payment instruction information, available on the Internet, are also cited in these briefings. In September 2002, DOD formed a working group to review contract payment instructions. The working group’s results were incorporated into a broader DOD initiative to identify needed improvements and reductions to procurement policies, procedures, and processes in DFARS. The working group completed the first phase of its work by researching the types of payment instructions that have caused payment allocation problems and developing proposed changes to DFARS. The working group concluded that payment allocation errors at DFAS Columbus (1) were often the result of problems experienced with confusing payment instructions and (2) were further compounded because DOD did not have payment allocation options for including standard contract payment instructions for use on DOD contracts. As illustrated earlier in this report, payment instructions can be unique to each contract. For example, DFAS Columbus made $2.4 million in payment allocation errors on the Army contract because the voucher examiners made errors when they manually applied payment amounts to ACRNs following complex, nonstandard payment instructions that changed several times. This makes it difficult for DFAS Columbus personnel—who must process the payments and make the adjustments manually—to properly allocate the payments to the correct ACRNs. In the second phase of this effort, the working group developed proposals for regulatory changes to address the types of standard payment instructions applicable to specific contracting situations. Depending on their specific requirements, contracting officers would be able to choose a contracting option that would include the standard payment instructions applicable to that particular situation. 
For example, the working group is evaluating several different standard payment instructions that DOD contracting offices may use on contracts, including standard payment instructions (1) for ensuring that funds are used before they are no longer available for expenditure and (2) for ensuring that funds can be allocated to the various contract ACRNs based on the balance available on each contract ACRN. The group’s consensus was that such standardization should enable DFAS Columbus to substantially increase its level of automated payments. During phase three, the final phase that began in May 2003, the working group’s proposals will be presented to the Defense Acquisition Regulations Council and other experts for review. The results from this review process will eventually determine whether the working group’s proposals will be accepted. DOD has not yet established a milestone date for completing the third phase of this effort and has not yet made the final decision to implement the options for presenting standard payment instructions. While the working group did not study the possibility of automating the standard payment instructions, it informed us that automating them would be the next logical step if they were implemented. Our analysis of the working group initiative to develop standard payment instructions showed that it is a good first step because it should reduce some of the payment allocation errors that were the result of DFAS Columbus voucher examiners misinterpreting contract payment instructions. However, even with the standard payment instructions, DFAS Columbus will still manually allocate payments to contract ACRNs on complex contracts such as the two mentioned in this report. As we previously stated, manual payment allocations increase the opportunity for errors. 
In order to eliminate payment allocation errors, DFAS should take the next step and automate the standard payment instructions so that DFAS Columbus can electronically process the payments with minimal manual intervention. Resolving DOD’s long-standing contract payment problems will require major improvements to its processes and systems. One key element of DOD’s efforts to improve its business operations is its effort to develop an enterprise architecture to guide and constrain its ongoing and planned investments in business systems. Another key element to resolving contract payment problems would be to determine whether DOD can reengineer the way its contracts are written, including the length of time covered by a contract as well as the number of modifications made to the contract. If successful, these efforts could result in reengineered business processes and financial management systems that could address many of DOD’s long-standing contract payment allocation problems that have required DFAS Columbus to undertake time-consuming and costly reconciliations to correct allocation errors. However, these reengineered processes and systems are years from becoming reality. In the interim, DOD is addressing some of the fundamental weaknesses that have resulted in billions of dollars of adjustments to correct contract payment allocations annually. However, DOD has not yet completed its work to (1) provide information to the procuring activities on the correct methods for presenting payment instructions on contracts, (2) develop and implement standardized payment instructions to be used DOD-wide, and (3) fully automate the payment process using these instructions. Without standardized and automated payment instructions, DOD will continue to spend millions of dollars each year to process payments manually and then adjust those payments. 
Further, DOD activities must follow existing regulations and procedures covering payment instructions to help ensure that payment data are accurately recorded against the correct obligations. Until DOD successfully modernizes its business operations, these interim steps will help avoid the inaccurate contract payment data that have hindered DOD’s ability to accurately account for and report on contract disbursements. We recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to (1) develop payment allocation options for presenting standard payment instructions in contracts containing multifunded contract line items and (2) issue guidance to the contracting community reiterating the DFARS requirement that all contracts containing multifunded contract line items contain payment instructions and that these instructions be revised when additional ACRNs are added. We also recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to direct the Director of the Defense Finance and Accounting Service to (1) automate the standardized payment instructions in MOCAS once they are adopted and (2) issue guidance reiterating DFAS’s internal requirement that, when neither the contract nor the contractor invoice contains payment instructions, payments for costs and services must be allocated to ACRNs financed on a cost-reimbursable basis in accordance with DFAS’s desk procedures. DOD provided written comments on a draft of this report. In its comments, DOD concurred with two of the four recommendations and partially concurred with the remaining two recommendations. DOD’s comments are reprinted in appendix II. 
For the two recommendations with which it concurred, DOD stated that the department will issue memorandums to (1) contracting personnel reiterating the requirements contained in DFARS and (2) DFAS personnel to adhere to DFAS policies and procedures, especially as they relate to the lack of definitive payment instructions in the contractual documents. Regarding the two partial concurrences, DOD stated that it would establish a standard section of the contract for placement of payment provisions, which would include any payment allocation provisions. In addition, DOD agreed that it would, as part of the working group effort referred to in this report, evaluate the feasibility of developing and automating payment allocation options to include in standard payment instructions. However, until the coordination and review process within DOD is complete, DOD stated that it could not commit to the development and automation of standard payment instructions. We understand that actions to develop and automate standard payment instructions must be coordinated with interested parties throughout DOD. At the same time, a clear commitment to completing this effort in a timely manner is critical if DOD is to resolve its long-standing problem of spending tens of millions of dollars each year to make tens of billions of dollars in adjustments to correct the payment allocation problems. When $1 out of every $4 in contract payment transactions continues to be for adjustments to previously recorded payments, decisive steps towards a lasting solution are essential. One concern that we have is that DOD has not indicated any time frame for completing the coordination and review process referred to in its response. As we recently testified, cultural resistance to change, military service parochialism, and stovepiped operations have played a significant role in previous failed attempts to implement management reforms at DOD. 
Breaking down these barriers will be critical to successfully reforming DOD’s contract payment processes and saving the millions of dollars currently spent annually on inefficient and inaccurate manual processes. In addition, DOD stated that our findings were based on a review of only two contracts that had known problems. DOD recommended that the report specifically state that, due to the nature of this review, the results cannot be extrapolated to other DOD contracts. The draft report already included such a statement. The report states that the two contracts we reviewed are not representative of all DOD contracts but, based on our experience, have characteristics similar to other complex contracts. DFAS Columbus provided these contracts as examples of complex contracts with which they had encountered problems in correctly allocating payments to ACRNs. We selected these two contracts so we could identify the root cause of payments not being properly allocated to ACRNs and to determine what actions DOD is taking to address the problem. We believe that the two contracts provide a good perspective regarding the types of serious problems that have long plagued DOD’s contract payment process. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of the report to interested congressional committees. We will also send copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Secretaries of the Army, Navy, and Air Force; and the Director of the Defense Finance and Accounting Service. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions regarding this report, please contact me at (202) 512-9505 or kutzg@gao.gov or Greg E. Pugnetti, Assistant Director, at (703) 695-6922 or pugnettig@gao.gov. Major contributors to this report are acknowledged in appendix III. To determine the magnitude of adjustments that affected previously recorded payments and Defense Finance and Accounting Service (DFAS) Columbus’s reported cost to make these adjustments in fiscal year 2002, we obtained and analyzed a Mechanization of Contract Administration Services (MOCAS) database of contract payment transactions, including disbursements, collections, and adjustments, made for fiscal year 2002. We then determined the dollar amount and percentage of those adjustments that were made to previously recorded payments. We also interviewed and obtained cost information from DFAS officials to determine the costs for the DFAS Columbus commercial pay services, including the cost incurred by DFAS Columbus to make adjustments to previously recorded payments. To determine why contracts, including payment instructions, were complex, we reviewed applicable laws, Department of Defense (DOD) memorandums, regulations, administrative guidelines, policies, and procedures governing contract payments. These included a review of the key contract payment provisions provided in the Defense Federal Acquisition Regulation Supplement (DFARS), DOD acquisition guidance, and DFAS policies. We also requested that DFAS Columbus provide us with several contracts that created problems for DFAS Columbus technicians when recording payments. DFAS Columbus officials provided us with 10 contracts on which DFAS Columbus had experienced problems properly allocating payments to accounting classification reference numbers (ACRN), such as (1) missing payment instructions, (2) complex payment instructions, or (3) payment instructions changed by the contracting office. 
Based on our review of contract documentation and interviews with DFAS Columbus officials, we selected two contracts for a detailed review to determine why the contracts, including payment instructions, were complex and caused payment allocation problems. The two contracts selected were an Army missile contract (contract number DAAH01-98-C-0093) and an Air Force communications contract (contract number F09604-00-C-0090). In selecting these two contracts, we considered several factors, including (1) goods and/or services purchased, (2) the dollar amount of obligations and disbursements made on the contract, (3) the number of modifications made to the contract, (4) the number of ACRNs financing the contract, (5) payment provisions on the contract, and (6) the number of contract reconciliations performed by DFAS Columbus. Because contract data were constantly changing, we used a cutoff point of September 30, 2002, to gather, review, and analyze data on the two contracts. To determine the key factors that caused DFAS Columbus to make payment adjustments for the two contracts reviewed, we obtained the contracts, purchase requests, contract modifications, vouchers, invoices, and other contract documentation. We reviewed this information and analyzed in detail (1) the payment instructions contained in the contract and contract modifications, (2) the purpose for and the number of ACRNs funding the contract, and (3) the number, dollar amount, and reasons for adjustments on the contracts. To determine why these adjustments were necessary, we analyzed 2 of the 67 reconciliations performed by DFAS Columbus (1 review for each contract), which resulted in $160 million of the $264 million in adjustments. 
Finally, to determine what steps DOD has taken or planned to address the adjustment problem, we (1) reviewed applicable DOD policies to identify changes in payment instruction guidance, (2) interviewed DFAS officials responsible for system development projects that affected MOCAS payments, and (3) interviewed DFAS officials on a working group formed to improve payment instructions. In addition, we discussed with officials from DFAS and the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) why the contract payment instructions were so complex, whether they needed to be so complex, and what the officials were doing to address this problem. We performed our review at the headquarters Offices of the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense (Acquisition, Technology, and Logistics), Washington, D.C., and DFAS, Arlington, Virginia; DFAS, Columbus, Ohio; Army Aviation and Missile Command, Redstone Arsenal, Alabama; and Air Force Materiel Command, Robins Air Force Base, Georgia. Our review was performed from August 2002 through July 2003 in accordance with U.S. generally accepted government auditing standards, except that we did not validate the accuracy of (1) DFAS Columbus disbursement data pertaining to the dollar amount and percentage of those adjustments that were made to previously recorded payments and (2) the cost incurred to reconcile DFAS Columbus contracts. We also did not review the DOD acquisition process, including how contracts are written. We did analyze the payment instructions in the two contracts that we reviewed. We requested comments on a draft of this report from the Secretary of Defense or his designee. DOD provided written comments on July 3, 2003, which are discussed in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix II. Staff members who made key contributions to this report were Francine M. DelVecchio, Francis L. Dymond, Dennis B. 
Fauber, Keith E. McDaniel, and Harold P. Santarelli. Canceled DOD Appropriations: Improvements Made but More Corrective Actions Are Needed. GAO-02-747. Washington, D.C.: July 31, 2002. DOD Contract Management: Overpayments Continue and Management and Accounting Issues Remain. GAO-02-635. Washington, D.C.: May 30, 2002. Canceled DOD Appropriations: $615 Million of Illegal or Otherwise Improper Adjustments. GAO-01-697. Washington, D.C.: July 26, 2001. Financial Management: Differences in Army and Air Force Disbursing and Accounting Records. GAO/AIMD-00-20. Washington, D.C.: March 7, 2000. Financial Management: Seven DOD Initiatives That Affect the Contract Payment Process. GAO/AIMD-98-40. Washington, D.C.: January 30, 1998. Financial Management: Improved Reporting Needed for DOD Problem Disbursements. GAO/AIMD-97-59. Washington, D.C.: May 1, 1997. Contract Management: Fixing DOD’s Payment Problems Is Imperative. GAO/NSIAD-97-37. Washington, D.C.: April 10, 1997. Financial Management: Status of Defense Efforts to Correct Disbursement Problems. GAO/AIMD-95-7. Washington, D.C.: October 5, 1994. 
GAO has reported that the Department of Defense's (DOD) inability to accurately account for and report on disbursements is a long-term, major problem. GAO was requested to determine (1) the magnitude of the adjustments and related costs in fiscal year 2002, (2) why contracts, including payment terms, are so complex, (3) the key factors that caused Defense Finance and Accounting Service (DFAS) Columbus to make payment adjustments, and (4) what steps DOD is taking to address the payment allocation problems. For fiscal year 2002, DFAS Columbus data showed that about $1 of every $4 in contract payment transactions in the MOCAS system was for adjustments to previously recorded payments--$49 billion of adjustments out of $198 billion in transactions. To investigate and correct payment allocation problems, DFAS Columbus reported that it incurred costs of about $34 million in fiscal year 2002. This represents about 35 percent of the total $97 million that DFAS Columbus spent on contract pay services. DFAS Columbus bills DOD activities for contract pay services based on the number of accounting lines on an invoice. Consequently, all DOD activities pay the same line rate, regardless of whether substantial work is needed to reconcile problem contracts and adjust payment records. GAO's analysis of two contracts showed that the contracts were complex because of the (1) legal and DOD requirements to track and report on the funds used to finance the contracts, (2) substantial number of modifications made on the contracts to procure goods and/or services, and (3) different pricing provisions on the contracts. GAO's review of $160 million of adjustments showed that the adjustments were made for four reasons. The Army made an error in accounting for obligations, resulting in about $127 million in payment allocation adjustments. 
DFAS Columbus did not follow its internal procedures for allocating payments to accounts on an Army contract containing multiple pricing provisions, resulting in about $5 million in adjustments. DFAS made over $2 million in adjustments to correct recording errors on an Army contract due to complex and changing payment instructions. The Air Force frequently changed payment instructions after payments were made on an Air Force contract, resulting in about $26 million in adjustments. DOD has initiated a major long-term effort to improve its business operations, including its acquisition and disbursement activities. If implemented successfully, this initiative may help correct many of the contract payment allocation problems. In the interim, DOD has initiatives under way to address payment allocation problems, including (1) billing DOD contracting offices for contract reconciliation services, (2) providing DOD activities information on the correct method for presenting payment instructions, and (3) establishing a working group to develop options for presenting standard contract payment instructions. While the DOD working group initiative may reduce payment allocation errors associated with misinterpreting contract payment instructions, DOD needs to automate the standard payment instructions to eliminate payment allocation errors associated with manually allocated payments.
Medicare FFS consists of Part A, hospital insurance, which covers inpatient stays, care in skilled nursing facilities, hospice care, and some home health care; and Part B, which covers certain physician visits, outpatient hospital treatments, and laboratory services, among other services. Most persons aged 65 and older, certain individuals with disabilities, and most individuals with end-stage renal disease are eligible to receive coverage for Part A services at no premium. Individuals eligible for Part A can also enroll in Part B, although they are charged a Part B premium. MA plans are required to provide benefits that are covered under the Medicare FFS program. Most Medicare beneficiaries who are eligible for Medicare FFS can choose to enroll in the MA program, operated through Medicare Part C, instead of Medicare FFS. All Medicare beneficiaries, regardless of their source of coverage, can choose to receive outpatient prescription drug coverage through Medicare Part D. Beneficiaries in both Medicare FFS and MA face cost-sharing requirements for medical services. In Medicare FFS, cost sharing includes a Part A and a Part B deductible, the amount beneficiaries must pay for services before Medicare FFS begins to pay. Medicare FFS cost sharing also includes coinsurance—a percentage payment for a given service that a beneficiary must pay, and copayments—a standard amount a beneficiary must pay for a medical service. Medicare allows MA plans to have cost-sharing requirements that are different from Medicare FFS’s cost-sharing requirements, although an MA plan cannot require overall projected average cost sharing that exceeds what beneficiaries would be expected to pay under Medicare FFS. MA plans are permitted to establish dollar limits on the amount a beneficiary spends on cost sharing in a year of coverage, although Medicare FFS has no total cost-sharing limit. 
MA plans can use both out-of-pocket maximums, limits that can apply to all services but can exclude certain service categories, and service-specific maximums, which are limits that apply to a single service category. These limits help provide financial protection to beneficiaries who might otherwise have high cost-sharing expenses. MA plans projected that, on average, they would allocate most of the rebates to beneficiaries as reduced cost sharing and reduced premiums for Part B services, Part D services, or both. In 2007, almost all MA plans in our study (1,874 of the 2,055 plans, or 91 percent) received a rebate payment from Medicare that averaged $87 PMPM. MA plans projected they would allocate 69 percent of the rebate ($61 PMPM) to reduced cost sharing and 20 percent ($17 PMPM) to reduced premiums. MA plans projected they would allocate relatively little of the rebates (11 percent, or $10 PMPM) to additional benefits that are not covered under Medicare FFS. (See fig. 1.) On average, for plans that provided detailed cost estimates, the projected dollar amounts of the common additional benefits ranged from a low of $0.11 PMPM for international outpatient emergency services to $4 PMPM for dental services. Additional benefits commonly offered included dental services, health education services, and hearing services. About 41 percent of beneficiaries, or 2.3 million people, were enrolled in an MA plan that also charged additional premiums to pay for additional benefits, reduced cost sharing, or a combination of the two. The average additional premium charged was $58 PMPM. Based on plans’ projections, we estimated that about 77 percent of the additional benefits and reduction in beneficiary cost sharing was funded by rebates, with the remainder being funded by additional beneficiary premiums. For 2007, MA plans projected that MA beneficiary cost sharing, funded by both rebates and additional premiums, would be 42 percent of estimated cost sharing in Medicare FFS. 
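The rebate arithmetic reported above can be checked directly. A minimal sketch (the dollar figures are the reported averages; because they are rounded, the components sum to $88 PMPM rather than $87, and the recomputed shares differ slightly from the published 69/20/11 split):

```python
# Average 2007 MA rebate and its projected allocation (PMPM = per member per month).
# Dollar figures are the averages reported above; percentages are recomputed here,
# so rounding makes them differ slightly from the published 69/20/11 split.
rebate_pmpm = 87.0
allocation_pmpm = {
    "reduced cost sharing": 61.0,
    "reduced premiums": 17.0,
    "additional benefits": 10.0,
}

for use, dollars in allocation_pmpm.items():
    print(f"{use}: ${dollars:.0f} PMPM ({dollars / rebate_pmpm:.0%} of rebate)")
```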
Plans projected that their beneficiaries, on average, would pay $49 PMPM in cost sharing, and they estimated that the Medicare FFS equivalent cost sharing for their beneficiaries was $116 PMPM. Although plans projected that beneficiaries’ overall cost sharing was lower, on average, than Medicare FFS cost-sharing estimates, some MA plans projected that cost sharing for certain categories of services was higher than Medicare FFS cost-sharing estimates. This is because overall cost sharing in MA plans is required to be actuarially equivalent to, or lower than, overall cost sharing in Medicare FFS, but may be higher or lower for specific categories of services. For example, 19 percent of MA beneficiaries were enrolled in plans that projected higher cost sharing for home health services, on average, than in Medicare FFS, which does not require any cost sharing for home health services. Similarly, 16 percent of MA beneficiaries were in plans with higher projected cost sharing for inpatient services relative to Medicare FFS. (See table 1.) Some MA beneficiaries who frequently used these services with higher cost sharing than Medicare FFS could have had overall cost sharing that was higher than what they would pay under Medicare FFS. Cost sharing for particular categories of services varied substantially among MA plans. For example, with regard to inpatient cost sharing, more than half a million beneficiaries were in MA plans that had no cost sharing at all. In contrast, a similar number of beneficiaries were in MA plans that required cost sharing that could result in $2,000 or more for a 10-day hospital stay and $3,000 or more for three average-length hospital stays. 
In Medicare FFS in 2007, beneficiaries paid a $992 deductible for the first hospital stay in a benefit period, no deductible for subsequent hospital stays in the same benefit period, and a 20 percent coinsurance for physician services that averaged $73 per day for the first 4 days of a hospital stay and $58 per day for subsequent days in the stay. Figure 2 provides an illustrative example of an MA plan that could have exposed a beneficiary to higher inpatient costs than under Medicare FFS. While the plan in this illustrative example had lower cost sharing than Medicare FFS for initial hospital stays of 4 days or less as well as initial hospital stays of 30 days or more, for stays of other lengths the MA plan could have cost beneficiaries more than $1,000 above out-of-pocket costs under Medicare FFS. The disparity between out-of-pocket costs under the MA plan and costs under Medicare FFS was largest when comparing additional hospital visits in the same benefit period, since Medicare FFS does not charge a deductible if an admission occurs within 60 days of a previous admission. Some MA plans had out-of-pocket maximums, which help protect beneficiaries against high spending on cost sharing. As of August 2007, about 48 percent of beneficiaries were enrolled in plans that had an out-of-pocket maximum. However, some plans excluded certain services from the out-of-pocket maximum. Services that were typically excluded were Part B drugs obtained from a pharmacy, outpatient substance abuse and mental health services, home health services, and durable medical equipment. For 2007, MA plans projected that of their total revenues ($783 PMPM), they would spend approximately 87 percent ($683 PMPM) on medical expenses. Plans further projected they would spend approximately 9 percent of total revenue ($71 PMPM) on nonmedical expenses, such as administration expenses and marketing expenses, and approximately 4 percent ($30 PMPM) on the plans’ profits, on average. 
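The Medicare FFS inpatient figures cited above imply out-of-pocket arithmetic that can be sketched as follows. This is a simplified illustration using only the numbers in the text (the $992 Part A deductible and average physician coinsurance of $73 per day for days 1 through 4 and $58 per day thereafter); it ignores other Part B charges and the Part A daily coinsurance that Medicare FFS imposes after day 60.

```python
def ffs_inpatient_oop(stay_days: int, first_stay_in_benefit_period: bool = True) -> float:
    """Approximate 2007 Medicare FFS out-of-pocket cost for one hospital stay.

    Simplified sketch using only the figures cited in the text:
    - $992 Part A deductible, charged once per benefit period;
    - average physician coinsurance of $73/day (days 1-4) and $58/day after.
    Ignores Part A daily coinsurance after day 60 and other Part B costs.
    """
    deductible = 992.0 if first_stay_in_benefit_period else 0.0
    physician = 73.0 * min(stay_days, 4) + 58.0 * max(stay_days - 4, 0)
    return deductible + physician

# A 10-day first stay: $992 + 4 * $73 + 6 * $58 = $1,632.
print(ffs_inpatient_oop(10))  # 1632.0
# A later 10-day stay in the same benefit period costs only the coinsurance.
print(ffs_inpatient_oop(10, first_stay_in_benefit_period=False))  # 640.0
```

This is the baseline against which the illustrative MA plan in figure 2 could exceed Medicare FFS by more than $1,000 for stays of certain lengths.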
There was variation among individual plans in the percent of revenues projected to be spent on medical expenses. For example, about 30 percent of beneficiaries—1.7 million—were enrolled in plans that projected spending less than 85 percent of their revenues on medical expenses. While there is no definitive standard for the percentage of revenues that should be spent on medical expenses, Congress adopted an 85 percent minimum threshold for MA plans in the Children’s Health and Medicare Protection Act of 2007. MA plans projected expenses separately for certain categories of nonmedical expenses, including marketing and sales. One type of MA plan—Private Fee-for-Service (PFFS)—allocated a larger percentage of revenue to marketing and sales than other plan types. On average, as a percentage of total revenue, marketing and sales expenses were 3.6 percent for PFFS plans compared to 2.4 percent for all MA plans. Medicare spends more per beneficiary in MA than it does for beneficiaries in Medicare FFS, at an estimated additional cost to Medicare of $54 billion from 2009 through 2012. In 2007, MA plans received Medicare rebates averaging approximately $87 PMPM. MA plans projected they would allocate the vast majority of their rebates—approximately 89 percent—to beneficiaries to reduce premiums and to lower their cost sharing for Medicare-covered services. Plans projected they would use a relatively small portion of their rebates—approximately 11 percent—to provide additional benefits that are not covered under Medicare FFS. Although the rebates generally have helped to make health care more affordable for many beneficiaries enrolled in MA plans, some beneficiaries may face higher expenses than they would in Medicare FFS. 
Further, because premiums paid by beneficiaries in Medicare FFS are tied to both Medicare FFS and MA costs, beneficiaries covered under Medicare FFS are subsidizing the additional benefits and lower costs that MA beneficiaries receive. Whether the value that MA beneficiaries receive in the form of reduced cost sharing, lower premiums, and extra benefits is worth the increased cost borne by beneficiaries in Medicare FFS is a decision for policymakers. However, if the policy objective is to subsidize health-care costs of low-income Medicare beneficiaries, it may be more efficient to directly target subsidies to a defined low-income population than to subsidize premiums and cost sharing for all MA beneficiaries, including those who are well off. As Congress considers the design and cost of the MA program, it will be important for policymakers to balance the needs of beneficiaries—including those in MA plans and those in Medicare FFS—with the necessity of addressing Medicare’s long-term financial health. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact James Cosgrove at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Christine Brudevold, Assistant Director; Jennie Apter, Alexander Dworkowitz, Gregory Giusto, Drew Long, and Christina C. Serna made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Although private health plans were originally envisioned in the 1980s as a potential source of Medicare savings, such plans have generally increased program spending. In 2006, Medicare paid $59 billion to Medicare Advantage (MA) plans--an estimated $7.1 billion more than Medicare would have spent if MA beneficiaries had received care in Medicare fee-for-service (FFS). MA plans receive a per member per month (PMPM) payment to provide services covered under Medicare FFS. Almost all MA plans receive an additional Medicare payment, known as a rebate. Plans use rebates and sometimes additional beneficiary premiums to fund benefits not covered under Medicare fee-for-service; reduce premiums; or reduce beneficiary cost sharing. In 2007, MA plans received about $8.3 billion in rebate payments. This testimony is based on GAO's report, Medicare Advantage: Increased Spending Relative to Medicare Fee-for-Service May Not Always Reduce Beneficiary Out-of-Pocket Costs (GAO-08-359, February 2008). For this testimony, GAO examined MA plans' (1) projected allocation of rebates, (2) projected cost sharing, and (3) projected revenues and expenses. GAO used 2007 data on MA plans' projected revenues and covered benefits, accounting for 71 percent of beneficiaries in MA plans. GAO found that MA plans projected they would use their rebates primarily to reduce cost sharing, with relatively little of their rebates projected to be spent on additional benefits. Nearly all plans--91 percent of the 2,055 plans in the study--received a rebate. Of the average rebate payment of $87 PMPM, plans projected they would allocate about $78 PMPM (89 percent) to reduced cost sharing and reduced premiums and $10 PMPM (11 percent) to additional benefits. The average projected PMPM costs of specific additional benefits across all MA plans ranged from $0.11 PMPM for international outpatient emergency services to $4 PMPM for dental care. 
While MA plans projected that, on average, beneficiaries in their plans would have cost sharing that was 42 percent of Medicare FFS cost-sharing estimates, some beneficiaries could have higher cost sharing for certain service categories. For example, some plans projected that their beneficiaries would have higher cost sharing, on average, for home health services and inpatient stays than in Medicare FFS. If beneficiaries frequently used these services that required higher cost sharing than Medicare FFS, it was possible that their overall cost sharing was higher than what they would have paid under Medicare FFS. Out of total revenues of $783 PMPM, on average, MA plans projected that they would allocate about 87 percent ($683 PMPM) to medical expenses. MA plans projected they would allocate, on average, about 9 percent of total revenue ($71 PMPM) to nonmedical expenses, including administration and marketing expenses; and about 4 percent ($30 PMPM) to the plans' profits. About 30 percent of beneficiaries were enrolled in plans that projected they would allocate less than 85 percent of their revenues to medical expenses. As GAO concluded in its report, whether the value that MA beneficiaries receive in the form of reduced cost sharing, lower premiums, and additional benefits is worth the additional cost to Medicare is a decision for policymakers. However, if the policy objective is to subsidize health care costs of low-income Medicare beneficiaries, it may be more efficient to directly target subsidies to a defined low-income population than to subsidize premiums and cost sharing for all MA beneficiaries, including those who are well off. As Congress considers the design and cost of MA, it will be important for policymakers to balance the needs of beneficiaries and the necessity of addressing Medicare's long-term financial health.
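The revenue split in the summary above can be verified with simple arithmetic. A minimal sketch (dollar figures as reported; because they are rounded, the components sum to $784 PMPM, within $1 of the $783 total):

```python
# Average 2007 MA plan revenue allocation (PMPM = per member per month, as reported above).
total_revenue_pmpm = 783.0
spending_pmpm = {"medical expenses": 683.0, "nonmedical expenses": 71.0, "profit": 30.0}

for category, pmpm in spending_pmpm.items():
    print(f"{category}: ${pmpm:.0f} PMPM ({pmpm / total_revenue_pmpm:.1%} of revenue)")

# The 85 percent medical-expense threshold discussed above, applied to the average plan:
meets_85_percent = spending_pmpm["medical expenses"] / total_revenue_pmpm >= 0.85
print(meets_85_percent)  # True: the average plan projected ~87% on medical expenses
```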
The Defense Logistics Agency (DLA), service headquarters, and inventory control points are responsible for managing secondary inventory. Through their respective item managers, DLA and service inventory control points ensure that needed items are available to the operating forces when and where needed. An item manager’s tasks include determining when to repair or purchase items, positioning them at depots to meet demands, and disposing of unneeded items. The items managed by DLA and service item managers are stored at depots operated and managed by DLA. Depot managers have no authority over what items are stored or whether they should be disposed of. These decisions are made by the item managers. The current DLA distribution depot system consists of two distribution region headquarters. They are located at New Cumberland, Pennsylvania, and Stockton, California. Each of the 27 distribution depots reports to one of these regions. For fiscal year 1994, total DOD distribution costs amounted to about $1.5 billion. Figure 1 shows the locations of these depots. When inventory is managed efficiently, enough is stored to meet wartime and peacetime requirements and unnecessary storage costs are avoided. When the total on-hand and due-in inventory falls to or below a certain level—called the reorder point—inventory control points place an order for additional inventory. The reorder point includes items needed to satisfy war reserve requirements and items to be issued during the lead time (the time between when an order is placed and when it is received). In addition, a safety level of inventory is kept on hand in case of minor interruptions in the resupply process or unpredictable fluctuations in demand. By placing orders when the reorder point is reached, item managers ensure that inventory arrives before stock runs out. Generally, the amount of inventory ordered is based on a formula that DOD calls an economic order quantity (also known as a replenishment formula). 
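The replenishment logic described above can be sketched as follows. The reorder-point expression follows the text (war reserve requirements, plus expected issues during the procurement lead time, plus a safety level); for the order quantity, the sketch uses the standard textbook economic-order-quantity formula, since the report does not spell out DOD's exact version, and all the numeric inputs are hypothetical.

```python
import math

def reorder_point(war_reserve: float, daily_demand: float,
                  lead_time_days: float, safety_level: float) -> float:
    """Inventory level at or below which an order is placed.

    Per the text: war reserve requirements, plus expected issues during
    the procurement lead time, plus a safety level for demand fluctuations.
    """
    return war_reserve + daily_demand * lead_time_days + safety_level

def economic_order_quantity(annual_demand: float, order_cost: float,
                            unit_holding_cost: float) -> float:
    """Classic EOQ: the order size that balances ordering and holding costs.

    The report only names an 'economic order quantity' formula; this is the
    standard textbook form, not necessarily DOD's exact implementation.
    """
    return math.sqrt(2 * annual_demand * order_cost / unit_holding_cost)

# Hypothetical item: 100 units of war reserve, 2 units/day demand, 30-day lead
# time, 20-unit safety level; an order is triggered at or below 180 units.
print(reorder_point(100, 2, 30, 20))  # 180.0

# Hypothetical item: 1,200 units/year demand, $150 per order, $6/unit-year to hold.
print(round(economic_order_quantity(1200, 150, 6)))  # 245 units per order
```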
According to DLA, DOD’s secondary inventory occupies about 360 million cubic feet of storage space and has an actual volume of about 300 million cubic feet. We obtained computerized inventory data records from DLA and each of the military services and identified secondary inventory items with a volume of 218.8 million cubic feet. Our figure differs from DLA’s 300 million cubic feet because approximately 12 percent of the items on the DLA and service data tapes that we used did not have storage space data. To determine whether there are opportunities for reductions, we analyzed DOD’s secondary inventory as it relates to war reserve and current operational needs and in terms of the years of supply that is on hand on an item basis. Using this data, we visited selected storage activities to examine the condition and reasons for continuing to store items that appeared to be no longer needed. This work showed that DOD has a substantial number of items that (1) have over a 20-year supply beyond the levels needed to meet war reserve and operational needs, (2) are for weapon systems no longer in use, (3) are no longer usable, and (4) are not needed. Our analysis of DOD’s September 30, 1993, Supply System Inventory Report and inventory stratification reports indicates that $36.3 billion of the $77.5 billion secondary inventory that DOD reported exceeded current war reserve and operating requirements. On the basis of our analysis of computerized records, we determined that about 2.2 million different items had a volume of 130.4 million cubic feet. A typical DOD warehouse is approximately 595 feet long and 180 feet deep. DLA officials said that it would take approximately 205 warehouses to store the 130.4 million cubic feet of inventory. Figure 2 shows that inventory by DOD component. DLA estimates that the holding costs for the 130 million cubic feet are approximately $94 million per year, which is less than 1 percent of the inventory value. 
This is low when compared to industry experience, which, according to one study, ranges from 5 to 15 percent. For purchase decisions, some inventory control points use a percentage of the item’s value, which can be as high as 18 to 22 percent of the value. However, DOD believes that the holding costs for items already on hand are considerably less than the 18 to 22 percent. As discussed later, DOD has an effort underway to benchmark its holding costs with private industry (see p. 17). The concern about unnecessary secondary inventory storage is not new. In 1992, we reported that storing unneeded secondary inventory would prevent DLA from realizing savings from depot consolidations. We recommended that DLA reduce this inventory so that fewer depots would be required. To estimate the years of supply for each of the types of items, we divided the on-hand inventory by past or projected demand data. We had demand data for about 488,000 of the 2.2 million items that were not needed to satisfy current war reserves or operating requirements. Those items occupied about 73 percent (95.7 million cubic feet) of the 130.4 million cubic feet of space; 84,000 of the items (41.7 million cubic feet) had more than a 20-year supply. The 1.7 million items that did not have demand data occupied 34.7 million cubic feet of space. In figure 3, we show the years of supply by service. Figure 4 shows the space occupied by these items. To identify items that will likely never be used, we (1) used DLA and service databases to determine the amounts of stock on hand, (2) discussed with item managers the likelihood of these items being used and plans to dispose of them, and (3) visited supply depots to inspect items that had been in storage for an extensive period of time with little or no demand. Some examples of the items we identified follow. 
At the Fleet Industrial Supply Center, Norfolk, Virginia, three pump rotors (costing about $22,000 each) for a ship water pump have remained in storage since 1970. Recently, these items were transferred to DLA for management under the Consumable Item Transfer Program. Under this program, DLA assumes management responsibility for selected consumable items used by more than one service. Because DLA now manages these items, they will not be considered for disposal for at least 2 years due to DLA’s disposal policy. At the same location, 10 bearings ($5,590 each) for a gear assembly on an aircraft carrier had been in storage since 1986. After our discussions with the item manager, the Navy disposed of all 10 of these bearings. Figure 5 shows the bearings in storage. At Warner Robins Air Logistics Center, Warner Robins, Georgia, 79 modular radio transmitters belonging to the Army and valued at approximately $16,000 were in storage. Although 69 of these items are excess, the Air Force had not taken any action to determine whether they were needed by the Army. Air Force officials told us that they planned to contact the Army for disposal authority. Figure 6 shows the modular radio transmitters in storage. At the Defense Construction Supply Center, Columbus, Ohio, we were informed that 65 housings for air cylinders used on an electric generating unit have had no demand in years, and no demand is forecasted for the coming year. The item manager indicated that it is unlikely that all the housings will be used, but they cannot be disposed of until additional information is available concerning possible uses for them. Some items have become obsolete as technology has advanced and weapon systems and equipment have been phased out of the inventory. At the Fleet Industrial Supply Center in Norfolk, Virginia, we located two electric pumps valued at approximately $90,700 (about $45,350 each). Though these pumps were for destroyer class ships no longer in the U.S. 
inventory, they remained in storage. When we questioned this retention decision, the Navy item manager informed us that the pumps were being retained for potential foreign military sales. Despite the absence of U.S. military users, responsibility for their management was transferred to DLA under the Consumable Item Transfer Program. Thus, the electric pumps will be stored for at least 2 years. Figure 7 shows them in storage. DLA also assumed management responsibility for four large distillation units for which there were no known users. The items (costing $72,140 each) have been in storage since 1968 and were used to distill water on Navy ships. According to the Navy, the decision to retain the items was predicated on their high cost. Because of this cost, the Navy chose to research the possible uses of these items before disposing of them. Like other items transferred to DLA, they will not be considered for disposal for at least 2 years. Figure 8 shows the distillation units stored at the Fleet Industrial Supply Center, Norfolk, Virginia. At Warner Robins Air Logistics Center, Warner Robins, Georgia, 4,044 missile control systems (a total cost of approximately $21 million) are being phased out of the inventory. These items have been in storage for many years with no demands. However, subsequent to our visit, the item manager received approval to dispose of them. Also, at Warner Robins Air Logistics Center, three equalizer assemblies costing approximately $75,000 had been in storage for at least 3 years. The assemblies were part of the F-4 aircraft reconnaissance system. Though the items were obsolete to DOD, they were being retained for possible foreign military sales. Figure 9 shows the assemblies in storage. Many items have deteriorated to the point that they are no longer usable. For example, at the Fleet Industrial Supply Center, in Norfolk, Virginia, a hoisting antenna (which cost about $48,500) had been stored outside so long that grass and rust covered it. 
The Navy item manager informed us that the item is no longer usable and will be disposed of. Figure 10 shows the antenna in outside storage. Also, at the Fleet Industrial Supply Center in Norfolk, Virginia, 13 modernization kits for the P-3C aircraft have been in storage since 1978. These kits (which cost about $4,480 each, for a total cost of approximately $58,240) are obsolete. During subsequent discussions, Navy officials indicated that these items will be disposed of. At the Defense Supply Depot, New Cumberland, Pennsylvania, seven obsolete Army clutch assemblies were in storage. They cost approximately $5,334 and were previously used on the M125 10-ton Prime Mover. As a result of our visit, the Army decided to dispose of all seven items. In addition, at the San Antonio Air Logistics Center in San Antonio, Texas, two maintenance antennae valued at approximately $230,000 each had been in storage for at least 5 years. Though these items were in need of repair, both were being retained, and the Air Force had no plans to dispose of them. The item manager informed us that the items would have to be researched to determine any possible users before any disposal action could be taken, but as of November 30, 1994, the item manager had not initiated this action. Figure 11 shows the maintenance antennae in storage. In 1990, we reported on 57 Navy items that we identified as candidates for disposal that had little or no potential for future use. During that review, we sampled 100 items that had unneeded inventory and identified 57 items that had one or more of the following characteristics: (1) no active users, (2) no demands in the previous 2 years, and (3) no demands forecasted. When we followed up on these items in 1994, we found that of the 57 items that were on hand in 1990, 32 were still in the inventory. The Navy still manages 26 of these items, which have approximately $2.7 million in stock exceeding the reorder point and replenishment formula. 
The other six had been transferred to DLA. Six of the items still under Navy management had demand forecasted for the following year. Four of these had excessive stock on hand, ranging from 6 to more than 20 years of supply. DOD has implemented several programs—some DOD-wide and others service-specific—to reduce secondary inventory. Over the last 3 years, DOD disposals have amounted to about $43.4 billion. (See table 1.) One reason more progress has not been made is that incentives for disposing of secondary items have been lacking. In 1992, DOD consolidated its industrial and stock funds into the Defense Business Operations Fund. DOD was partly motivated to consolidate the funds in order to improve the visibility of storage costs. However, neither the inventory control points nor the weapon system program managers have an incentive to reduce storage costs. The service unit (customer) that requests and uses the inventory pays for the cost of storage because the cost is included in the price charged to the customer. For fiscal year 1996, DLA plans to begin charging inventory control points for storing the material they manage. Although rates will vary by type of commodity and storage, the rate for covered storage (which applies to most secondary items) will be $5.15 a square foot. This charge should be an incentive for item managers to dispose of material that is not needed. In addition, DOD has initiated a study to determine its inventory holding costs. As part of this study, DOD will compare its holding costs with those of private industry. In commenting on a draft of this report, DOD said that it had no preconceptions as to what impact, if any, the project would have on retention or disposal decisions. The project is scheduled for completion in the spring of 1995. Furthermore, DLA, as the manager of DOD’s depot system, has worked with DOD to develop strategic plans for reducing DOD’s storage capacity as secondary item inventories are reduced. 
DLA officials told us that a number of contributing factors, including Base Closure and Realignment Commission actions and its own efforts, have resulted in storage facilities being vacated and substantial reductions in storage requirements during the past 2 fiscal years. DLA projects that DOD’s secondary inventory will be reduced to approximately $54 billion by 2001 and that its total requirement for covered space will be reduced to approximately 400 million cubic feet. According to DLA officials, these reductions take into account additional requirements generated as a result of units returning secondary items from Europe, as well as moving items currently stored outside into covered storage. We believe that DOD’s efforts are a good start and that continued emphasis should be placed on getting rid of inventory that is not needed. Therefore, we recommend that the Secretary of Defense develop a systematic approach for reviewing the secondary inventory currently on hand. The Secretary could begin by instructing inventory control points and program managers to focus their inventory reduction efforts on the material that occupies a great deal of storage space and has more than 20 years of supply on hand. In commenting on a draft of this report (see app. I), DOD said that it generally agrees that inventories should be reduced and excess storage capacity should be eliminated. DOD partially agreed with our findings and recommendations. While DOD agrees that it holds secondary inventory that will probably never be used and should be disposed of, it does not agree with the criteria we used for assessing the potential for reducing the amount of inventory it currently holds. Our analysis focused on the stock that exceeded the war reserve and current operating requirements. We believe this is a logical starting point and our report points out that we are not suggesting that DOD dispose of all stock that exceeds that level. 
Rather, we point out that DOD should focus its reduction efforts on stock that occupies a great deal of space and has more than 20 years of supply on hand. DOD expressed concern that the implication of using our criteria would be that this material should be disposed of and the related warehouse space eliminated. It also pointed out that our criteria are used for ordering stock, not for making decisions concerning whether to retain it. However, in its 1993 material management regulation, DOD used these same criteria as the maximum quantity of material to be maintained on hand or on order to sustain current operations and core war reserves. DOD stated that in hindsight it would not order much of the stock it has on hand, but wants to be careful not to dispose of any stock that might be needed in the future. DOD stated that it might already have disposed of much of the material we discuss in our report. We acknowledge that some of this material might have been disposed of while our review was ongoing. However, we do not believe that DOD had the opportunity to dispose of most of this material. We obtained the computerized records on which we based our analysis from DLA and the services as they were available. The tapes for DLA, for example, were not obtained until August 1994, and therefore, DLA would have had limited opportunity to dispose of DLA material we identified. DOD partially concurred with our recommendation that DOD develop a systematic approach for reducing inventories. DOD emphasized that it already has in place a systematic approach to reducing inventory and is tracking its progress toward meeting established goals. DOD agreed that the number of storage locations should be reduced, but stated that the depot system is already being downsized. DOD indicated that its requirement for covered storage space had been reduced more than 180 million cubic feet, or 28 percent, between September 1992 and September 1994. 
In the draft of this report submitted to DOD for comment, we included a recommendation for the Secretary to consider the significant amount of inventory that exceeds current requirements when determining the number of depots to close or consolidate in the 1995 base closure and realignment process. Because the Secretary's recommendations to close and realign bases have since been made, we deleted this recommendation from our final report. We conducted our work between January 1993 and September 1994 in accordance with generally accepted government auditing standards. (See the description of our scope and methodology in app. II.) Unless you publicly announce this report's contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Chairmen, Senate Committee on Armed Services, Senate and House Committees on Appropriations, and House Committee on National Security; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Directors of the Defense Logistics Agency and the Office of Management and Budget. We will also make copies available to others upon request. If you have any questions, I may be reached at (202) 512-8412. Major contributors to this report are listed in appendix III. The following are GAO's comments on the Department of Defense's (DOD) letter dated March 23, 1995. 1. The points raised in DOD's transmittal letter are addressed in the section of this report entitled "Agency Comments and Our Evaluation." 2. In applying the criteria we selected for assessing DOD's use of warehouse space, we are not suggesting that all the material we identified as exceeding current war reserve and operating requirements needs to be disposed of. As we stated in our report, many of these items may have potential future use and should be retained. 3. We agree that a certain amount of uncertainty is associated with projecting spare parts usage.
DOD stocks insurance items to account for accidents, abnormal equipment or system failures, and other unexpected demands. The requirements for these items are included in the operating stocks that we excluded from our analyses. 4. We believe that DOD's comment supports our position: even after disposing of excess stock, the supply system was able to satisfy customer demand. 5. DOD commented that during hostilities, items (particularly insurance items) with more than 100 years of supply can very quickly become exhausted. Our analysis considered only items with demand; insurance items, because they had no demand, were excluded. With respect to the noninsurance items with more than 100 years of supply, it is unlikely that all of these quantities will be used. We agree, however, that DOD should focus not only on the number of years of supply on hand but also on the space that the items occupy. 6. DOD commented that the Defense Logistics Agency's (DLA) inventory managers are authorized, with approval from the losing service, to dispose of stocks transferred to DLA by the services sooner than 2 years. However, DLA's item managers informed us that they do not consider disposing of such material for 2 years. 7. We reported in August 1994 that DOD's reported inventory values decreased by $31.9 billion between fiscal years 1989 and 1993, from $109.4 billion to $77.5 billion. However, because of accounting changes, the values were not comparable. When the inventory was valued on a comparable basis, we estimated that the total reduction was $11.2 billion, not $31.9 billion. We believe that, with appropriate incentives, there are further opportunities for inventory reductions. 8. We agree with DOD that, to date, the major incentive to reduce inventory has been imposed externally by the Congress in the form of budget reductions.
We believe that internal incentives, such as DOD's planned practice of charging storage costs to the organizations that cause inventory to be stored, should be effective in reducing unneeded inventory. 9. We believe that DOD is capable of further inventory reductions. The statement that inventory disposals have been insufficient to offset increases in material returns came from DOD officials. Because DOD took exception to the statement, we removed it from the report. 10. DOD stated that it holds inventory that will likely never be used. In view of the number of items with more than 20 years of supply, we believe that it is unlikely that much of this inventory would have to be repurchased if DOD systematically reviewed and disposed of material for which it forecasts no need. We visited the following sites to review policies, procedures, and documents related to retaining and disposing of inventory.

Headquarters: the Office of the Deputy Under Secretary of Defense for Logistics; the Army, the Navy, and the Air Force headquarters, Washington, D.C.; and the Defense Logistics Agency, Alexandria, Virginia.

Inventory commands: the Army Materiel Command, Alexandria, Virginia; the Naval Supply Systems Command, Washington, D.C.; the Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; and the Defense Logistics Services Center, Battle Creek, Michigan.

Inventory control points: Army—the Tank-Automotive Command, Warren, Michigan; Navy—the Aviation Supply Office, Philadelphia, Pennsylvania, and the Ships Parts Control Center, Mechanicsburg, Pennsylvania; Air Force—the Ogden Air Logistics Center, Ogden, Utah; the Oklahoma City Air Logistics Center, Oklahoma City, Oklahoma; the San Antonio Air Logistics Center, San Antonio, Texas; and the Warner Robins Air Logistics Center, Warner Robins, Georgia; and DLA—the Defense Construction Supply Center, Columbus, Ohio.
Depots: the Naval Fleet Industrial Supply Center, Norfolk, Virginia; the Air Logistics Centers at Tinker Air Force Base, Oklahoma; Warner Robins Air Force Base, Georgia; Kelly Air Force Base, Texas; and Hill Air Force Base, Utah; and the DOD Supply Depot, Columbus, Ohio.

In conducting our work, we used the same computer files, records, and reports that DOD uses to make stocking decisions for secondary items. We did not independently determine the reliability of these sources. To determine the extent of inventory not needed to satisfy current war reserve and operating requirements, we analyzed computerized files of DLA and service inventories between March 31, 1993, and August 31, 1994. Specifically, we compared, on an item-by-item basis, the on-hand inventory needed to satisfy war reserve and current operating requirements with the total inventory on hand. To determine why inventory was being retained and whether retention was justified, we selected a sample of approximately 150 line items from computerized inventory records at the inventory control points we visited. At the inventory control points, we reviewed inventory records and interviewed officials to identify the reasons for retaining inventory. To determine the space required to store items beyond current war reserve and operating requirements, we matched DLA and service inventory files, by national stock number, with the cube (volume) information DOD provided. Approximately 12 percent of the items analyzed had no cube data in the DLA or service computer records and were assigned a cube size of zero; this understated our calculation of the space occupied by secondary inventory. When we visited the depots, we observed selected items to determine the accuracy of the cube data in DOD's databases and found the data to be relatively accurate.
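The matching step described above can be sketched roughly as follows. The function and field names are assumed for illustration; the actual DLA and service file layouts differ. Consistent with our methodology, items with no cube record are assigned a cube of zero, which understates the computed total.

```python
# Sketch of matching excess inventory quantities to item cube (volume)
# data by national stock number (NSN). Missing cube data is treated as
# zero, understating the total space estimate.

def total_excess_cube(excess_by_nsn, cube_by_nsn):
    """Return (total cubic feet occupied by excess stock,
    count of items lacking cube data)."""
    total = 0.0
    missing = 0
    for nsn, quantity in excess_by_nsn.items():
        unit_cube = cube_by_nsn.get(nsn)
        if unit_cube is None:
            missing += 1
            unit_cube = 0.0  # no cube record: assign a cube size of zero
        total += quantity * unit_cube
    return total, missing
```

Tracking the count of unmatched items, as this sketch does, makes it possible to report how much of the sample (about 12 percent in our analysis) contributed nothing to the space estimate.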
To compute years of supply for the Army, the Navy, and the Air Force, we used DOD's computerized inventory records to determine, on an item-by-item basis, the amount of inventory that was not needed to satisfy war reserve and operating requirements. We divided that inventory by projected annual demands to determine how many years it would take to use the inventory. We excluded items without projected demands from this analysis, thereby avoiding computing years of supply for insurance items. Because projected demands were not available for DLA items, we used historical demands instead to compute years of supply, and we excluded items that had no historical demand data.

Organizational Culture: Use of Training to Help Change DOD Inventory Management Culture (GAO/NSIAD-94-207, Aug. 30, 1994).
Army Inventory: Unfilled War Reserve Requirements Could Be Met With Items From Other Inventory (GAO/NSIAD-94-207, Aug. 25, 1994).
Defense Inventory: Changes in DOD's Inventory, 1989-93 (GAO/NSIAD-94-235, Aug. 17, 1994).
Navy Supply: Improved Material Management Can Reduce Shipyard Costs (GAO/NSIAD-94-181, July 27, 1994).
Commercial Practices: DOD Could Reduce Electronics Inventories by Using Private Sector Techniques (GAO/NSIAD-94-129, June 29, 1994).
Army Inventory: Changes to Stock Funding Repairables Would Save Operations and Maintenance Funds (GAO/NSIAD-94-131, May 31, 1994).
Defense Management Initiatives: Limited Progress in Implementing Management Improvement Initiatives (GAO/AIMD-94-105, Apr. 14, 1994).
Commercial Practices: Leading-Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994).
Defense Inventory: Changes in DOD's Inventory Reporting, 1989-92 (GAO/NSIAD-94-112, Feb. 10, 1994).
Defense Inventory: More Accurate Reporting Categories Are Needed (GAO/NSIAD-93-31, Aug. 12, 1993).
Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-110, June 4, 1993).
Army Inventory: Current Operating and War Reserve Requirements Can Be Reduced (GAO/NSIAD-93-119, Apr. 14, 1993).
Defense Logistics Agency: Why Retention of Unneeded Supplies Persists (GAO/NSIAD-93-29, Nov. 4, 1992).
Army Inventory: Divisions' Authorized Levels of Demand-Based Items Can Be Reduced (GAO/NSIAD-93-09, Oct. 20, 1992).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) inventory management system, focusing on the: (1) size and space occupied by DOD secondary inventory; (2) cost of storing this inventory; and (3) efforts taken to reduce DOD secondary inventory. GAO found that: (1) DOD secondary inventory occupies about 218.8 million cubic feet; (2) 60 percent of the secondary inventory is not needed to satisfy current war reserve or operating requirements; however, many items may have potential future use; (3) the inventory not currently needed consists of 2.2 million different types of items; (4) DOD has more than a 20-year supply of some items, but many others have deteriorated or become obsolete; (5) DOD should get rid of unneeded items that occupy storage space and have more than 20 years of supply on hand; (6) although DOD has begun programs to reduce the secondary inventory, its efforts have been partially offset by decreasing inventory demands and increasing returns of materials by deactivated forces; (7) DOD disposed of secondary inventory items valued at $43 billion during the past 3 fiscal years; and (8) the Defense Logistics Agency is implementing a pricing procedure that should increase inventory managers' incentives for disposing of unneeded items.